WorldWideScience

Sample records for supercomputer center ucsd

  1. Supercomputing Centers and Electricity Service Providers

    DEFF Research Database (Denmark)

    Patki, Tapasya; Bates, Natalie; Ghatikar, Girish

    2016-01-01

    Supercomputing Centers (SCs) have high and variable power demands, which increase the challenges of the Electricity Service Providers (ESPs) with regard to efficient electricity distribution and reliable grid operation. High penetration of renewable energy generation further exacerbates this problem. ... from a detailed, quantitative survey-based analysis and compare the perspectives of the European grid and SCs to those of the United States (US). We then show that, contrary to expectation, SCs in the US are more open toward cooperating and developing demand-management strategies with their ESPs ... (LRZ). We conclude that perspectives on demand management are dependent on the electricity market and pricing in the geographical region and on the degree of control that a particular SC has in terms of power-purchase negotiation.

  2. Facilitating faculty success: outcomes and cost benefit of the UCSD National Center of Leadership in Academic Medicine.

    Science.gov (United States)

    Wingard, Deborah L; Garman, Karen A; Reznik, Vivian

    2004-10-01

    In 1998, the University of California San Diego (UCSD) was selected as one of four National Centers of Leadership in Academic Medicine (NCLAM) to develop a structured mentoring program for junior faculty. Participants were surveyed at the beginning and end of the seven-month program, and one to four years afterward. The institution provided financial information. Four primary outcomes associated with participation in NCLAM were assessed: whether participants stayed at UCSD, whether they stayed in academic medicine, improved confidence in skills, and cost-effectiveness. Among 67 participants, 85% remained at UCSD and 93% in academic medicine. Their confidence in skills needed for academic success improved: 53% in personal leadership, 19% in research, 33% in teaching, and 76% in administration. Given the improved retention rates, the savings in recruitment were greater than the cost of the program. Structured mentoring can be a cost-effective way to improve skills needed for academic success and retention in academic medicine.

  3. Dust modelling and forecasting in the Barcelona Supercomputing Center: Activities and developments

    Energy Technology Data Exchange (ETDEWEB)

    Perez, C; Baldasano, J M; Jimenez-Guerrero, P; Jorba, O; Haustein, K; Basart, S [Earth Sciences Department, Barcelona Supercomputing Center, Barcelona (Spain)]; Cuevas, E [Izaña Atmospheric Research Center, Agencia Estatal de Meteorologia, Tenerife (Spain)]; Nickovic, S [Atmospheric Research and Environment Branch, World Meteorological Organization, Geneva (Switzerland)], E-mail: carlos.perez@bsc.es

    2009-03-01

    The Barcelona Supercomputing Center (BSC) is the National Supercomputer Facility in Spain, hosting MareNostrum, one of the most powerful Supercomputers in Europe. The Earth Sciences Department of BSC operates daily regional dust and air quality forecasts and conducts intensive modelling research for short-term operational prediction. This contribution summarizes the latest developments and current activities in the field of sand and dust storm modelling and forecasting.

  4. Visualization at Supercomputing Centers: The Tale of Little Big Iron and the Three Skinny Guys

    Energy Technology Data Exchange (ETDEWEB)

    Bethel, E. Wes; van Rosendale, John; Southard, Dale; Gaither, Kelly; Childs, Hank; Brugger, Eric; Ahern, Sean

    2010-12-01

    Supercomputing Centers (SCs) are unique resources that aim to enable scientific knowledge discovery through the use of large computational resources, the Big Iron. Design, acquisition, installation, and management of the Big Iron are activities that are carefully planned and monitored. Since these Big Iron systems produce a tsunami of data, it is natural to co-locate visualization and analysis infrastructure as part of the same facility. This infrastructure consists of hardware (Little Iron) and staff (Skinny Guys). Our collective experience suggests that design, acquisition, installation, and management of the Little Iron and Skinny Guys do not receive the same level of treatment as that of the Big Iron. The main focus of this article is to explore different aspects of planning, designing, fielding, and maintaining the visualization and analysis infrastructure at supercomputing centers. Some of the questions we explore in this article include: "How should the Little Iron be sized to adequately support visualization and analysis of data coming off the Big Iron?" and "What sort of capabilities does it need to have?" Related questions concern the size of the visualization support staff: "How big should a visualization program be (number of persons) and what should the staff do?" and "How much of the visualization should be provided as a support service, and how much should applications scientists be expected to do on their own?"

  5. Assessment techniques for a learning-centered curriculum: evaluation design for adventures in supercomputing

    Energy Technology Data Exchange (ETDEWEB)

    Helland, B. [Ames Lab., IA (United States); Summers, B.G. [Oak Ridge National Lab., TN (United States)

    1996-09-01

    As the classroom paradigm shifts from being teacher-centered to being learner-centered, student assessments are evolving from typical paper-and-pencil testing to other methods of evaluation. Students should be probed for understanding, reasoning, and critical thinking abilities rather than their ability to return memorized facts. The assessment of the Department of Energy's pilot program, Adventures in Supercomputing (AiS), offers one example of assessment techniques developed for learner-centered curricula. This assessment has employed a variety of methods to collect student data. Methods of assessment used were traditional testing, performance testing, interviews, short questionnaires via email, and student presentations of projects. The data obtained from these sources have been analyzed by a professional assessment team at the Center for Children and Technology. The results have been used to improve the AiS curriculum and establish the quality of the overall AiS program. This paper will discuss the various methods of assessment used and the results.

  6. KAUST Supercomputing Laboratory

    KAUST Repository

    Bailey, April Renee

    2011-11-15

    KAUST has partnered with IBM to establish a Supercomputing Research Center. KAUST is hosting the Shaheen supercomputer, named after the Arabian falcon famed for its swiftness of flight. This 16-rack IBM Blue Gene/P system is equipped with 4 gigabytes of memory per node and is capable of 222 teraflops, making the KAUST campus the site of one of the world's fastest supercomputers in an academic environment. KAUST is targeting petaflop capability within 3 years.

  7. Research center Juelich to install Germany's most powerful supercomputer new IBM System for science and research will achieve 5.8 trillion computations per second

    CERN Multimedia

    2002-01-01

    "The Research Center Juelich, Germany, and IBM today announced that they have signed a contract for the delivery and installation of a new IBM supercomputer at the Central Institute for Applied Mathematics" (1/2 page).

  8. NSF Commits to Supercomputers.

    Science.gov (United States)

    Waldrop, M. Mitchell

    1985-01-01

    The National Science Foundation (NSF) has allocated at least $200 million over the next five years to support four new supercomputer centers. Issues and trends related to this NSF initiative are examined. (JN)

  9. Astrobiological Studies Plan at UCSD and the University of Buckingham

    Science.gov (United States)

    Gibson, Carl H.; Wickramasinghe, N. Chandra

    2011-10-01

    A UC-HBCU grant is requested to assist undergraduate and masters-level HBCU interns in achieving their professional and academic goals by attending summer school classes at UCSD along with graduate students in the UCSD Astrobiology Studies program, and by attending a NASA-sponsored scientific meeting on astrobiology in San Diego organized by NASA scientist Richard Hoover (the 14th in a sequence). Hoover has recently published a paper in the Journal of Cosmology claiming extraterrestrial life fossils in three meteorites. Students will attend a workshop to prepare research publications on astrobiological science for the Journal of Cosmology or an equivalent refereed journal, mentored by UCSD faculty and graduate students as co-authors and referees, all committed to the several months of communication usually required to complete a publishable paper. The program is intended to provide pathways to graduate admissions in a broad range of science and engineering fields, and exposure to the fundamental science and engineering disciplines needed by astrobiologists. A three-year UC-HBCU Astrobiological Studies program is proposed: 2011, 2012 and 2013. Interns would be eligible to enter this program when they become advanced graduate students. A center of excellence in astrobiology is planned for UCSD, similar to the one directed for many years by Professor Wickramasinghe with Fred Hoyle at Cardiff University, http://www.astrobiology.cf.ac.uk/chandra1.html. Professor Wickramasinghe's CV is attached as Appendix 1. Figures A2-1 and A2-2 of Appendix 2 compare astrobiology timelines of the modern fluid mechanical and astrobiological models of Gibson/Wickramasinghe/Schild in the Journal of Cosmology with standard NASA-CDMHC models. NASA support will be sought for the research and educational aspects of both initiatives. Overload teaching by UCSD faculty of up to two key astrobiology courses a year, at either UCSD or HBCU campuses, is authorized by recent guidelines of UCSD

  10. The PFPF program at UCSD

    Science.gov (United States)

    Streichler, Rosalind

    2000-04-01

    The PFPF program at UCSD is designed to provide an opportunity for graduate students in Physics to explore many of the issues involved in college and university teaching and to develop the competencies required of effective college instructors. In addition to developing experience and expertise in college teaching, participants will be introduced to a wide range of faculty responsibilities. This approach to the training of college instructors is based on classroom experience and feedback from experienced teachers in a variety of institutional settings, supplemented by research and theories of how people learn and process information. Activities include designing and developing the curriculum for a course the participant is planning to teach, implementing that curriculum by offering the course with a mentor Physics faculty member at a participating institution, and beginning the long-term professional development required of new faculty.

  11. Supercomputational science

    CERN Document Server

    Wilson, S

    1990-01-01

    In contemporary research, the supercomputer now ranks, along with radio telescopes, particle accelerators and the other apparatus of "big science", as an expensive resource, which is nevertheless essential for state-of-the-art research. Supercomputers are usually provided as shared central facilities. However, unlike telescopes and accelerators, they find a wide range of applications which extends across a broad spectrum of research activity. The difference in performance between a "good" and a "bad" computer program on a traditional serial computer may be a factor of two or three, but on a contemporary supercomputer it can easily be a factor of one hundred or even more! Furthermore, this factor is likely to increase with future generations of machines. In keeping with the large capital and recurrent costs of these machines, it is appropriate to devote effort to training and familiarization so that supercomputers are employed to best effect. This volume records the lectures delivered at a Summer School ...

  12. Brief Exploration on Technical Development of Key Applications at Supercomputing Center

    Institute of Scientific and Technical Information of China (English)

    党岗; 程志全

    2013-01-01

    At present, most of China's national supercomputing centers follow a construction model of local-government investment with market-oriented applications. Local governments are more concerned with high-performance computing applications and services for local enterprises and institutions, so the centers are often used for ordinary applications, and it is difficult for them to fully realize the strategic role of supercomputing. How to keep these exceptionally capable "aircraft carriers" viable, enable them to advance further, and drive technological innovation has long been a topic of study for practitioners in the field. This paper offers a preliminary discussion of the challenges facing the key applications of domestic supercomputing centers and puts forward several suggestions for building local services around those key applications.

  13. An assessment of worldwide supercomputer usage

    Energy Technology Data Exchange (ETDEWEB)

    Wasserman, H.J.; Simmons, M.L.; Hayes, A.H.

    1995-01-01

    This report provides a comparative study of advanced supercomputing usage in Japan and the United States as of Spring 1994. It is based on the findings of a group of US scientists whose careers have centered on programming, evaluating, and designing high-performance supercomputers for over ten years. The report is a follow-on to an assessment of supercomputing technology in Europe and Japan that was published in 1993. Whereas the previous study focused on supercomputer manufacturing capabilities, the primary focus of the current work was to compare where and how supercomputers are used. Research for this report was conducted through both literature studies and field research in Japan.

  14. TOP500 Supercomputers for November 2003

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2003-11-16

    22nd Edition of TOP500 List of World's Fastest Supercomputers Released. MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a much-anticipated event in the world of high-performance computing, the 22nd edition of the TOP500 list of the world's fastest supercomputers was released today (November 16, 2003). The Earth Simulator supercomputer retains the number one position with its Linpack benchmark performance of 35.86 Tflop/s ("teraflops" or trillions of calculations per second). It was built by NEC and installed last year at the Earth Simulator Center in Yokohama, Japan.

  15. Emerging supercomputer architectures

    Energy Technology Data Exchange (ETDEWEB)

    Messina, P.C.

    1987-01-01

    This paper will examine the current and near future trends for commercially available high-performance computers with architectures that differ from the mainstream ''supercomputer'' systems in use for the last few years. These emerging supercomputer architectures are just beginning to have an impact on the field of high performance computing. 7 refs., 1 tab.

  16. A training program for scientific supercomputing users

    Energy Technology Data Exchange (ETDEWEB)

    Hanson, F.; Moher, T.; Sabelli, N.; Solem, A.

    1988-01-01

    There is a need for a mechanism to transfer supercomputing technology into the hands of scientists and engineers in such a way that they will acquire a foundation of knowledge that will permit integration of supercomputing as a tool in their research. Most computing center training emphasizes computer-specific information about how to use a particular computer system; most academic programs teach concepts to computer scientists. Only a few brief courses and new programs are designed for computational scientists. This paper describes an eleven-week training program aimed principally at graduate and postdoctoral students in computationally intensive fields. The program is designed to balance the specificity of computing center courses, the abstractness of computer science courses, and the personal contact of traditional apprentice approaches. It is based on the experience of computer scientists and computational scientists, and consists of seminars and clinics given by many visiting and local faculty. It covers a variety of supercomputing concepts, issues, and practices related to architecture, operating systems, software design, numerical considerations, code optimization, graphics, communications, and networks. Its research component encourages understanding of scientific computing and supercomputer hardware issues. Flexibility in thinking about computing needs is emphasized by the use of several different supercomputer architectures, such as the Cray X-MP/48 at the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign, the IBM 3090 600E/VF at the Cornell National Supercomputer Facility, and the Alliant FX/8 at the Advanced Computing Research Facility at Argonne National Laboratory. 11 refs., 6 tabs.

  17. Research of Customer Segmentation and Differentiated Services in Supercomputing Center

    Institute of Scientific and Technical Information of China (English)

    赵芸卿

    2013-01-01

    Taking the customer-service work of the Supercomputing Center of the Computer Network Information Center, Chinese Academy of Sciences (hereafter the Supercomputing Center) as its subject, this paper applies the K-means algorithm to the customers' supercomputer-rental records in order to segment them into groups, and puts forward differentiated service strategies for each customer group accordingly. Implementing these differentiated service strategies allows supercomputer resources to be allocated to the groups more effectively and customer services to be delivered more conveniently.
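
    To make the clustering step concrete, here is a minimal sketch of K-means customer segmentation. It is not the paper's code; the usage features and numbers are hypothetical, and scikit-learn stands in for whatever implementation the authors used.

    # Hypothetical example of segmenting supercomputer customers with K-means.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.preprocessing import StandardScaler

    # Assumed per-customer features: [core-hours/month, jobs/month, average job size in cores]
    usage = np.array([
        [120000, 300, 512],
        [2000,    40,  16],
        [450000, 900, 2048],
        [1500,    25,   8],
        [95000,  150, 256],
    ])

    X = StandardScaler().fit_transform(usage)    # put features on a common scale
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
    print(labels)   # cluster index per customer; each cluster gets its own service strategy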

  18. Energy sciences supercomputing 1990

    Energy Technology Data Exchange (ETDEWEB)

    Mirin, A.A.; Kaiper, G.V. (eds.)

    1990-01-01

    This report contains papers on the following topics: meeting the computational challenge; lattice gauge theory: probing the standard model; supercomputing for the superconducting super collider; an overview of ongoing studies in climate model diagnosis and intercomparison; MHD simulation of the fueling of a tokamak fusion reactor through the injection of compact toroids; gyrokinetic particle simulation of tokamak plasmas; analyzing chaos: a visual essay in nonlinear dynamics; supercomputing and research in theoretical chemistry; Monte Carlo simulations of light nuclei; parallel processing; and scientists of the future: learning by doing.

  19. TOP500 Supercomputers for June 2003

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2003-06-23

    21st Edition of TOP500 List of World's Fastest Supercomputers Released. MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a much-anticipated event in the world of high-performance computing, the 21st edition of the TOP500 list of the world's fastest supercomputers was released today (June 23, 2003). The Earth Simulator supercomputer, built by NEC and installed last year at the Earth Simulator Center in Yokohama, Japan, retains the number one position with its Linpack benchmark performance of 35.86 Tflop/s (teraflops, or trillions of calculations per second). The number 2 position is held by the re-measured ASCI Q system at Los Alamos National Laboratory. With 13.88 Tflop/s, it is the second system ever to exceed the 10 Tflop/s mark. ASCI Q was built by Hewlett-Packard and is based on the AlphaServer SC computer system.

  20. TOP500 Supercomputers for June 2002

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2002-06-20

    19th Edition of TOP500 List of World's Fastest Supercomputers Released. MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a much-anticipated event in the world of high-performance computing, the 19th edition of the TOP500 list of the world's fastest supercomputers was released today (June 20, 2002). The recently installed Earth Simulator supercomputer at the Earth Simulator Center in Yokohama, Japan, is, as expected, the clear new number 1. Its performance of 35.86 Tflop/s (trillions of calculations per second) running the Linpack benchmark is almost five times higher than the performance of the now-No. 2 IBM ASCI White system at Lawrence Livermore National Laboratory (7.2 Tflop/s). This powerful leapfrogging to the top by a system so much faster than the previous top system is unparalleled in the history of the TOP500.

  1. TOP500 Supercomputers for November 2002

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2002-11-15

    20th Edition of TOP500 List of World's Fastest Supercomputers Released. MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a much-anticipated event in the world of high-performance computing, the 20th edition of the TOP500 list of the world's fastest supercomputers was released today (November 15, 2002). The Earth Simulator supercomputer, installed earlier this year at the Earth Simulator Center in Yokohama, Japan, retains the number one position with its Linpack benchmark performance of 35.86 Tflop/s (trillions of calculations per second). The No. 2 and No. 3 positions are held by two new, identical ASCI Q systems at Los Alamos National Laboratory (7.73 Tflop/s each). These systems were built by Hewlett-Packard and are based on the AlphaServer SC computer system.

  2. Supercomputers to transform Science

    CERN Multimedia

    2006-01-01

    "New insights into the structure of space and time, climate modeling, and the design of novel drugs, are but a few of the many research areas that will be transforned by the installation of three supercomputers at the Unversity of Bristol." (1/2 page)

  3. Petaflop supercomputers of China

    Institute of Scientific and Technical Information of China (English)

    Guoliang CHEN

    2010-01-01

    After ten years of development, high-performance computing (HPC) in China has made remarkable progress. In November 2010, the NUDT Tianhe-1A and the Dawning Nebulae respectively claimed the 1st and 3rd places on the TOP500 list of supercomputers, an international recognition of the level that China has achieved in high-performance computer manufacturing.

  4. Introduction to Reconfigurable Supercomputing

    CERN Document Server

    Lanzagorta, Marco; Rosenberg, Robert

    2010-01-01

    This book covers technologies, applications, tools, languages, procedures, advantages, and disadvantages of reconfigurable supercomputing using Field Programmable Gate Arrays (FPGAs). The target audience is the community of users of High Performance Computers (HPC) who may benefit from porting their applications into a reconfigurable environment. As such, this book is intended to guide the HPC user through the many algorithmic considerations, hardware alternatives, usability issues, programming languages, and design tools that need to be understood before embarking on the creation of reconfigurable ...

  5. Adventures in Supercomputing: An innovative program

    Energy Technology Data Exchange (ETDEWEB)

    Summers, B.G.; Hicks, H.R.; Oliver, C.E.

    1995-06-01

    Within the realm of education, seldom does an innovative program become available with the potential to change an educator's teaching methodology and serve as a spur to systemic reform. The Adventures in Supercomputing (AiS) program, sponsored by the Department of Energy, is such a program. Adventures in Supercomputing is a program for high school and middle school teachers. It has helped to change the teaching paradigm of many of the teachers involved in the program from a teacher-centered classroom to a student-centered classroom. "A student-centered classroom offers better opportunities for development of internal motivation, planning skills, goal setting and perseverance than does the traditional teacher-directed mode." Not only is the process of teaching changed, but evidence of systemic reform is beginning to surface. After describing the program, the authors discuss the teaching strategies being used and the evidence of systemic change in many of the AiS schools in Tennessee.

  6. Enabling department-scale supercomputing

    Energy Technology Data Exchange (ETDEWEB)

    Greenberg, D.S.; Hart, W.E.; Phillips, C.A.

    1997-11-01

    The Department of Energy (DOE) national laboratories have one of the longest and most consistent histories of supercomputer use. The authors summarize the architecture of DOE's new supercomputers that are being built for the Accelerated Strategic Computing Initiative (ASCI). The authors then argue that in the near future scaled-down versions of these supercomputers with petaflop-per-weekend capabilities could become widely available to hundreds of research and engineering departments. The availability of such computational resources will allow simulation of physical phenomena to become a full-fledged third branch of scientific exploration, along with theory and experimentation. They describe the ASCI and other supercomputer applications at Sandia National Laboratories, and discuss which lessons learned from Sandia's long history of supercomputing can be applied in this new setting.

  7. Ultrascalable petaflop parallel supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Blumrich, Matthias A. (Ridgefield, CT); Chen, Dong (Croton On Hudson, NY); Chiu, George (Cross River, NY); Cipolla, Thomas M. (Katonah, NY); Coteus, Paul W. (Yorktown Heights, NY); Gara, Alan G. (Mount Kisco, NY); Giampapa, Mark E. (Irvington, NY); Hall, Shawn (Pleasantville, NY); Haring, Rudolf A. (Cortlandt Manor, NY); Heidelberger, Philip (Cortlandt Manor, NY); Kopcsay, Gerard V. (Yorktown Heights, NY); Ohmacht, Martin (Yorktown Heights, NY); Salapura, Valentina (Chappaqua, NY); Sugavanam, Krishnan (Mahopac, NY); Takken, Todd (Brewster, NY)

    2010-07-20

    A massively parallel supercomputer of petaOPS-scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC) having up to four processing elements. The ASIC nodes are interconnected by multiple independent networks that optimally maximize the throughput of packet communications between nodes with minimal latency. The multiple networks may include three high-speed networks for parallel algorithm message passing including a Torus, collective network, and a Global Asynchronous network that provides global barrier and notification functions. These multiple independent networks may be collaboratively or independently utilized according to the needs or phases of an algorithm for optimizing algorithm processing performance. The use of a DMA engine is provided to facilitate message passing among the nodes without the expenditure of processing resources at the node.

  8. Supercomputer debugging workshop 1991 proceedings

    Energy Technology Data Exchange (ETDEWEB)

    Brown, J.

    1991-01-01

    This report discusses the following topics on supercomputer debugging: distributed debugging; user interface to debugging tools and standards; debugging optimized codes; debugging parallel codes; and debugger performance and interface as analysis tools. (LSP)

  9. Supercomputer debugging workshop 1991 proceedings

    Energy Technology Data Exchange (ETDEWEB)

    Brown, J.

    1991-12-31

    This report discusses the following topics on supercomputer debugging: distributed debugging; user interface to debugging tools and standards; debugging optimized codes; debugging parallel codes; and debugger performance and interface as analysis tools. (LSP)

  10. UCSD geothermal chemistry program; Annual progress report, FY 1989

    Energy Technology Data Exchange (ETDEWEB)

    Weare, J.H.

    1989-10-01

    The development of a geothermal resource requires a considerable financial commitment. As in other energy extraction ventures, the security of this investment can be jeopardized by the uncertain behavior of the resource under operating conditions. Many of the most significant problems limiting the development of geothermal power are related to the chemical properties of the high-temperature and highly pressured formation fluids from which the energy is extracted. When the pressure and temperature conditions on these fluids are changed, either during the production phase (pressure changes) or during the extraction phase (temperature changes) of the operation, the fluids, which were originally in equilibrium, can respond to the new conditions by precipitation of solid materials (scales) or release of dissolved gases (some toxic) in the formation and well bores or in the plant equipment. Unfortunately, predicting the behavior of the production fluids is difficult, because it is a function of many variables. In order to address these problems the Department of Energy is developing a computer model describing the chemistry of geothermal fluids. The model under development at UCSD is based on recent progress in the physical chemistry of concentrated aqueous solutions, and is covered in this report.

  11. The Erasmus Computing Grid - Building a Super-Computer Virtually for Free at the Erasmus Medical Center and the Hogeschool Rotterdam

    NARCIS (Netherlands)

    T.A. Knoch (Tobias); L.V. de Zeeuw (Luc)

    2006-01-01

    The Set-Up of the 20 Teraflop Erasmus Computing Grid: To meet the enormous computational needs of life-science research as well as clinical diagnostics and treatment, the Hogeschool Rotterdam and the Erasmus Medical Center are currently setting up one of the largest desktop ...

  12. The Erasmus Computing Grid - Building a Super-Computer Virtually for Free at the Erasmus Medical Center and the Hogeschool Rotterdam

    NARCIS (Netherlands)

    T.A. Knoch (Tobias); L.V. de Zeeuw (Luc)

    2006-01-01

    The Set-Up of the 20 Teraflop Erasmus Computing Grid: To meet the enormous computational needs of life-science research as well as clinical diagnostics and treatment, the Hogeschool Rotterdam and the Erasmus Medical Center are currently setting up one of the largest desktop computing ...

  13. Computational Dimensionalities of Global Supercomputing

    Directory of Open Access Journals (Sweden)

    Richard S. Segall

    2013-12-01

    This Invited Paper pertains to the subject of my Plenary Keynote Speech at the 17th World Multi-Conference on Systemics, Cybernetics and Informatics (WMSCI 2013), held in Orlando, Florida on July 9-12, 2013. The title of my Plenary Keynote Speech was "Dimensionalities of Computation: from Global Supercomputing to Data, Text and Web Mining", but this Invited Paper will focus only on the "Computational Dimensionalities of Global Supercomputing" and is based upon a summary of the contents of several individual articles that have been previously written with myself as lead author and published in [75], [76], [77], [78], [79], [80] and [11]. The topics of the Plenary Speech included Overview of Current Research in Global Supercomputing [75], Open-Source Software Tools for Data Mining Analysis of Genomic and Spatial Images using High Performance Computing [76], Data Mining Supercomputing with SAS™ JMP® Genomics ([77], [79], [80]), and Visualization by Supercomputing Data Mining [81]. ______________________ [11] Committee on the Future of Supercomputing, National Research Council (2003), The Future of Supercomputing: An Interim Report, ISBN-13: 978-0-309-09016-2, http://www.nap.edu/catalog/10784.html [75] Segall, Richard S.; Zhang, Qingyu and Cook, Jeffrey S. (2013), "Overview of Current Research in Global Supercomputing", Proceedings of Forty-Fourth Meeting of Southwest Decision Sciences Institute (SWDSI), Albuquerque, NM, March 12-16, 2013. [76] Segall, Richard S. and Zhang, Qingyu (2010), "Open-Source Software Tools for Data Mining Analysis of Genomic and Spatial Images using High Performance Computing", Proceedings of 5th INFORMS Workshop on Data Mining and Health Informatics, Austin, TX, November 6, 2010. [77] Segall, Richard S., Zhang, Qingyu and Pierce, Ryan M. (2010), "Data Mining Supercomputing with SAS™ JMP® Genomics: Research-in-Progress", Proceedings of 2010 Conference on Applied Research in Information Technology, sponsored by ...

  14. Microprocessors: from desktops to supercomputers.

    Science.gov (United States)

    Baskett, F; Hennessy, J L

    1993-08-13

    Continuing improvements in integrated circuit technology and computer architecture have driven microprocessors to performance levels that rival those of supercomputers, at a fraction of the price. The use of sophisticated memory hierarchies enables microprocessor-based machines to have very large memories built from commodity dynamic random access memory while retaining the high bandwidth and low access time needed in a high-performance machine. Parallel processors composed of these high-performance microprocessors are becoming the supercomputing technology of choice for scientific and engineering applications. The challenges for these new supercomputers have been in developing multiprocessor architectures that are easy to program and that deliver high performance without extraordinary programming efforts by users. Recent progress in multiprocessor architecture has led to ways to meet these challenges.

  15. World's fastest supercomputer opens up to users

    Science.gov (United States)

    Xin, Ling

    2016-08-01

    China's latest supercomputer - Sunway TaihuLight - has claimed the crown as the world's fastest computer according to the latest TOP500 list, released at the International Supercomputer Conference in Frankfurt in late June.

  16. Improved Access to Supercomputers Boosts Chemical Applications.

    Science.gov (United States)

    Borman, Stu

    1989-01-01

    Supercomputing is described in terms of computing power and abilities. The increasing availability of supercomputers for use in chemical calculations and modeling is reported. Efforts of the National Science Foundation and Cray Research are highlighted. (CW)

  17. Community-Based Science: A Response to UCSD's Ongoing Racism Crisis

    Science.gov (United States)

    Werner, B.; Barraza, A.; Macgurn, R.

    2010-12-01

    In February 2010, the University of California - San Diego's long-simmering racism crisis erupted in response to a series of racist provocations, including a fraternity party titled "The Compton Cookout" and a noose discovered in the main library. Student groups led by the Black Student Union organized a series of protests, occupations and discussions highlighting the situation at UCSD (including the low fraction of African American students: 1.3%), and pressuring the university to take action. Extensive interviews (March-May 2010) with participants in the protests indicate that most felt the UCSD senior administration's response was inadequate and failed to address the underlying causes of the crisis. In an attempt to contribute to a more welcoming university that connects to working class communities of color, we have developed an educational program directed towards students in the environmental- and geo-sciences that seeks to establish genuine, two-way links between students and working people, with a focus on City Heights, a multi-ethnic, multi-lingual, diverse immigrant community 20 miles from UCSD. Elements of the program include: critiquing research universities and their connection to working class communities; learning about and discussing issues affecting City Heights, including community, environmental racism, health and traditional knowledge; interviewing organizers and activists to find out about the stories and struggles of the community; working on joint projects affecting environmental quality in City Heights with high school students; partnering with individual high school students to develop a proposal for a joint science project of mutual interest; and developing a proposal for how UCSD could change to better interface with City Heights. An assessment of the impact of the program on individual community members and UCSD students and on developing enduring links between City Heights and UCSD will be presented, followed by a preliminary ...

  18. Desktop supercomputers. Advance medical imaging.

    Science.gov (United States)

    Frisiello, R S

    1991-02-01

    Medical imaging tools that radiologists as well as a wide range of clinicians and healthcare professionals have come to depend upon are emerging into the next phase of functionality. The strides being made in supercomputing technologies--including reduction of size and price--are pushing medical imaging to a new level of accuracy and functionality.

  19. A workbench for tera-flop supercomputing

    Energy Technology Data Exchange (ETDEWEB)

    Resch, M.M.; Kuester, U.; Mueller, M.S.; Lang, U. [High Performance Computing Center Stuttgart (HLRS), Stuttgart (Germany)

    2003-07-01

    Supercomputers currently reach a peak performance in the range of TFlop/s. With but one exception - the Japanese Earth Simulator - none of these systems has so far been able to also show a level of sustained performance for a variety of applications that comes close to the peak performance. Sustained TFlop/s are therefore rarely seen. The reasons are manifold and are well known: Bandwidth and latency both for main memory and for the internal network are the key internal technical problems. Cache hierarchies with large caches can bring relief but are no remedy to the problem. However, there are not only technical problems that inhibit the full exploitation by scientists of the potential of modern supercomputers. More and more organizational issues come to the forefront. This paper shows the approach of the High Performance Computing Center Stuttgart (HLRS) to deliver a sustained performance of TFlop/s for a wide range of applications from a large group of users spread over Germany. The core of the concept is the role of the data. Around this we design a simulation workbench that hides the complexity of interacting computers, networks and file systems from the user. (authors)

  20. The GF11 supercomputer

    Science.gov (United States)

    Beetem, J.; Denneau, M.; Weingarten, D.

    1987-01-01

    GF11 is a parallel computer currently under construction at the IBM Yorktown Research Center. The machine incorporates 576 floating-point processors arranged in a modified SIMD architecture. Each has space for 2 Mbytes of memory and is capable of 20 Mflops, giving the total machine a peak of 1.125 Gbytes of memory and 11.52 Gflops. The floating-point processors are interconnected by a dynamically reconfigurable non-blocking switching network. At each machine cycle any of 1024 pre-selected permutations of data can be realized among the processors. The main intended application of GF11 is a class of calculations arising from quantum chromodynamics.

  1. Automatic discovery of the communication network topology for building a supercomputer model

    Science.gov (United States)

    Sobolev, Sergey; Stefanov, Konstantin; Voevodin, Vadim

    2016-10-01

    The Research Computing Center of Lomonosov Moscow State University is developing the Octotron software suite for automatic monitoring and mitigation of emergency situations in supercomputers so as to maximize hardware reliability. The suite is based on a software model of the supercomputer. The model uses a graph to describe the computing system components and their interconnections. One of the most complex components of a supercomputer that needs to be included in the model is its communication network. This work describes the proposed approach for automatically discovering the Ethernet communication network topology in a supercomputer and its description in terms of the Octotron model. This suite automatically detects computing nodes and switches, collects information about them and identifies their interconnections. The application of this approach is demonstrated on the "Lomonosov" and "Lomonosov-2" supercomputers.
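
    As an illustration of the graph idea (not Octotron's actual data model, whose format the abstract does not give), a minimal sketch using networkx with made-up node and switch names:

    # Hypothetical graph description of compute nodes, an Ethernet switch, and their links.
    import networkx as nx

    g = nx.Graph()
    g.add_node("switch-eth-01", kind="ethernet_switch")
    for i in range(1, 4):                                  # three made-up compute nodes
        name = f"node-{i:03d}"
        g.add_node(name, kind="compute_node")
        g.add_edge(name, "switch-eth-01", link="1GbE")     # a discovered interconnection

    # A monitoring rule could then walk the graph, e.g. list compute nodes with no uplink:
    orphans = [n for n, d in g.nodes(data=True)
               if d["kind"] == "compute_node" and g.degree(n) == 0]
    print(orphans)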

  2. Integration of Panda Workload Management System with supercomputers

    Science.gov (United States)

    De, K.; Jha, S.; Klimentov, A.; Maeno, T.; Mashinistov, R.; Nilsson, P.; Novikov, A.; Oleynik, D.; Panitkin, S.; Poyda, A.; Read, K. F.; Ryabinkin, E.; Teslyuk, A.; Velikhov, V.; Wells, J. C.; Wenaus, T.

    2016-09-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 140 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3+ petaFLOPS, the next LHC data-taking runs will require more resources than Grid computing can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with supercomputers in the United States, Europe and Russia (in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF), the supercomputer at the National Research Center "Kurchatov Institute", IT4 in Ostrava, and others). The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on Titan's multi-core worker nodes. This implementation was tested with a variety of Monte-Carlo workloads
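
    The "light-weight MPI wrapper" idea can be sketched as follows; this is not the PanDA pilot code itself, and the payload command is a placeholder.

    # Hypothetical wrapper: each MPI rank runs one single-threaded payload, so many
    # serial jobs fill the multi-core worker nodes allocated by the batch system.
    import subprocess
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    payloads = [f"./run_payload.sh --seed {i}" for i in range(comm.Get_size())]
    subprocess.run(payloads[rank], shell=True, check=False)   # run this rank's serial job

    comm.Barrier()                       # wait for every rank to finish its payload
    if rank == 0:
        print("all single-threaded payloads completed")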

  3. Seismic signal processing on heterogeneous supercomputers

    Science.gov (United States)

    Gokhberg, Alexey; Ermert, Laura; Fichtner, Andreas

    2015-04-01

    The processing of seismic signals - including the correlation of massive ambient noise data sets - represents an important part of a wide range of seismological applications. It is characterized by large data volumes as well as high computational input/output intensity. Development of efficient approaches towards seismic signal processing on emerging high performance computing systems is therefore essential. Heterogeneous supercomputing systems introduced in the recent years provide numerous computing nodes interconnected via high throughput networks, every node containing a mix of processing elements of different architectures, like several sequential processor cores and one or a few graphical processing units (GPU) serving as accelerators. A typical representative of such computing systems is "Piz Daint", a supercomputer of the Cray XC 30 family operated by the Swiss National Supercomputing Center (CSCS), which we used in this research. Heterogeneous supercomputers provide an opportunity for manifold application performance increase and are more energy-efficient, however they have much higher hardware complexity and are therefore much more difficult to program. The programming effort may be substantially reduced by the introduction of modular libraries of software components that can be reused for a wide class of seismology applications. The ultimate goal of this research is design of a prototype for such library suitable for implementing various seismic signal processing applications on heterogeneous systems. As a representative use case we have chosen an ambient noise correlation application. Ambient noise interferometry has developed into one of the most powerful tools to image and monitor the Earth's interior. Future applications will require the extraction of increasingly small details from noise recordings. To meet this demand, more advanced correlation techniques combined with very large data volumes are needed. This poses new computational problems that
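
    The core operation, cross-correlating two noise recordings to recover the inter-station travel time, can be sketched as below; real workflows add pre-processing (windowing, whitening) and stacking, and the synthetic traces here are placeholders.

    # Hypothetical ambient-noise cross-correlation of two synthetic traces.
    import numpy as np
    from scipy.signal import correlate

    fs = 20.0                                             # assumed sampling rate in Hz
    rng = np.random.default_rng(0)
    trace_a = rng.standard_normal(int(3600 * fs))         # one hour of "noise" at station A
    trace_b = np.roll(trace_a, 40) + 0.5 * rng.standard_normal(trace_a.size)  # delayed, noisier copy

    xcorr = correlate(trace_a, trace_b, mode="full")
    lags = np.arange(-trace_b.size + 1, trace_a.size) / fs
    print("correlation peak at lag %.2f s" % lags[np.argmax(xcorr)])  # reflects the 40-sample (2 s) shift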

  4. Data Analysis and Assessment Center

    Data.gov (United States)

    Federal Laboratory Consortium — The DoD Supercomputing Resource Center (DSRC) Data Analysis and Assessment Center (DAAC) provides classified facilities to enhance customer interactions with the ARL...

  5. TOP500 Supercomputers for June 2004

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2004-06-23

    23rd Edition of TOP500 List of World's Fastest Supercomputers Released: Japan's Earth Simulator Enters Third Year in Top Position. MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a closely watched event in the world of high-performance computing, the 23rd edition of the TOP500 list of the world's fastest supercomputers was released today (June 23, 2004) at the International Supercomputer Conference in Heidelberg, Germany.

  6. Flux-Level Transit Injection Experiments with NASA Pleiades Supercomputer

    Science.gov (United States)

    Li, Jie; Burke, Christopher J.; Catanzarite, Joseph; Seader, Shawn; Haas, Michael R.; Batalha, Natalie; Henze, Christopher; Christiansen, Jessie; Kepler Project, NASA Advanced Supercomputing Division

    2016-06-01

    Flux-Level Transit Injection (FLTI) experiments are executed with NASA's Pleiades supercomputer for the Kepler Mission. The latest release (9.3, January 2016) of the Kepler Science Operations Center Pipeline is used in the FLTI experiments. Their purpose is to validate the Analytic Completeness Model (ACM), which can be computed for all Kepler target stars, thereby enabling exoplanet occurrence rate studies. Pleiades, a facility of NASA's Advanced Supercomputing Division, is one of the world's most powerful supercomputers and represents NASA's state-of-the-art technology. We discuss the details of implementing the FLTI experiments on the Pleiades supercomputer. For example, taking into account that ~16 injections are generated by one core of the Pleiades processors in an hour, the “shallow” FLTI experiment, in which ~2000 injections are required per target star, can be done for 16% of all Kepler target stars in about 200 hours. Stripping down the transit search to bare bones, i.e. only searching adjacent high/low periods at high/low pulse durations, makes the computationally intensive FLTI experiments affordable. The design of the FLTI experiments and the analysis of the resulting data are presented in “Validating an Analytic Completeness Model for Kepler Target Stars Based on Flux-level Transit Injection Experiments” by Catanzarite et al. (#2494058). Kepler was selected as the 10th mission of the Discovery Program. Funding for the Kepler Mission has been provided by the NASA Science Mission Directorate.
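
    A back-of-the-envelope check of the throughput figures quoted above; the total Kepler target count is an assumed round number for illustration, not from the abstract.

    # Rough FLTI sizing arithmetic based on the quoted rates.
    injections_per_core_hour = 16
    injections_per_star = 2000
    core_hours_per_star = injections_per_star / injections_per_core_hour   # 125 core-hours

    kepler_targets = 200000            # assumption: order of magnitude of Kepler target stars
    stars_covered = 0.16 * kepler_targets
    wall_clock_hours = 200
    cores_needed = stars_covered * core_hours_per_star / wall_clock_hours
    print(f"{core_hours_per_star:.0f} core-hours per star, about {cores_needed:,.0f} cores for 200 h")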

  7. INTEGRATION OF PANDA WORKLOAD MANAGEMENT SYSTEM WITH SUPERCOMPUTERS

    Energy Technology Data Exchange (ETDEWEB)

    De, K [University of Texas at Arlington; Jha, S [Rutgers University; Maeno, T [Brookhaven National Laboratory (BNL); Mashinistov, R. [Russian Research Center, Kurchatov Institute, Moscow, Russia; Nilsson, P [Brookhaven National Laboratory (BNL); Novikov, A. [Russian Research Center, Kurchatov Institute, Moscow, Russia; Oleynik, D [University of Texas at Arlington; Panitkin, S [Brookhaven National Laboratory (BNL); Poyda, A. [Russian Research Center, Kurchatov Institute, Moscow, Russia; Ryabinkin, E. [Russian Research Center, Kurchatov Institute, Moscow, Russia; Teslyuk, A. [Russian Research Center, Kurchatov Institute, Moscow, Russia; Tsulaia, V. [Lawrence Berkeley National Laboratory (LBNL); Velikhov, V. [Russian Research Center, Kurchatov Institute, Moscow, Russia; Wen, G. [University of Wisconsin, Madison; Wells, Jack C [ORNL; Wenaus, T [Brookhaven National Laboratory (BNL)

    2016-01-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 140 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250000 cores with a peak performance of 0.3+ petaFLOPS, next LHC data taking runs will require more resources than Grid computing can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with supercomputers in United States, Europe and Russia (in particular with Titan supercomputer at Oak Ridge Leadership Computing Facility (OLCF), Supercomputer at the National Research Center "Kurchatov Institute", IT4 in Ostrava, and others). The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on Titan's multi-core worker nodes. This implementation was tested with a variety of ...

  8. Scientists turn to supercomputers for knowledge about universe

    CERN Multimedia

    White, G

    2003-01-01

    The DOE is funding the computers at the Center for Astrophysical Thermonuclear Flashes which is based at the University of Chicago and uses supercomputers at the nation's weapons labs to study explosions in and on certain stars. The DOE is picking up the project's bill in the hope that the work will help the agency learn to better simulate the blasts of nuclear warheads (1 page).

  9. Study of ATLAS TRT performance with GRID and supercomputers

    Science.gov (United States)

    Krasnopevtsev, D. V.; Klimentov, A. A.; Mashinistov, R. Yu.; Belyaev, N. L.; Ryabinkin, E. A.

    2016-09-01

    One of the most important tasks to be solved for ATLAS physics analysis is the reconstruction of proton-proton events with a large number of interactions in the Transition Radiation Tracker. The paper includes Transition Radiation Tracker performance results obtained with the usage of the ATLAS GRID and the Kurchatov Institute's Data Processing Center, including its Tier-1 grid site and supercomputer, as well as an analysis of CPU efficiency during these studies.

  10. INTEL: Intel based systems move up in supercomputing ranks

    CERN Multimedia

    2002-01-01

    "The TOP500 supercomputer rankings released today at the Supercomputing 2002 conference show a dramatic increase in the number of Intel-based systems being deployed in high-performance computing (HPC) or supercomputing areas" (1/2 page).

  11. Comparing Clusters and Supercomputers for Lattice QCD

    CERN Document Server

    Gottlieb, S

    2001-01-01

    Since the development of the Beowulf project to build a parallel computer from commodity PC components, many such clusters have been built. The MILC QCD code has been run on a variety of clusters and supercomputers. Key design features are identified, and the cost effectiveness of clusters and supercomputers is compared.

  12. Low Cost Supercomputer for Applications in Physics

    Science.gov (United States)

    Ahmed, Maqsood; Ahmed, Rashid; Saeed, M. Alam; Rashid, Haris; Fazal-e-Aleem

    2007-02-01

    Using parallel processing techniques and commodity hardware, Beowulf supercomputers can be built at a much lower cost. Research organizations and educational institutions are using this technique to build their own high-performance clusters. In this paper we discuss the architecture and design of the Beowulf supercomputer and our own experience of building the BURRAQ cluster.

  13. University of California, San Diego (UCSD) Sky Imager Cloud Position Study Field Campaign Report

    Energy Technology Data Exchange (ETDEWEB)

    Kleissl, J. [Univ. of California, San Diego, CA (United States); Urquhart, B. [Univ. of California, San Diego, CA (United States); Ghonima, M. [Univ. of California, San Diego, CA (United States); Dahlin, E. [Univ. of California, San Diego, CA (United States); Nguyen, A. [Univ. of California, San Diego, CA (United States); Kurtz, B. [Univ. of California, San Diego, CA (United States); Chow, C. W. [Univ. of California, San Diego, CA (United States); Mejia, F. A. [Univ. of California, San Diego, CA (United States)

    2016-04-01

    During the University of California, San Diego (UCSD) Sky Imager Cloud Position Study, two University of California, San Diego Sky Imagers (USI) (Figure 1) were deployed at the U.S. Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) Climate Research Facility Southern Great Plains (SGP) research facility. The UCSD Sky Imagers were placed 1.7 km apart to allow for stereographic determination of the cloud height for clouds above approximately 1.5 km. Images with a 180-degree field of view were captured from both systems during daylight hours every 30 seconds, beginning on March 11, 2013 and ending on November 4, 2013. The spatial resolution of the images was 1,748 × 1,748 pixels, and the intensity resolution was 16 bits using a high-dynamic-range capture process. The cameras use a fisheye lens, so the images are distorted following an equisolid-angle projection.
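
    As a sketch of the stereographic idea, a single matched cloud feature seen at two different zenith angles from imagers a known baseline apart yields a height by simple parallax; the angles below are hypothetical, and real processing matches many features across the full fisheye images.

    # Hypothetical single-feature parallax estimate of cloud-base height.
    import math

    baseline_m = 1700.0        # imager separation reported above (1.7 km)
    theta1_deg = 35.0          # zenith angle of the feature from imager 1 (assumed)
    theta2_deg = 20.0          # zenith angle of the same feature from imager 2 (assumed)

    # Feature assumed to lie in the vertical plane through the baseline, beyond both imagers:
    # baseline = h * (tan(theta1) - tan(theta2))
    h = baseline_m / (math.tan(math.radians(theta1_deg)) - math.tan(math.radians(theta2_deg)))
    print(f"estimated cloud-base height: {h:.0f} m")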

  14. TOP500 Supercomputers for June 2005

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2005-06-22

    25th Edition of TOP500 List of World's Fastest Supercomputers Released: DOE/LLNL BlueGene/L and IBM gain Top Positions. MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a closely watched event in the world of high-performance computing, the 25th edition of the TOP500 list of the world's fastest supercomputers was released today (June 22, 2005) at the 20th International Supercomputing Conference (ISC2005) in Heidelberg, Germany.

  15. £16 million investment for 'virtual supercomputer'

    CERN Multimedia

    Holland, C

    2003-01-01

    "The Particle Physics and Astronomy Research Council is to spend 16million [pounds] to create a massive computing Grid, equivalent to the world's second largest supercomputer after Japan's Earth Simulator computer" (1/2 page)

  16. Supercomputers open window of opportunity for nursing.

    Science.gov (United States)

    Meintz, S L

    1993-01-01

    A window of opportunity was opened for nurse researchers with the High Performance Computing and Communications (HPCC) initiative in President Bush's 1992 fiscal-year budget. Nursing research moved into the high-performance computing environment through the University of Nevada Las Vegas/Cray Project for Nursing and Health Data Research (PNHDR). Using the CRAY YMP 2/216 supercomputer, the PNHDR established the validity of a supercomputer platform for nursing research. In addition, the research has identified a paradigm shift in statistical analysis, delineated actual and potential barriers to nursing research in a supercomputing environment, conceptualized a new branch of nursing science called Nurmetrics, and discovered new avenues for nursing research utilizing supercomputing tools.

  17. TOP500 Supercomputers for November 2004

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2004-11-08

    24th Edition of TOP500 List of World's Fastest Supercomputers Released: DOE/IBM BlueGene/L and NASA/SGI's Columbia gain Top Positions MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a closely watched event in the world of high-performance computing, the 24th edition of the TOP500 list of the world's fastest supercomputers was released today (November 8, 2004) at the SC2004 Conference in Pittsburgh, Pa.

  18. Misleading Performance Reporting in the Supercomputing Field

    Directory of Open Access Journals (Sweden)

    David H. Bailey

    1992-01-01

    Full Text Available In a previous humorous note, I outlined 12 ways in which performance figures for scientific supercomputers can be distorted. In this paper, the problem of potentially misleading performance reporting is discussed in detail. Included are some examples that have appeared in recent published scientific papers. This paper also includes some proposed guidelines for reporting performance, the adoption of which would raise the level of professionalism and reduce the level of confusion in the field of supercomputing.

  19. Simulating Galactic Winds on Supercomputers

    Science.gov (United States)

    Schneider, Evan

    2017-01-01

    Galactic winds are a ubiquitous feature of rapidly star-forming galaxies. Observations of nearby galaxies have shown that winds are complex, multiphase phenomena, comprised of outflowing gas at a large range of densities, temperatures, and velocities. Describing how starburst-driven outflows originate, evolve, and affect the circumgalactic medium and gas supply of galaxies is an important challenge for theories of galaxy evolution. In this talk, I will discuss how we are using a new hydrodynamics code, Cholla, to improve our understanding of galactic winds. Cholla is a massively parallel, GPU-based code that takes advantage of specialized hardware on the newest generation of supercomputers. With Cholla, we can perform large, three-dimensional simulations of multiphase outflows, allowing us to track the coupling of mass and momentum between gas phases across hundreds of parsecs at sub-parsec resolution. The results of our recent simulations demonstrate that the evolution of cool gas in galactic winds is highly dependent on the initial structure of embedded clouds. In particular, we find that turbulent density structures lead to more efficient mass transfer from cool to hot phases of the wind. I will discuss the implications of our results both for the incorporation of winds into cosmological simulations, and for interpretations of observed multiphase winds and the circumgalactic medium of nearby galaxies.

  20. GREEN SUPERCOMPUTING IN A DESKTOP BOX

    Energy Technology Data Exchange (ETDEWEB)

    HSU, CHUNG-HSING [Los Alamos National Laboratory; FENG, WU-CHUN [NON LANL; CHING, AVERY [NON LANL

    2007-01-17

    The computer workstation, introduced by Sun Microsystems in 1982, was the tool of choice for scientists and engineers as an interactive computing environment for the development of scientific codes. However, by the mid-1990s, the performance of workstations began to lag behind high-end commodity PCs. This, coupled with the disappearance of BSD-based operating systems in workstations and the emergence of Linux as an open-source operating system for PCs, arguably led to the demise of the workstation as we knew it. Around the same time, computational scientists started to leverage PCs running Linux to create a commodity-based (Beowulf) cluster that provided dedicated computer cycles, i.e., supercomputing for the rest of us, as a cost-effective alternative to large supercomputers, i.e., supercomputing for the few. However, as the cluster movement has matured, with respect to cluster hardware and open-source software, these clusters have become much more like their large-scale supercomputing brethren - a shared (and power-hungry) datacenter resource that must reside in a machine-cooled room in order to operate properly. Consequently, the above observations, when coupled with the ever-increasing performance gap between the PC and cluster supercomputer, provide the motivation for a 'green' desktop supercomputer - a turnkey solution that provides an interactive and parallel computing environment with the approximate form factor of a Sun SPARCstation 1 'pizza box' workstation. In this paper, they present the hardware and software architecture of such a solution as well as its prowess as a developmental platform for parallel codes. In short, imagine a 12-node personal desktop supercomputer that achieves 14 Gflops on Linpack but sips only 185 watts of power at load, resulting in a performance-power ratio that is over 300% better than their reference SMP platform.
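
    As a quick check on the headline numbers quoted above (my arithmetic, not a figure from the report), the performance-power ratio of the 12-node desktop system works out to

    \[
      \frac{14\ \text{Gflops}}{185\ \text{W}} \approx 0.076\ \text{Gflops/W} \approx 76\ \text{Mflops/W},
    \]

    so the reference SMP platform implied by the "over 300% better" comparison would deliver somewhere below roughly 19-25 Mflops/W, depending on whether "300% better" is read as three or four times the ratio.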

  1. User Centered System Design. Part 2. Collected Papers from the UCSD HMI Project.

    Science.gov (United States)

    1984-03-01

    over 90% of programs called indirectly at the cost of losing rare cases of individuals calling these programs directly. A second consequence of... distinguished from the sense it has acquired from "task analysis," a powerful tool for studying interfaces (Kieras & Polson, 1982; Bannon et al., 1983)... printing task world contained in the attributes are extremely useful for documenting the existing printing tools (Kieras & Polson, 1982). In O'Malley...

  2. High Performance Distributed Computing in a Supercomputer Environment: Computational Services and Applications Issues

    Science.gov (United States)

    Kramer, Williams T. C.; Simon, Horst D.

    1994-01-01

    This tutorial proposes to be a practical guide for the uninitiated to the main topics and themes of high-performance computing (HPC), with particular emphasis on distributed computing. The intent is first to provide some guidance and directions in the rapidly increasing field of scientific computing using both massively parallel and traditional supercomputers. Because of their considerable potential computational power, loosely or tightly coupled clusters of workstations are increasingly considered as a third alternative to both the more conventional supercomputers based on a small number of powerful vector processors and massively parallel processors. Even though many research issues concerning the effective use of workstation clusters and their integration into a large-scale production facility are still unresolved, such clusters are already used for production computing. In this tutorial we will draw on the unique experience gained at the NAS facility at NASA Ames Research Center. Over the last five years at NAS, massively parallel supercomputers such as the Connection Machines CM-2 and CM-5 from Thinking Machines Corporation and the iPSC/860 (Touchstone Gamma Machine) and Paragon machines from Intel were used in a production supercomputer center alongside traditional vector supercomputers such as the Cray Y-MP and C90.

  3. Integrating with users is one thing, but living with them? A case study on loss of space from the Medical Center Library, University of California, San Diego.

    Science.gov (United States)

    Haynes, Craig

    2010-01-01

    The University of California, San Diego (UCSD) Medical Center is the primary hospital for the UCSD School of Medicine. The UCSD Medical Center Library (MCL), a branch of the campus's biomedical library, is located on the medical center campus. In 2007, the medical center administration made a request to MCL for space in its facility to relocate pharmacy administration from the hospital tower. The university librarian brought together a team of library managers to deliberate and develop a proposal, which ultimately accommodated the medical center's request and enhanced some of MCL's public services.

  4. Input/output behavior of supercomputing applications

    Science.gov (United States)

    Miller, Ethan L.

    1991-01-01

    The collection and analysis of supercomputer I/O traces and their use in a collection of buffering and caching simulations are described. This serves two purposes. First, it gives a model of how individual applications running on supercomputers request file system I/O, allowing system designers to optimize I/O hardware and file system algorithms to that model. Second, the buffering simulations show what resources are needed to maximize the CPU utilization of a supercomputer given a very bursty I/O request rate. By using read-ahead and write-behind in a large solid-state disk, one or two applications were sufficient to fully utilize a Cray Y-MP CPU.

  5. GPUs: An Oasis in the Supercomputing Desert

    CERN Document Server

    Kamleh, Waseem

    2012-01-01

    A novel metric is introduced to compare the supercomputing resources available to academic researchers on a national basis. Data from the supercomputing Top 500 and the top 500 universities in the Academic Ranking of World Universities (ARWU) are combined to form the proposed "500/500" score for a given country. Australia scores poorly in the 500/500 metric when compared with other countries with a similar ARWU ranking, an indication that HPC-based researchers in Australia are at a relative disadvantage with respect to their overseas competitors. For HPC problems where single precision is sufficient, commodity GPUs provide a cost-effective means of quenching the computational thirst of otherwise parched Lattice practitioners traversing the Australian supercomputing desert. We explore some of the more difficult terrain in single precision territory, finding that BiCGStab is unreliable in single precision at large lattice sizes. We test the CGNE and CGNR forms of the conjugate gradient method on the normal equa...
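
    The abstract is cut off while introducing the conjugate gradient variants; for orientation, the standard textbook definitions (not notation taken from this paper) are

    \[
      \text{CGNR:}\quad A^\dagger A\,x = A^\dagger b,
      \qquad
      \text{CGNE:}\quad A A^\dagger\,y = b,\ \ x = A^\dagger y,
    \]

    i.e., conjugate gradient is applied to the normal equations, whose operator is Hermitian and positive semi-definite even when the matrix A itself is not.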

  6. Floating point arithmetic in future supercomputers

    Science.gov (United States)

    Bailey, David H.; Barton, John T.; Simon, Horst D.; Fouts, Martin J.

    1989-01-01

    Considerations in the floating-point design of a supercomputer are discussed. Particular attention is given to word size, hardware support for extended precision, format, and accuracy characteristics. These issues are discussed from the perspective of the Numerical Aerodynamic Simulation Systems Division at NASA Ames. The features believed to be most important for a future supercomputer floating-point design include: (1) a 64-bit IEEE floating-point format with 11 exponent bits, 52 mantissa bits, and one sign bit and (2) hardware support for reasonably fast double-precision arithmetic.
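
    To make the quoted field widths concrete, here is a small C sketch (my own illustration, not code from the paper) that unpacks a 64-bit IEEE double into its 1 sign bit, 11 exponent bits, and 52 mantissa bits.

        /* Decompose a 64-bit IEEE-754 double into sign (1 bit),
         * biased exponent (11 bits) and mantissa/fraction (52 bits). */
        #include <stdio.h>
        #include <stdint.h>
        #include <string.h>

        int main(void) {
            double x = -6.25;                        /* -1.5625 * 2^2 */
            uint64_t bits;
            memcpy(&bits, &x, sizeof bits);          /* reinterpret the bit pattern safely */

            uint64_t sign     = bits >> 63;                  /*  1 bit  */
            uint64_t exponent = (bits >> 52) & 0x7FF;        /* 11 bits */
            uint64_t mantissa = bits & 0xFFFFFFFFFFFFFULL;   /* 52 bits */

            printf("x        = %g\n", x);
            printf("sign     = %llu\n", (unsigned long long)sign);
            printf("exponent = %llu (unbiased %lld)\n",
                   (unsigned long long)exponent, (long long)exponent - 1023);
            printf("mantissa = 0x%013llx\n", (unsigned long long)mantissa);
            return 0;
        }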

  7. The Origin of the UCSD X-ray Astronomy Program - A Personal Perspective

    Science.gov (United States)

    Peterson, Laurence E.

    2013-01-01

    I was a graduate student in the late 1950's at the University of Minnesota in the Cosmic Ray Group under Prof. John R. Winckler. He had a project monitoring Cosmic ray time variations from an extensive series of balloon flights using simple detectors during the International Geophysical Year 1957-58. During the 20 March 1958 flight, a short 18 sec. burst of high energy radiation was observed simultaneously with a class II Solar flare. From the ratio of the Geiger counter rate to the energy loss in the ionization chamber, it was determined this radiation was likely hard X-rays or low-energy gamma rays and not energetic particles. Further analysis using information from other concurrent observations indicated the X-rays were likely due to Bremsstrahlung from energetic electrons accelerated in the solar flare magnetic field; these same electrons produced radio emissions. This first detection of extra-terrestrial X- or gamma rays showed the importance of non-thermal processes in Astrophysical phenomena. Winckler and I were interested in the possibility of non-solar hard X-rays. While completing my thesis on a Cosmic ray topic, I initiated a balloon program to develop more sensitive collimated low-background scintillation counters. This led to a proposal to the newly formed NASA to place an exploratory instrument on the 1st Orbiting Solar Observatory launched 7 March 1962. In August that year, I assumed a tenure-track position at UCSD; the data analysis of OSO-1 and the balloon program were transferred to UCSD to initiate the X-ray Astronomy program. The discovery of Cosmic X-ray sources in the 1-10 keV range on a rocket flight in June 1962 by Giacconi and colleagues gave impetus to the UCSD activities. It seemed evident cosmic X-ray sources could be detected above 20 keV using high-flying balloons. Early results included measurements of the 50 million K gas in Sco X-1, and the X-ray continuum from the Crab Nebula characterized by a power-law dN/dE ~ E^-2.2. The

  8. Integration Of PanDA Workload Management System With Supercomputers for ATLAS and Data Intensive Science

    Energy Technology Data Exchange (ETDEWEB)

    De, K [University of Texas at Arlington; Jha, S [Rutgers University; Klimentov, A [Brookhaven National Laboratory (BNL); Maeno, T [Brookhaven National Laboratory (BNL); Nilsson, P [Brookhaven National Laboratory (BNL); Oleynik, D [University of Texas at Arlington; Panitkin, S [Brookhaven National Laboratory (BNL); Wells, Jack C [ORNL; Wenaus, T [Brookhaven National Laboratory (BNL)

    2016-01-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited with the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3 petaFLOPS, LHC data-taking runs require more resources than Grid computing can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with supercomputers in the United States, Europe and Russia (in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF), the MIRA supercomputer at the Argonne Leadership Computing Facility (ALCF), the supercomputer at the National Research Center Kurchatov Institute, IT4 in Ostrava and others). The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on the LCFs' multi-core worker nodes. This implementation
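
    The "light-weight MPI wrapper" approach mentioned above (one MPI rank per core, each launching an independent single-threaded payload, so a single leadership-class batch allocation fans out into many serial jobs) can be sketched roughly as follows. This is a schematic reconstruction based only on the abstract; the payload command, the per-rank directory layout and the file names are invented for illustration and are not the actual PanDA pilot code.

        /* Schematic MPI wrapper: every rank runs one independent single-threaded
         * payload, so one batch job on an LCF node set executes many serial workloads. */
        #include <mpi.h>
        #include <stdio.h>
        #include <stdlib.h>

        int main(int argc, char **argv) {
            int rank;
            MPI_Init(&argc, &argv);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);

            /* Hypothetical payload: each rank works in its own directory on its own input slice. */
            char cmd[256];
            snprintf(cmd, sizeof cmd,
                     "cd work_%04d && ./payload.sh input_%04d.txt > payload.log 2>&1",
                     rank, rank);
            int status = system(cmd);
            if (status != 0)
                fprintf(stderr, "rank %d: payload exited with status %d\n", rank, status);

            MPI_Barrier(MPI_COMM_WORLD);   /* batch job ends only when every payload has finished */
            MPI_Finalize();
            return 0;
        }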

  9. UCSD's Institute of Engineering in Medicine: fostering collaboration through research and education.

    Science.gov (United States)

    Chien, Shu

    2012-07-01

    The University of California, San Diego (UCSD) was established in 1961 as a new research university that emphasizes innovation, excellence, and interdisciplinary research and education. It has a School of Medicine (SOM) and the Jacobs School of Engineering (JSOE) in close proximity, and both schools have national rankings among the top 15. In 1991, with the support of the Whitaker Foundation, the Whitaker Institute of Biomedical Engineering was formed to foster collaborations in research and education. In 2008, the university extended the collaboration further by establishing the Institute of Engineering in Medicine (IEM), with the mission of accelerating the discoveries of novel science and technology to enhance health care through teamwork between engineering and medicine, and facilitating the translation of innovative technologies for delivery to the public through clinical application and commercialization.

  10. Internal computational fluid mechanics on supercomputers for aerospace propulsion systems

    Science.gov (United States)

    Andersen, Bernhard H.; Benson, Thomas J.

    1987-01-01

    The accurate calculation of three-dimensional internal flowfields for application towards aerospace propulsion systems requires computational resources available only on supercomputers. A survey is presented of three-dimensional calculations of hypersonic, transonic, and subsonic internal flowfields conducted at the Lewis Research Center. A steady state Parabolized Navier-Stokes (PNS) solution of flow in a Mach 5.0, mixed compression inlet, a Navier-Stokes solution of flow in the vicinity of a terminal shock, and a PNS solution of flow in a diffusing S-bend with vortex generators are presented and discussed. All of these calculations were performed on either the NAS Cray-2 or the Lewis Research Center Cray XMP.

  11. Refinement of herpesvirus B-capsid structure on parallel supercomputers.

    Science.gov (United States)

    Zhou, Z H; Chiu, W; Haskell, K; Spears, H; Jakana, J; Rixon, F J; Scott, L R

    1998-01-01

    Electron cryomicroscopy and icosahedral reconstruction are used to obtain the three-dimensional structure of the 1250-Å-diameter herpesvirus B-capsid. The centers and orientations of particles in focal pairs of 400-kV, spot-scan micrographs are determined and iteratively refined by common-lines-based local and global refinement procedures. We describe the rationale behind choosing shared-memory multiprocessor computers for executing the global refinement, which is the most computationally intensive step in the reconstruction procedure. This refinement has been implemented on three different shared-memory supercomputers. The speedup and efficiency are evaluated by using test data sets with different numbers of particles and processors. Using this parallel refinement program, we refine the herpesvirus B-capsid from 355-particle images to 13-Å resolution. The map shows new structural features and interactions of the protein subunits in the three distinct morphological units: penton, hexon, and triplex of this T = 16 icosahedral particle.
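
    The speedup and efficiency quoted for the parallel global refinement follow the usual definitions for a run on p processors (standard formulas, not notation from the paper):

    \[
      S_p = \frac{T_1}{T_p}, \qquad E_p = \frac{S_p}{p},
    \]

    where T_1 is the runtime on a single processor and T_p the runtime on p processors; E_p = 1 corresponds to ideal scaling.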

  12. Data-intensive computing on numerically-insensitive supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Ahrens, James P [Los Alamos National Laboratory; Fasel, Patricia K [Los Alamos National Laboratory; Habib, Salman [Los Alamos National Laboratory; Heitmann, Katrin [Los Alamos National Laboratory; Lo, Li - Ta [Los Alamos National Laboratory; Patchett, John M [Los Alamos National Laboratory; Williams, Sean J [Los Alamos National Laboratory; Woodring, Jonathan L [Los Alamos National Laboratory; Wu, Joshua [Los Alamos National Laboratory; Hsu, Chung - Hsing [ORNL

    2010-12-03

    With the advent of the era of petascale supercomputing, via the delivery of the Roadrunner supercomputing platform at Los Alamos National Laboratory, there is a pressing need to address the problem of visualizing massive petascale-sized results. In this presentation, I discuss progress on a number of approaches including in-situ analysis, multi-resolution out-of-core streaming and interactive rendering on the supercomputing platform. These approaches are placed in context by the emerging area of data-intensive supercomputing.

  13. Parallel supercomputers for lattice gauge theory.

    Science.gov (United States)

    Brown, F R; Christ, N H

    1988-03-18

    During the past 10 years, particle physicists have increasingly employed numerical simulation to answer fundamental theoretical questions about the properties of quarks and gluons. The enormous computer resources required by quantum chromodynamic calculations have inspired the design and construction of very powerful, highly parallel, dedicated computers optimized for this work. This article gives a brief description of the numerical structure and current status of these large-scale lattice gauge theory calculations, with emphasis on the computational demands they make. The architecture, present state, and potential of these special-purpose supercomputers is described. It is argued that a numerical solution of low energy quantum chromodynamics may well be achieved by these machines.

  14. US Department of Energy High School Student Supercomputing Honors Program: A follow-up assessment

    Energy Technology Data Exchange (ETDEWEB)

    1987-01-01

    The US DOE High School Student Supercomputing Honors Program was designed to recognize high school students with superior skills in mathematics and computer science and to provide them with formal training and experience with advanced computer equipment. This document reports on the participants who attended the first such program, which was held at the National Magnetic Fusion Energy Computer Center at the Lawrence Livermore National Laboratory (LLNL) during August 1985.

  15. Multi-petascale highly efficient parallel supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Asaad, Sameh; Bellofatto, Ralph E.; Blocksome, Michael A.; Blumrich, Matthias A.; Boyle, Peter; Brunheroto, Jose R.; Chen, Dong; Cher, Chen -Yong; Chiu, George L.; Christ, Norman; Coteus, Paul W.; Davis, Kristan D.; Dozsa, Gabor J.; Eichenberger, Alexandre E.; Eisley, Noel A.; Ellavsky, Matthew R.; Evans, Kahn C.; Fleischer, Bruce M.; Fox, Thomas W.; Gara, Alan; Giampapa, Mark E.; Gooding, Thomas M.; Gschwind, Michael K.; Gunnels, John A.; Hall, Shawn A.; Haring, Rudolf A.; Heidelberger, Philip; Inglett, Todd A.; Knudson, Brant L.; Kopcsay, Gerard V.; Kumar, Sameer; Mamidala, Amith R.; Marcella, James A.; Megerian, Mark G.; Miller, Douglas R.; Miller, Samuel J.; Muff, Adam J.; Mundy, Michael B.; O' Brien, John K.; O' Brien, Kathryn M.; Ohmacht, Martin; Parker, Jeffrey J.; Poole, Ruth J.; Ratterman, Joseph D.; Salapura, Valentina; Satterfield, David L.; Senger, Robert M.; Smith, Brian; Steinmacher-Burow, Burkhard; Stockdell, William M.; Stunkel, Craig B.; Sugavanam, Krishnan; Sugawara, Yutaka; Takken, Todd E.; Trager, Barry M.; Van Oosten, James L.; Wait, Charles D.; Walkup, Robert E.; Watson, Alfred T.; Wisniewski, Robert W.; Wu, Peng

    2015-07-14

    A Multi-Petascale Highly Efficient Parallel Supercomputer of 100 petaOPS-scale computing, at decreased cost, power and footprint, and that allows for a maximum packaging density of processing nodes from an interconnect point of view. The Supercomputer exploits technological advances in VLSI that enable a computing model where many processors can be integrated into a single Application Specific Integrated Circuit (ASIC). Each ASIC computing node comprises a system-on-chip ASIC utilizing four or more processors integrated into one die, each having full access to all system resources, enabling adaptive partitioning of the processors to functions such as compute or messaging I/O on an application-by-application basis and, preferably, adaptive partitioning of functions in accordance with various algorithmic phases within an application; if I/O or other processors are underutilized, they can participate in computation or communication. Nodes are interconnected by a five-dimensional torus network with DMA that optimally maximizes the throughput of packet communications between nodes and minimizes latency.

  16. Design and update of a classification system: the UCSD map of science.

    Directory of Open Access Journals (Sweden)

    Katy Börner

    Full Text Available Global maps of science can be used as a reference system to chart career trajectories, the location of emerging research frontiers, or the expertise profiles of institutes or nations. This paper details the data preparation, analysis, and layout performed when designing and subsequently updating the UCSD map of science and classification system. The original classification and map use 7.2 million papers and their references from Elsevier's Scopus (about 15,000 source titles, 2001-2005) and Thomson Reuters' Web of Science (WoS) Science, Social Science, and Arts & Humanities Citation Indexes (about 9,000 source titles, 2001-2004) - about 16,000 unique source titles. The updated map and classification add six years (2005-2010) of WoS data and three years (2006-2008) from Scopus to the existing category structure, increasing the number of source titles to about 25,000. To our knowledge, this is the first time that a widely used map of science has been updated. A comparison of the original 5-year and the new 10-year maps and classification systems shows (i) an increase in the total number of journals that can be mapped by 9,409 journals (social sciences had an 80% increase, humanities a 119% increase, medical a 32% increase, and natural sciences a 74% increase), (ii) a simplification of the map by assigning all but five highly interdisciplinary journals to exactly one discipline, (iii) a more even distribution of journals over the 554 subdisciplines and 13 disciplines when calculating the coefficient of variation, and (iv) a better reflection of journal clusters when compared with paper-level citation data. When evaluating the map against a listing of desirable features for maps of science, the updated map is shown to have higher mapping accuracy, easier understandability as fewer journals are multiply classified, and higher usability for the generation of data overlays, among others.
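
    The coefficient of variation used above to compare how evenly journals are spread over subdisciplines is the standard dispersion measure; with n_i journals assigned to subdiscipline i of k = 554 subdisciplines (notation mine, not the paper's),

    \[
      \mathrm{CV} = \frac{\sigma}{\mu},
      \qquad
      \mu = \frac{1}{k}\sum_{i=1}^{k} n_i,
      \qquad
      \sigma = \sqrt{\frac{1}{k}\sum_{i=1}^{k} (n_i - \mu)^2},
    \]

    so a lower CV for the updated map indicates a more even distribution of journals across categories.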

  17. National Weather Service, Emergency Medical Services, Scripps Institution of Oceanography/UCSD and California EPA Collaboration on Heat Health Impact and Public Notification for San Diego County

    Science.gov (United States)

    Tardy, A. O.; Corcus, I.; Guirguis, K.

    2015-12-01

    The National Weather Service (NWS) has issued official heat alerts in the form of either a heat advisory or excessive heat warning product to the public and core partners for many years. This information has traditionally been developed through the use of triggers for heat indices which combine humidity and temperature. The criteria typically used numeric thresholds and did not consider impact from a particular heat episode, nor did they factor in seasonality or population acclimation. In 2013, the Scripps Institution of Oceanography, University of California, San Diego, in collaboration with the Office of Environmental Health Hazard Assessment of the California Environmental Protection Agency and the NWS, completed a study of heat health impact in California, while the NWS San Diego office began modifying its criteria towards departure from climatological normals with much less dependence on humidity or heat index. The NWS changes were based on initial findings from the California Department of Public Health's EpiCenter California Injury Data Online system, which documents heat health impacts. Results from the UCSD study were finalized and published in 2014; they supported the need for significant modification of the traditional criteria. In order to better understand the impacts of heat on community health, medical outcome data were provided by the County of San Diego Emergency Medical Services Branch, which is charged by the County's Public Health Officer to monitor heat-related illness and injury daily from June through September. The data were combined with UCSD research to inform the modification of local NWS heat criteria and establish trigger points to pilot new procedures for the issuance of heat alerts. Finally, practices and procedures were customized for each of the county health departments in the NWS area of responsibility across extreme southwest California counties in collaboration with their Office of Emergency Services. The end result of the

  18. Most Social Scientists Shun Free Use of Supercomputers.

    Science.gov (United States)

    Kiernan, Vincent

    1998-01-01

    Social scientists, who frequently complain that the federal government spends too little on them, are passing up what scholars in the physical and natural sciences see as the government's best give-aways: free access to supercomputers. Some social scientists say the supercomputers are difficult to use; others find desktop computers provide…

  19. Supercomputing - Use Cases, Advances, The Future (2/2)

    CERN Document Server

    CERN. Geneva

    2017-01-01

    Supercomputing has become a staple of science and the poster child for aggressive developments in silicon technology, energy efficiency and programming. In this series we examine the key components of supercomputing setups and the various advances – recent and past – that made headlines and delivered bigger and bigger machines. We also take a closer look at the future prospects of supercomputing, and the extent of its overlap with high throughput computing, in the context of main use cases ranging from oil exploration to market simulation. On the second day, we will focus on software and software paradigms driving supercomputers, workloads that need supercomputing treatment, advances in technology and possible future developments. Lecturer's short bio: Andrzej Nowak has 10 years of experience in computing technologies, primarily from CERN openlab and Intel. At CERN, he managed a research lab collaborating with Intel and was part of the openlab Chief Technology Office. Andrzej also worked closely and i...

  20. Integration Of PanDA Workload Management System With Supercomputers for ATLAS and Data Intensive Science

    Science.gov (United States)

    Klimentov, A.; De, K.; Jha, S.; Maeno, T.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Wells, J.; Wenaus, T.

    2016-10-01

    The LHC, operating at CERN, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3 petaFLOPS, LHC data-taking runs require more resources than the Grid can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with supercomputers in the United States, in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility. The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on the LCFs' multi-core worker nodes. This implementation was tested with a variety of Monte Carlo workloads on several supercomputing platforms for the ALICE and ATLAS experiments and it has been in full production for ATLAS since September 2015. We will present our current accomplishments with running PanDA at supercomputers and demonstrate our ability to use PanDA as a portal independent of the

  1. Will Your Next Supercomputer Come from Costco?

    Energy Technology Data Exchange (ETDEWEB)

    Farber, Rob

    2007-04-15

    A fun topic for April, one that is not an April fool’s joke, is that you can purchase a commodity 200+ Gflop (single-precision) Linux supercomputer for around $600 from your favorite electronic vendor. Yes, it’s true. Just walk in and ask for a Sony Playstation 3 (PS3), take it home and install Linux on it. IBM has provided an excellent tutorial for installing Linux and building applications at http://www-128.ibm.com/developerworks/power/library/pa-linuxps3-1. If you want to raise some eyebrows at work, then submit a purchase request for a Sony PS3 game console and watch the reactions as your paperwork wends its way through the procurement process.

  2. HPL and STREAM Benchmarks on SANAM Supercomputer

    KAUST Repository

    Bin Sulaiman, Riman A.

    2017-03-13

    The SANAM supercomputer was jointly built by KACST and FIAS in 2012, ranking second that year in the Green500 list with a power efficiency of 2.3 GFLOPS/W (Rohr et al., 2014). It is a heterogeneous accelerator-based HPC system that has 300 compute nodes. Each node includes two Intel Xeon E5-2650 CPUs, two AMD FirePro S10000 dual GPUs and 128 GiB of main memory. In this work, the seven benchmarks of HPCC were installed and configured to reassess the performance of SANAM, as part of an unpublished master thesis, after it was reassembled in the Kingdom of Saudi Arabia. We present here detailed results of the HPL and STREAM benchmarks.
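
    For readers unfamiliar with STREAM, its core is a set of simple bandwidth-bound kernels; the triad kernel and the usual bandwidth estimate are sketched below in C. The array length and timing method are illustrative choices of mine, not the SANAM benchmark configuration.

        /* STREAM-style triad kernel: a[i] = b[i] + q * c[i].
         * Bandwidth estimate counts 3 arrays * N doubles moved (read b and c, write a). */
        #include <stdio.h>
        #include <stdlib.h>
        #include <time.h>

        #define N (1L << 24)    /* illustrative array length (~16M doubles, ~128 MiB per array) */

        int main(void) {
            double *a = malloc(N * sizeof *a);
            double *b = malloc(N * sizeof *b);
            double *c = malloc(N * sizeof *c);
            if (!a || !b || !c) return 1;

            for (long i = 0; i < N; i++) { b[i] = 1.0; c[i] = 2.0; }

            double q = 3.0;
            struct timespec t0, t1;
            clock_gettime(CLOCK_MONOTONIC, &t0);
            for (long i = 0; i < N; i++)
                a[i] = b[i] + q * c[i];                     /* triad */
            clock_gettime(CLOCK_MONOTONIC, &t1);

            double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) * 1e-9;
            double gbytes = 3.0 * N * sizeof(double) / 1e9;
            printf("triad: %.4f s, ~%.1f GB/s, a[0] = %g\n", secs, gbytes / secs, a[0]);

            free(a); free(b); free(c);
            return 0;
        }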

  3. Multiprocessing on supercomputers for computational aerodynamics

    Science.gov (United States)

    Yarrow, Maurice; Mehta, Unmeel B.

    1991-01-01

    Little use is made of multiple processors available on current supercomputers (computers with a theoretical peak performance capability equal to 100 MFLOPS or more) to improve turnaround time in computational aerodynamics. The productivity of a computer user is directly related to this turnaround time. In a time-sharing environment, such improvement in this speed is achieved when multiple processors are used efficiently to execute an algorithm. The concept of multiple instructions and multiple data (MIMD) is applied through multitasking via a strategy that requires relatively minor modifications to an existing code for a single processor. This approach maps the available memory to multiple processors, exploiting the C-Fortran-Unix interface. The existing code is mapped without the need for developing a new algorithm. The procedure for building a code utilizing this approach is automated with the Unix stream editor.

  4. The PMS project Poor Man's Supercomputer

    CERN Document Server

    Csikor, Ferenc; Hegedüs, P; Horváth, V K; Katz, S D; Piróth, A

    2001-01-01

    We briefly describe the Poor Man's Supercomputer (PMS) project that is carried out at Eotvos University, Budapest. The goal is to develop a cost-effective, scalable, fast parallel computer to perform numerical calculations of physical problems that can be implemented on a lattice with nearest neighbour interactions. To reach this goal we developed the PMS architecture using PC components and designed a special, low-cost communication hardware and the driver software for Linux OS. Our first implementation of the PMS includes 32 nodes (PMS1). The performance of the PMS1 was tested by Lattice Gauge Theory simulations. Using SU(3) pure gauge theory or bosonic MSSM on the PMS1 computer we obtained a price-per-sustained-performance ratio of $3/Mflops. The design of the special hardware and the communication driver are freely available upon request for non-profit organizations.

  5. The BlueGene/L Supercomputer

    CERN Document Server

    Bhanot, G V; Gara, A; Vranas, P M; Bhanot, Gyan; Chen, Dong; Gara, Alan; Vranas, Pavlos

    2002-01-01

    The architecture of the BlueGene/L massively parallel supercomputer is described. Each computing node consists of a single compute ASIC plus 256 MB of external memory. The compute ASIC integrates two 700 MHz PowerPC 440 integer CPU cores, two 2.8 Gflops floating point units, 4 MB of embedded DRAM as cache, a memory controller for external memory, six 1.4 Gbit/s bi-directional ports for a 3-dimensional torus network connection, three 2.8 Gbit/s bi-directional ports for connecting to a global tree network and a Gigabit Ethernet for I/O. 65,536 of such nodes are connected into a 3-d torus with a geometry of 32x32x64. The total peak performance of the system is 360 Teraflops and the total amount of memory is 16 TeraBytes.
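
    The system totals quoted above are consistent with the per-node figures; a back-of-the-envelope check (my arithmetic, with the peak figure rounded as in the abstract):

    \[
      65{,}536 \times 2 \times 2.8\ \text{Gflops} \approx 367\ \text{Tflops} \ (\text{quoted as } 360\ \text{Teraflops peak}),
      \qquad
      65{,}536 \times 256\ \text{MB} = 16\ \text{TB}.
    \]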

  6. San Diego supercomputer center reaches data transfer milestone

    CERN Multimedia

    2002-01-01

    The SDSC's huge, updated tape storage system has illustrated its effectiveness by transferring data at 828 megabytes per second making it the fastest data archive system according to program director Phil Andrews (1/2 page).

  7. Content validity of CASA-Q cough domains and UCSD-SOBQ for use in patients with Idiopathic Pulmonary Fibrosis.

    Science.gov (United States)

    Gries, Katharine Suzanne; Esser, Dirk; Wiklund, Ingela

    2013-09-16

    The study objective was to assess the content validity of the Cough and Sputum Assessment Questionnaire (CASA-Q) cough domains and the UCSD Shortness of Breath Questionnaire (SOBQ) for use in patients with Idiopathic Pulmonary Fibrosis (IPF). Cross-sectional, qualitative study with cognitive interviews in patients with IPF. Study outcomes included relevance, comprehension of item meaning, understanding of the instructions, recall period, response options, and concept saturation. Interviews were conducted with 18 IPF patients. The mean age was 68.9 years (SD 11.9), 77.8% were male, and 88.9% were Caucasian. The intended meaning of the CASA-Q cough domain items was clearly understood by most of the participants (89-100%). All participants understood the CASA-Q instructions; the correct recall period was reported by 89% of the patients, and the response options were understood by 76%. The intended meaning of the UCSD-SOBQ items was relevant and clearly understood by all participants. Participants understood the instructions (83%) and all patients understood the response options (100%). The reported recall period varied based on the type of activity performed. No concepts were missing, suggesting that saturation was demonstrated for both measures. This study provides evidence for content validity for the CASA-Q cough domains and the UCSD-SOBQ for patients with IPF. Items of both questionnaires were understood and perceived as relevant to measure the key symptoms of IPF. The results of this study support the use of these instruments in IPF clinical trials as well as further studies of their psychometric properties.

  8. The TianHe-1A Supercomputer: Its Hardware and Software

    Institute of Scientific and Technical Information of China (English)

    Xue-Jun Yang; Xiang-Ke Liao; Kai Lu; Qing-Feng Hu; Jun-Qiang Song; Jin-Shu Su

    2011-01-01

    This paper presents an overview of the TianHe-1A (TH-1A) supercomputer, which was built by the National University of Defense Technology of China (NUDT). TH-1A adopts a hybrid architecture integrating CPUs and GPUs, and its interconnect network is a proprietary high-speed communication network. The theoretical peak performance of TH-1A is 4700 TFlops, and its LINPACK test result is 2566 TFlops. It was ranked No. 1 on the TOP500 list released in November 2010. TH-1A is now deployed in the National Supercomputer Center in Tianjin and provides high performance computing services. TH-1A has played an important role in many applications, such as oil exploration, weather forecasting, and biomedical research.

  9. World's biggest 'virtual supercomputer' given the go-ahead

    CERN Multimedia

    2003-01-01

    "The Particle Physics and Astronomy Research Council has today announced GBP 16 million to create a massive computing Grid, equivalent to the world's second largest supercomputer after Japan's Earth Simulator computer" (1 page).

  10. Accelerating Science Impact through Big Data Workflow Management and Supercomputing

    Science.gov (United States)

    De, K.; Klimentov, A.; Maeno, T.; Mashinistov, R.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Ryabinkin, E.; Wenaus, T.

    2016-02-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. ATLAS, one of the largest collaborations ever assembled in the history of science, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. To manage the workflow for all data processing on hundreds of data centers, the PanDA (Production and Distributed Analysis) Workload Management System is used. An ambitious program to expand PanDA to all available computing resources, including opportunistic use of commercial and academic clouds and Leadership Computing Facilities (LCF), is being realized within the BigPanDA and megaPanDA projects. These projects are now exploring how PanDA might be used for managing computing jobs that run on supercomputers including OLCF's Titan and NRC-KI HPC2. The main idea is to reuse, as much as possible, existing components of the PanDA system that are already deployed on the LHC Grid for analysis of physics data. The next generation of PanDA will allow many data-intensive sciences employing a variety of computing platforms to benefit from ATLAS experience and proven tools in highly scalable processing.

  11. Developing and Deploying Advanced Algorithms to Novel Supercomputing Hardware

    CERN Document Server

    Brunner, Robert J; Myers, Adam D

    2007-01-01

    The objective of our research is to demonstrate the practical usage and orders of magnitude speedup of real-world applications by using alternative technologies to support high performance computing. Currently, the main barrier to the widespread adoption of this technology is the lack of development tools and case studies, which typically impedes non-specialists who might otherwise develop applications that could leverage these technologies. By partnering with the Innovative Systems Laboratory at the National Center for Supercomputing Applications, we have obtained access to several novel technologies, including several Field-Programmable Gate Array (FPGA) systems, NVidia Graphics Processing Units (GPUs), and the STI Cell BE platform. Our goal is to not only demonstrate the capabilities of these systems, but to also serve as guides for others to follow in our path. To date, we have explored the efficacy of the SRC-6 MAP-C and MAP-E and SGI RASC Athena and RC100 reconfigurable computing platforms in supporting a two-point co...

  12. Developing Fortran Code for Kriging on the Stampede Supercomputer

    Science.gov (United States)

    Hodgess, Erin

    2016-04-01

    Kriging is easily accessible in the open source statistical language R (R Core Team, 2015) in the gstat (Pebesma, 2004) package. It works very well, but can be slow on large data sets, particularly if the prediction space is large as well. We are working on the Stampede supercomputer at the Texas Advanced Computing Center to develop code using a combination of R and the Message Passing Interface (MPI) bindings to Fortran. We have a function similar to the autofitVariogram function found in the automap (Hiemstra et al., 2008) package and it is very effective. We are comparing R with MPI/Fortran, MPI/Fortran alone, and R with the Rmpi package, which uses bindings to C. We will present results from simulation studies and real-world examples. References: Hiemstra, P.H., Pebesma, E.J., Twenhofel, C.J.W. and G.B.M. Heuvelink, 2008. Real-time automatic interpolation of ambient gamma dose rates from the Dutch Radioactivity Monitoring Network. Computers and Geosciences, accepted for publication. Pebesma, E.J., 2004. Multivariable geostatistics in S: the gstat package. Computers and Geosciences, 30: 683-691. R Core Team, 2015. R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. https://www.R-project.org/.

  13. Accelerating Science Impact through Big Data Workflow Management and Supercomputing

    Directory of Open Access Journals (Sweden)

    De K.

    2016-01-01

    Full Text Available The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. ATLAS, one of the largest collaborations ever assembled in the history of science, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. To manage the workflow for all data processing on hundreds of data centers, the PanDA (Production and Distributed Analysis) Workload Management System is used. An ambitious program to expand PanDA to all available computing resources, including opportunistic use of commercial and academic clouds and Leadership Computing Facilities (LCF), is being realized within the BigPanDA and megaPanDA projects. These projects are now exploring how PanDA might be used for managing computing jobs that run on supercomputers including OLCF’s Titan and NRC-KI HPC2. The main idea is to reuse, as much as possible, existing components of the PanDA system that are already deployed on the LHC Grid for analysis of physics data. The next generation of PanDA will allow many data-intensive sciences employing a variety of computing platforms to benefit from ATLAS experience and proven tools in highly scalable processing.

  14. Storage-Intensive Supercomputing Benchmark Study

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, J; Dossa, D; Gokhale, M; Hysom, D; May, J; Pearce, R; Yoo, A

    2007-10-30

    Critical data science applications requiring frequent access to storage perform poorly on today's computing architectures. This project addresses efficient computation of data-intensive problems in national security and basic science by exploring, advancing, and applying a new form of computing called storage-intensive supercomputing (SISC). Our goal is to enable applications that simply cannot run on current systems, and, for a broad range of data-intensive problems, to deliver an order of magnitude improvement in price/performance over today's data-intensive architectures. This technical report documents much of the work done under LDRD 07-ERD-063 Storage Intensive Supercomputing during the period 05/07-09/07. The following chapters describe: (1) a new file I/O monitoring tool iotrace developed to capture the dynamic I/O profiles of Linux processes; (2) an out-of-core graph benchmark for level-set expansion of scale-free graphs; (3) an entity extraction benchmark consisting of a pipeline of eight components; and (4) an image resampling benchmark drawn from the SWarp program in the LSST data processing pipeline. The performance of the graph and entity extraction benchmarks was measured in three different scenarios: data sets residing on the NFS file server and accessed over the network; data sets stored on local disk; and data sets stored on the Fusion I/O parallel NAND Flash array. The image resampling benchmark compared the performance of a software-only implementation with a GPU-accelerated one. In addition to the work reported here, an additional text processing application was developed that used an FPGA to accelerate n-gram profiling for language classification. The n-gram application will be presented at SC07 at the High Performance Reconfigurable Computing Technologies and Applications Workshop. The graph and entity extraction benchmarks were run on a Supermicro server housing the 40 GB NAND Flash parallel disk array, the Fusion-io. The Fusion system specs are as follows

  15. Taking ASCI supercomputing to the end game.

    Energy Technology Data Exchange (ETDEWEB)

    DeBenedictis, Erik P.

    2004-03-01

    The ASCI supercomputing program is broadly defined as running physics simulations on progressively more powerful digital computers. What happens if we extrapolate the computer technology to its end? We have developed a model for key ASCI computations running on a hypothetical computer whose technology is parameterized in ways that account for advancing technology. This model includes technology information such as Moore's Law for transistor scaling and developments in cooling technology. The model also includes limits imposed by laws of physics, such as thermodynamic limits on power dissipation, limits on cooling, and the limitation of signal propagation velocity to the speed of light. We apply this model and show that ASCI computations will advance smoothly for another 10-20 years to an 'end game' defined by thermodynamic limits and the speed of light. Performance levels at the end game will vary greatly by specific problem, but will be in the Exaflops to Zetaflops range for currently anticipated problems. We have also found an architecture that would be within a constant factor of giving optimal performance at the end game. This architecture is an evolutionary derivative of the mesh-connected microprocessor (such as ASCI Red Storm or IBM Blue Gene/L). We provide designs for the necessary enhancement to microprocessor functionality and the power-efficiency of both the processor and memory system. The technology we develop in the foregoing provides a 'perfect' computer model with which we can rate the quality of realizable computer designs, both in this writing and as a way of designing future computers. This report focuses on classical computers based on irreversible digital logic, and more specifically on algorithms that simulate space computing, irreversible logic, analog computers, and other ways to address stockpile stewardship that are outside the scope of this report.

  16. Simulating functional magnetic materials on supercomputers.

    Science.gov (United States)

    Gruner, Markus Ernst; Entel, Peter

    2009-07-22

    The recent passing of the petaflop per second landmark by the Roadrunner project at the Los Alamos National Laboratory marks a preliminary peak of an impressive world-wide development in the high-performance scientific computing sector. Also, purely academic state-of-the-art supercomputers such as the IBM Blue Gene/P at Forschungszentrum Jülich allow us nowadays to investigate large systems of the order of 10^3 spin polarized transition metal atoms by means of density functional theory. Three applications will be presented where large-scale ab initio calculations contribute to the understanding of key properties emerging from a close interrelation between structure and magnetism. The first two examples discuss the size dependent evolution of equilibrium structural motifs in elementary iron and binary Fe-Pt and Co-Pt transition metal nanoparticles, which are currently discussed as promising candidates for ultra-high-density magnetic data storage media. However, the preference for multiply twinned morphologies at smaller cluster sizes counteracts the formation of a single-crystalline L1_0 phase, which alone provides the required hard magnetic properties. The third application is concerned with the magnetic shape memory effect in the Ni-Mn-Ga Heusler alloy, which is a technologically relevant candidate for magnetomechanical actuators and sensors. In this material strains of up to 10% can be induced by external magnetic fields due to the field induced shifting of martensitic twin boundaries, requiring an extremely high mobility of the martensitic twin boundaries, but also the selection of the appropriate martensitic structure from the rich phase diagram.

  17. Supercomputing - Use Cases, Advances, The Future (1/2)

    CERN Document Server

    CERN. Geneva

    2017-01-01

    Supercomputing has become a staple of science and the poster child for aggressive developments in silicon technology, energy efficiency and programming. In this series we examine the key components of supercomputing setups and the various advances – recent and past – that made headlines and delivered bigger and bigger machines. We also take a closer look at the future prospects of supercomputing, and the extent of its overlap with high throughput computing, in the context of main use cases ranging from oil exploration to market simulation. On the first day, we will focus on the history and theory of supercomputing, the top500 list and the hardware that makes supercomputers tick. Lecturer's short bio: Andrzej Nowak has 10 years of experience in computing technologies, primarily from CERN openlab and Intel. At CERN, he managed a research lab collaborating with Intel and was part of the openlab Chief Technology Office. Andrzej also worked closely and initiated projects with the private sector (e.g. HP an...

  18. An integrated distributed processing interface for supercomputers and workstations

    Energy Technology Data Exchange (ETDEWEB)

    Campbell, J.; McGavran, L.

    1989-01-01

    Access to documentation, communication between multiple processes running on heterogeneous computers, and animation of simulations of engineering problems are typically weak in most supercomputer environments. This presentation will describe how we are improving this situation in the Computer Research and Applications group at Los Alamos National Laboratory. We have developed a tool using UNIX filters and a SunView interface that allows users simple access to documentation via mouse-driven menus. We have also developed a distributed application that integrates a two-point boundary value problem on one of our Cray Supercomputers. It is controlled and displayed graphically by a window interface running on a workstation screen. Our motivation for this research has been to improve the usual typewriter/static interface using language-independent controls to show capabilities of the workstation/supercomputer combination. 8 refs.

  19. Supercomputing: HPCMP, Performance Measures and Opportunities

    Science.gov (United States)

    2007-11-02

    [Flattened table excerpt listing HPCMP high performance computing centers and systems with processor-element (PE) counts: Redstone Technical Test Center (RTTC) - SGI Origin 3900, 24 PEs; Simulations & Analysis Facility (SIMAF) - Beowulf Linux cluster; Army Research Laboratory (ARL) MSRC - IBM P3, SGI Origin 3800, IBM P4, Linux Networx cluster (LNX1), Xeon cluster, IBM Opteron cluster, and SGI Altix cluster, with individual system sizes ranging from 128 to 2,372 PEs.]

  20. Recent results from the Swinburne supercomputer software correlator

    Science.gov (United States)

    Tingay, Steven; et al.

    I will describe the development of software correlators on the Swinburne Beowulf supercomputer and recent work using the Cray XD-1 machine. I will also describe recent Australian and global VLBI experiments that have been processed on the Swinburne software correlator, along with imaging results from these data. The role of the software correlator in Australia's eVLBI project will be discussed.

  1. Access to Supercomputers. Higher Education Panel Report 69.

    Science.gov (United States)

    Holmstrom, Engin Inel

    This survey was conducted to provide the National Science Foundation with baseline information on current computer use in the nation's major research universities, including the actual and potential use of supercomputers. Questionnaires were sent to 207 doctorate-granting institutions; after follow-ups, 167 institutions (91% of the institutions…

  2. The Sky's the Limit When Super Students Meet Supercomputers.

    Science.gov (United States)

    Trotter, Andrew

    1991-01-01

    In a few select high schools in the U.S., supercomputers are allowing talented students to attempt sophisticated research projects using simultaneous simulations of nature, culture, and technology not achievable by ordinary microcomputers. Schools can get their students online by entering contests and seeking grants and partnerships with…

  3. Pulmonary endarterectomy for chronic thromboembolic pulmonary hypertension: operative experience with 32 cases at UCSD

    Institute of Scientific and Technical Information of China (English)

    顾松; 刘岩; 苏丕雄; 翟振国; 杨媛华; 王辰; Michael M. Madani; Stuart W. Jamieson

    2010-01-01

    Objective: To analyze the perioperative data of pulmonary endarterectomy (PEA) for chronic thromboembolic pulmonary hypertension and to review the surgical experience of the University of California, San Diego (UCSD). Methods: Data from 32 PEA operations at UCSD were studied retrospectively, including 17 men and 15 women with a mean age of (47.56±16.04) years and a mean disease duration of (3.90±4.61) years; 15 patients had a history of deep venous thrombosis. The operations were performed under general anesthesia through a median sternotomy, using deep hypothermia with intermittent circulatory arrest and bilateral pulmonary endarterectomy. Results: According to the Jamieson classification of intraoperative pathological specimens, type I accounted for 21.8%, type II for 28.1%, and type III for 37.5% of cases. Mean cardiopulmonary bypass time was (236.32±37.27) min, aortic cross-clamp time (111.69±28.14) min, and circulatory arrest time (38.00±13.58) min. Postoperative mechanical ventilation lasted (66.23±99.24) h and ICU stay (4.62±4.50) days, with no deaths. Pulmonary artery systolic pressure decreased from (81.03±16.92) mm Hg (1 mm Hg=0.133 kPa) before surgery to (51.20±12.16) mm Hg afterwards, pulmonary vascular resistance decreased from (88.91±42.32) kPa·s·L-1 to (34.38±15.68) kPa·s·L-1, cardiac output increased from (3.65±1.08) L/min to (5.85±1.21) L/min, and central venous pressure decreased from (13.07±2.11) cmH2O (1 cmH2O=0.098 kPa) to (9.86±3.02) cmH2O. Short-term follow-up showed that cardiac function (NYHA) recovered to class I in 19 patients and class II in 13, with markedly improved quality of life. Conclusion: PEA is an important treatment for chronic thromboembolic pulmonary hypertension, and its success rate is improving year by year; deep hypothermia, intermittent circulatory arrest, bilateral pulmonary endarterectomy, and the endarterial eversion technique constitute the standard PEA procedure. Multicenter data confirm that the operation effectively lowers pulmonary artery pressure and pulmonary vascular resistance and markedly improves hemodynamics and cardiopulmonary function. Most domestic medical centers lack sufficient surgical experience and should avoid selecting patients with pulmonary artery systolic pressure ≥100 mm Hg, pulmonary vascular resistance ≥100 kPa·s·L-1, or type III lesions for PEA.%Objective Background Pulmonary endarterectomy (PEA) is a safe and effective surgical treatment for chronic thromboembolic pulmonary hypertension. University of California at San Diego Medical Center is widely recognized as the world's leading referral center for PEA surgery with extensive surgical experience, which has surgically treated about 2400 patients till 2009

  4. Scalable parallel programming for high performance seismic simulation on petascale heterogeneous supercomputers

    Science.gov (United States)

    Zhou, Jun

    The AWP-ODC code is now ready for real-world petascale earthquake simulations. This GPU-based code has demonstrated excellent weak scaling up to the full Titan scale and achieved 2.3 PetaFLOPS of sustained single-precision computation. The production simulation demonstrated the first 0-10 Hz deterministic rough-fault simulation. Using the accelerated AWP-ODC, the Southern California Earthquake Center (SCEC) has recently created the physics-based probabilistic seismic hazard analysis model of the Los Angeles region, CyberShake 14.2, as of the time of writing of this dissertation. The tensor-valued wavefield code based on this GPU research has dramatically reduced time-to-solution, making a statewide hazard model a goal reachable with existing heterogeneous supercomputers.

  5. Cyberdyn supercomputer - a tool for imaging geodynamic processes

    Science.gov (United States)

    Pomeran, Mihai; Manea, Vlad; Besutiu, Lucian; Zlagnean, Luminita

    2014-05-01

    More and more physical processes that develop within the deep interior of our planet, yet have a significant impact on the Earth's shape and structure, are becoming subject to numerical modelling using high-performance computing facilities. Worldwide, an increasing number of research centers are deciding to make use of such powerful and fast computers to simulate complex phenomena involving fluid dynamics and to gain deeper insight into intricate problems of the Earth's evolution. With the CYBERDYN cybernetic infrastructure (CCI), the Solid Earth Dynamics Department in the Institute of Geodynamics of the Romanian Academy boldly steps into the 21st century by entering the research area of computational geodynamics. The project that made this advance possible was jointly supported by the EU and the Romanian Government through the Structural and Cohesion Funds; it lasted about three years, ending in October 2013. CCI is basically a modern high-performance Beowulf-type supercomputer (HPCC), combined with a high-performance visualization cluster (HPVC) and a GeoWall. The infrastructure is mainly structured around 1344 cores and 3 TB of RAM. The high-speed interconnect is provided by a Qlogic InfiniBand switch able to transfer up to 40 Gbps. The CCI storage component is a 40 TB Panasas NAS. The operating system is Linux (CentOS). For control and maintenance, the Bright Cluster Manager package is used. The SGE job scheduler manages the job queues. CCI has been designed for a theoretical peak performance of up to 11.2 TFlops. Speed tests showed that a high-resolution numerical model (256 × 256 × 128 FEM elements) could be solved at a mean computational speed of one time step per 30 seconds while employing only a fraction (20%) of the computing power. After passing the mandatory tests, the CCI has been involved in numerical modelling of various scenarios related to the East Carpathians tectonic and geodynamic evolution, including the Neogene magmatic activity, and the intriguing

  6. Comparative Validation of Realtime Solar Wind Forecasting Using the UCSD Heliospheric Tomography Model

    Science.gov (United States)

    MacNeice, Peter; Taktakishvili, Alexandra; Jackson, Bernard; Clover, John; Bisi, Mario; Odstrcil, Dusan

    2011-01-01

    The University of California, San Diego 3D Heliospheric Tomography Model reconstructs the evolution of heliospheric structures, and can make forecasts of solar wind density and velocity up to 72 hours in the future. The latest model version, installed and running in realtime at the Community Coordinated Modeling Center (CCMC), analyzes scintillations of meter-wavelength radio point sources recorded by the Solar-Terrestrial Environment Laboratory (STELab) together with realtime measurements of solar wind speed and density recorded by the Advanced Composition Explorer (ACE) Solar Wind Electron Proton Alpha Monitor (SWEPAM). The solution is reconstructed using tomographic techniques and a simple kinematic wind model. Since installation, the CCMC has been recording the model forecasts and comparing them with ACE measurements, and with forecasts made using other heliospheric models hosted by the CCMC. We report the preliminary results of this validation work and comparison with alternative models.

  7. Frequently updated noise threat maps created with use of supercomputing grid

    Directory of Open Access Journals (Sweden)

    Szczodrak Maciej

    2014-09-01

    Full Text Available Innovative supercomputing grid services devoted to noise threat evaluation are presented. The services described in this paper concern two issues: the first is related to noise mapping, while the second focuses on assessment of the noise dose and its influence on the human hearing system. The services were developed within the PL-Grid Plus Infrastructure, which brings together Polish academic supercomputer centers. Selected experimental results achieved with the proposed services are presented. The assessment of environmental noise threats includes creation of noise maps using either offline or online data acquired through a grid of monitoring stations. A concept for estimating source-model parameters from measured sound levels, for the purpose of creating frequently updated noise maps, is presented. Connecting the noise-mapping grid service with a distributed sensor network enables noise maps to be updated automatically for a specified time period. Moreover, a unique attribute of the developed software is the estimation of the auditory effects evoked by exposure to noise. The estimation method uses a modified psychoacoustic model of hearing and is based on the calculated noise level values and on the given exposure period. Potential use scenarios of the grid services for research or educational purposes are introduced. Presenting the predicted hearing threshold shift caused by exposure to excessive noise can raise public awareness of noise threats.
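    As a concrete illustration of the arithmetic behind such noise maps, the sketch below combines several sources' sound levels energetically into a single map-cell value. This is the standard decibel-summation formula, not code from the PL-Grid services; the function name and values are illustrative.

    ```python
    # Minimal sketch (not the PL-Grid service code): energetic summation of
    # sound pressure levels, the usual way partial source contributions are
    # combined into one noise-map cell value.
    import math

    def combine_levels_db(levels_db):
        """Combine per-source sound levels (dB) into one equivalent level."""
        if not levels_db:
            return float("-inf")  # an empty cell carries no acoustic energy
        energy = sum(10.0 ** (level / 10.0) for level in levels_db)
        return 10.0 * math.log10(energy)

    # Example: three sources contributing 60, 63 and 65 dB at one grid cell.
    print(round(combine_levels_db([60.0, 63.0, 65.0]), 1))  # ~67.9 dB
    ```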

  8. Applications of parallel supercomputers: Scientific results and computer science lessons

    Energy Technology Data Exchange (ETDEWEB)

    Fox, G.C.

    1989-07-12

    Parallel Computing has come of age with several commercial and inhouse systems that deliver supercomputer performance. We illustrate this with several major computations completed or underway at Caltech on hypercubes, transputer arrays and the SIMD Connection Machine CM-2 and AMT DAP. Applications covered are lattice gauge theory, computational fluid dynamics, subatomic string dynamics, statistical and condensed matter physics, theoretical and experimental astronomy, quantum chemistry, plasma physics, grain dynamics, computer chess, graphics ray tracing, and Kalman filters. We use these applications to compare the performance of several advanced architecture computers including the conventional CRAY and ETA-10 supercomputers. We describe which problems are suitable for which computers in terms of a matching between problem and computer architecture. This is part of a set of lessons we draw for hardware, software, and performance. We speculate on the emergence of new academic disciplines motivated by the growing importance of computers. 138 refs., 23 figs., 10 tabs.

  9. Extending ATLAS Computing to Commercial Clouds and Supercomputers

    CERN Document Server

    Nilsson, P; The ATLAS collaboration; Filipcic, A; Klimentov, A; Maeno, T; Oleynik, D; Panitkin, S; Wenaus, T; Wu, W

    2014-01-01

    The Large Hadron Collider will resume data collection in 2015 with substantially increased computing requirements relative to its first 2009-2013 run. A near doubling of the energy and the data rate, a high level of event pile-up, and detector upgrades will mean the number and complexity of events to be analyzed will increase dramatically. A naive extrapolation of the Run 1 experience would suggest that a 5-6 fold increase in computing resources is needed - impossible within the anticipated flat computing budgets in the near future. Consequently ATLAS is engaged in an ambitious program to expand its computing to all available resources, notably including opportunistic use of commercial clouds and supercomputers. Such resources present new challenges in managing heterogeneity, supporting data flows, parallelizing workflows, provisioning software, and other aspects of distributed computing, all while minimizing operational load. We will present the ATLAS experience to date with clouds and supercomputers, and des...

  10. Integration of Titan supercomputer at OLCF with ATLAS production system

    CERN Document Server

    Panitkin, Sergey; The ATLAS collaboration

    2016-01-01

    The PanDA (Production and Distributed Analysis) workload management system was developed to meet the scale and complexity of distributed computing for the ATLAS experiment. PanDA-managed resources are distributed worldwide, on hundreds of computing sites, with thousands of physicists accessing hundreds of Petabytes of data, and the rate of data processing already exceeds an Exabyte per year. While PanDA currently uses more than 200,000 cores at well over 100 Grid sites, future LHC data-taking runs will require more resources than Grid computing can possibly provide. Additional computing and storage resources are required. Therefore ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. In this talk we will describe a project aimed at integration of the ATLAS Production System with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA Pilot framework for job...

  11. Supercomputers ready for use as discovery machines for neuroscience

    OpenAIRE

    Kunkel, Susanne; Schmidt, Maximilian; Helias, Moritz; Eppler, Jochen Martin; Igarashi, Jun; Masumoto, Gen; Fukai, Tomoki; Ishii, Shin; Plesser, Hans Ekkehard; Morrison, Abigail; Diesmann, Markus

    2013-01-01

    NEST is a widely used tool to simulate biological spiking neural networks [1]. The simulator is subject to continuous development, which is driven by the requirements of the current neuroscientific questions. At present, a major part of the software development focuses on the improvement of the simulator's fundamental data structures in order to enable brain-scale simulations on supercomputers such as the Blue Gene system in Jülich and the K computer in Kobe. Based on our memory-u...

  12. High Performance Networks From Supercomputing to Cloud Computing

    CERN Document Server

    Abts, Dennis

    2011-01-01

    Datacenter networks provide the communication substrate for large parallel computer systems that form the ecosystem for high performance computing (HPC) systems and modern Internet applications. The design of new datacenter networks is motivated by an array of applications ranging from communication intensive climatology, complex material simulations and molecular dynamics to such Internet applications as Web search, language translation, collaborative Internet applications, streaming video and voice-over-IP. For both Supercomputing and Cloud Computing the network enables distributed applicati

  13. Utility of the UCSD Performance-based Skills Assessment-Brief Japanese version: discriminative ability and relation to neurocognition

    Directory of Open Access Journals (Sweden)

    Chika Sumiyoshi

    2014-09-01

    Full Text Available The UCSD Performance-based Skills Assessment Brief (UPSA-B) has been widely used for evaluating functional capacity in patients with schizophrenia. The utility of the battery in a wide range of cultural contexts has been of concern among developers. The current study investigated the validity of the Japanese version of the UPSA-B as a measure of functional capacity and as a co-primary measure for neurocognition. Sixty-four Japanese patients with schizophrenia and 83 healthy adults entered the study. The Japanese version of the UPSA-B (UPSA-B Japanese version) and the MATRICS Consensus Cognitive Battery Japanese version (MCCB Japanese version) were administered. Normal controls performed significantly better than patients, with large effect sizes for the Total and the subscale scores of the UPSA-B. Receiver Operating Characteristic (ROC) curve analysis revealed that the optimal cut-off point for the UPSA-B Total score was estimated at around 80. The UPSA-B Total score was significantly correlated with the MCCB Composite score and several domain scores, indicating the relationship between this co-primary measure and overall cognitive functioning in Japanese patients with schizophrenia. The results obtained here suggest that the UPSA-B Japanese version is an effective tool for evaluating disturbances of daily-living skills linked to cognitive functioning in schizophrenia, providing an identifiable cut-off point and relationships to neurocognition. Further research is warranted to evaluate the psychometric properties and response to treatment of the Japanese version of the UPSA-B.
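    For readers unfamiliar with how such a cut-off is derived, the sketch below shows a typical ROC-based estimation using Youden's J statistic. The score distributions, sample sizes and use of scikit-learn are hypothetical illustrations, not the study's actual analysis.

    ```python
    # Hypothetical sketch of ROC-based cut-off estimation (not the study's
    # actual data or code). Assumes NumPy and scikit-learn are available.
    import numpy as np
    from sklearn.metrics import roc_curve

    # y = 1 for healthy controls, 0 for patients; scores are UPSA-B totals.
    rng = np.random.default_rng(0)
    patients = rng.normal(70, 12, 64)   # illustrative distributions only
    controls = rng.normal(92, 8, 83)
    scores = np.concatenate([patients, controls])
    y = np.concatenate([np.zeros(64), np.ones(83)])

    fpr, tpr, thresholds = roc_curve(y, scores)
    best = np.argmax(tpr - fpr)          # Youden's J statistic
    print("optimal cut-off ~", round(float(thresholds[best]), 1))
    ```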

  14. From Thread to Transcontinental Computer: Disturbing Lessons in Distributed Supercomputing

    CERN Document Server

    Groen, Derek

    2015-01-01

    We describe the political and technical complications encountered during the astronomical CosmoGrid project. CosmoGrid is a numerical study on the formation of large-scale structure in the universe. The simulations are challenging due to the enormous dynamic range in spatial and temporal coordinates, as well as the enormous computer resources required. In CosmoGrid we dealt with the computational requirements by connecting up to four supercomputers via an optical network and making them operate as a single machine. This was challenging, if only for the fact that the supercomputers of our choice are separated by half the planet: three of them are scattered across Europe and the fourth is in Tokyo. The co-scheduling of multiple computers and the 'gridification' of the code enabled us to achieve an efficiency of up to $93\%$ for this distributed intercontinental supercomputer. In this work, we find that high-performance computing on a grid can be done much more effectively if the sites involved are will...

  15. Proceedings of the first energy research power supercomputer users symposium

    Energy Technology Data Exchange (ETDEWEB)

    1991-01-01

    The Energy Research Power Supercomputer Users Symposium was arranged to showcase the richness of science that has been pursued and accomplished in this program through the use of supercomputers and now high performance parallel computers over the last year: this report is the collection of the presentations given at the Symposium. "Power users" were invited by the ER Supercomputer Access Committee to show that the use of these computational tools and the associated data communications network, ESNet, goes beyond merely speeding up computations. Today the work often directly contributes to the advancement of the conceptual developments in their fields, and the computational and network resources form the very infrastructure of today's science. The Symposium also provided an opportunity, which is rare in this day of network access to computing resources, for the invited users to compare and discuss their techniques and approaches with those used in other ER disciplines. The significance of new parallel architectures was highlighted by the interesting evening talk given by Dr. Stephen Orszag of Princeton University.

  16. Porting Ordinary Applications to Blue Gene/Q Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Maheshwari, Ketan C.; Wozniak, Justin M.; Armstrong, Timothy; Katz, Daniel S.; Binkowski, T. Andrew; Zhong, Xiaoliang; Heinonen, Olle; Karpeyev, Dmitry; Wilde, Michael

    2015-08-31

    Efficiently porting ordinary applications to Blue Gene/Q supercomputers is a significant challenge. Codes are often originally developed without considering advanced architectures and related tool chains. Science needs frequently lead users to want to run large numbers of relatively small jobs (often called many-task computing, an ensemble, or a workflow), which can conflict with supercomputer configurations. In this paper, we discuss techniques developed to execute ordinary applications on leadership-class supercomputers. We use the high-performance Swift parallel scripting framework and build two workflow execution techniques: sub-jobs and main-wrap. The sub-jobs technique, built on top of the IBM Blue Gene/Q resource manager Cobalt's sub-block jobs, lets users submit multiple, independent, repeated smaller jobs within a single larger resource block. The main-wrap technique is a scheme that enables C/C++ programs to be defined as functions that are wrapped by a high-performance Swift wrapper and that are invoked as a Swift script. We discuss the needs, benefits, technicalities, and current limitations of these techniques. We further discuss the real-world science enabled by these techniques and the results obtained.
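    To make the many-task idea concrete, the sketch below launches a batch of small, independent program invocations from a single driver process. It is a generic Python illustration only, not the Swift/Cobalt sub-jobs or main-wrap machinery described in the paper; a trivial Python one-liner stands in for the user's real executable.

    ```python
    # Generic many-task sketch (illustration only; the paper's mechanism
    # uses Swift with Cobalt sub-block jobs on Blue Gene/Q).
    import subprocess
    import sys
    from concurrent.futures import ThreadPoolExecutor

    def run_task(task_id):
        # Each task is one small, independent program invocation.
        proc = subprocess.run(
            [sys.executable, "-c", f"print({task_id} ** 2)"],
            capture_output=True, text=True,
        )
        return proc.returncode

    with ThreadPoolExecutor(max_workers=16) as pool:
        codes = list(pool.map(run_task, range(64)))

    print(sum(c == 0 for c in codes), "of", len(codes), "tasks succeeded")
    ```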

  17. Extracting the Textual and Temporal Structure of Supercomputing Logs

    Energy Technology Data Exchange (ETDEWEB)

    Jain, S; Singh, I; Chandra, A; Zhang, Z; Bronevetsky, G

    2009-05-26

    Supercomputers are prone to frequent faults that adversely affect their performance, reliability and functionality. System logs collected on these systems are a valuable resource of information about their operational status and health. However, their massive size, complexity, and lack of standard format make it difficult to automatically extract information that can be used to improve system management. In this work we propose a novel method to succinctly represent the contents of supercomputing logs, by using textual clustering to automatically find the syntactic structures of log messages. This information is used to automatically classify messages into semantic groups via an online clustering algorithm. Further, we describe a methodology for using the temporal proximity between groups of log messages to identify correlated events in the system. We apply our proposed methods to two large, publicly available supercomputing logs and show that our technique features nearly perfect accuracy for online log-classification and extracts meaningful structural and temporal message patterns that can be used to improve the accuracy of other log analysis techniques.
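    The sketch below shows the simplest form of the template-extraction idea: variable fields (numbers, hex values, paths) are masked so that messages with the same syntactic structure fall into the same group. It is a simplified illustration, not the paper's online clustering algorithm, and the example log lines are invented.

    ```python
    # Simplified sketch of log-template grouping (not the paper's algorithm).
    import re
    from collections import defaultdict

    def template(message):
        msg = re.sub(r"/[\w./-]+", "<PATH>", message)       # file paths
        msg = re.sub(r"0x[0-9a-fA-F]+", "<HEX>", msg)        # hex addresses
        msg = re.sub(r"\d+", "<NUM>", msg)                   # counters, IDs
        return msg

    logs = [
        "node 412 link error on port 3",
        "node 7 link error on port 11",
        "checkpoint written to /scratch/run42/ckpt.17",
    ]
    groups = defaultdict(list)
    for line in logs:
        groups[template(line)].append(line)

    for tmpl, members in groups.items():
        print(len(members), "x", tmpl)
    ```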

  18. Programming Environment for a High-Performance Parallel Supercomputer with Intelligent Communication

    OpenAIRE

    A. Gunzinger; Bäumle, B.; Frey, M.; Klebl, M.; Kocheisen, M.; Kohler, P.; Morel, R.; Müller, U.; Rosenthal, M

    1996-01-01

    At the Electronics Laboratory of the Swiss Federal Institute of Technology (ETH) in Zürich, the high-performance parallel supercomputer MUSIC (MUlti processor System with Intelligent Communication) has been developed. As applications like neural network simulation and molecular dynamics show, the Electronics Laboratory supercomputer is absolutely on par with conventional supercomputers, but electric power requirements are reduced by a factor of 1,000, weight is reduced by a factor of...

  19. Numerical simulations of astrophysical problems on massively parallel supercomputers

    Science.gov (United States)

    Kulikov, Igor; Chernykh, Igor; Glinsky, Boris

    2016-10-01

    In this paper, we present the latest version of our numerical model for simulating the dynamics of astrophysical objects, and a new realization of our AstroPhi code for Intel Xeon Phi based RSC PetaStream supercomputers. The co-design of a computational model for the description of astrophysical objects is described. The parallel implementation and scalability tests of the AstroPhi code are presented. We achieve 73% weak-scaling efficiency using 256 Intel Xeon Phi accelerators with 61,440 threads.
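    For reference, the conventional definition of weak-scaling efficiency consistent with the figure quoted above is sketched below (the authors' exact metric may differ in detail): the work per accelerator is held fixed while the number of accelerators N grows, and the single-accelerator time t_1 is compared with the N-accelerator time t_N.

    ```latex
    E_{\mathrm{weak}}(N) = \frac{t_{1}}{t_{N}} \times 100\%,
    \qquad E_{\mathrm{weak}}(256) \approx 73\%
    ```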

  20. AENEAS A Custom-built Parallel Supercomputer for Quantum Gravity

    CERN Document Server

    Hamber, H W

    1998-01-01

    Accurate Quantum Gravity calculations, based on the simplicial lattice formulation, are computationally very demanding and require vast amounts of computer resources. A custom-made 64-node parallel supercomputer capable of performing up to $2 \\times 10^{10}$ floating point operations per second has been assembled entirely out of commodity components, and has been operational for the last ten months. It will allow the numerical computation of a variety of quantities of physical interest in quantum gravity and related field theories, including the estimate of the critical exponents in the vicinity of the ultraviolet fixed point to an accuracy of a few percent.

  1. A special purpose silicon compiler for designing supercomputing VLSI systems

    Science.gov (United States)

    Venkateswaran, N.; Murugavel, P.; Kamakoti, V.; Shankarraman, M. J.; Rangarajan, S.; Mallikarjun, M.; Karthikeyan, B.; Prabhakar, T. S.; Satish, V.; Venkatasubramaniam, P. R.

    1991-01-01

    Design of general/special purpose supercomputing VLSI systems for numeric algorithm execution involves tackling two important aspects, namely their computational and communication complexities. Development of software tools for designing such systems itself becomes complex. Hence a novel design methodology has to be developed. For designing such complex systems a special purpose silicon compiler is needed in which: the computational and communication structures of different numeric algorithms are taken into account to simplify the silicon compiler design, the approach is macrocell based, and the software tools at different levels (algorithm down to the VLSI circuit layout) are integrated. In this paper a special purpose silicon (SPS) compiler based on PACUBE macrocell VLSI arrays for designing supercomputing VLSI systems is presented. It is shown that turn-around time and silicon real estate are reduced compared with silicon compilers based on PLAs, SLAs, and gate arrays. The first two silicon compiler characteristics mentioned above enable the SPS compiler to perform systolic mapping (at the macrocell level) of algorithms whose computational structures are of GIPOP (generalized inner product outer product) form. Direct systolic mapping on PLAs, SLAs, and gate arrays is very difficult as they are micro-cell based. A novel GIPOP processor is under development using this special purpose silicon compiler.

  2. Solidification in a Supercomputer: From Crystal Nuclei to Dendrite Assemblages

    Science.gov (United States)

    Shibuta, Yasushi; Ohno, Munekazu; Takaki, Tomohiro

    2015-08-01

    Thanks to the recent progress in high-performance computational environments, the range of applications of computational metallurgy is expanding rapidly. In this paper, cutting-edge simulations of solidification from atomic to microstructural levels performed on a graphics processing unit (GPU) architecture are introduced with a brief introduction to advances in computational studies on solidification. In particular, million-atom molecular dynamics simulations captured the spontaneous evolution of anisotropy in a solid nucleus in an undercooled melt and homogeneous nucleation without any inducing factor, which is followed by grain growth. At the microstructural level, the quantitative phase-field model has been gaining importance as a powerful tool for predicting solidification microstructures. In this paper, the convergence behavior of simulation results obtained with this model is discussed, in detail. Such convergence ensures the reliability of results of phase-field simulations. Using the quantitative phase-field model, the competitive growth of dendrite assemblages during the directional solidification of a binary alloy bicrystal at the millimeter scale is examined by performing two- and three-dimensional large-scale simulations by multi-GPU computation on the supercomputer, TSUBAME2.5. This cutting-edge approach using a GPU supercomputer is opening a new phase in computational metallurgy.

  3. The TeraGyroid Experiment – Supercomputing 2003

    Directory of Open Access Journals (Sweden)

    R.J. Blake

    2005-01-01

    Full Text Available Amphiphiles are molecules with hydrophobic tails and hydrophilic heads. When dispersed in solvents, they self-assemble into complex mesophases including the beautiful cubic gyroid phase. The goal of the TeraGyroid experiment was to study defect pathways and dynamics in these gyroids. The UK's supercomputing and the USA's TeraGrid facilities were coupled together, through a dedicated high-speed network, into a single computational Grid for research work that peaked around the Supercomputing 2003 conference. The gyroids were modeled using lattice Boltzmann methods, with parameter spaces explored using many 128³ and other grid-point simulations, and these data were used to inform the world's largest three-dimensional time-dependent simulation, with 1024³ grid points. The experiment generated some 2 TBytes of useful data. In terms of Grid technology the project demonstrated the migration of simulations (using Globus middleware) to and fro across the Atlantic, exploiting the availability of resources. Integration of the systems accelerated the time to insight. Distributed visualisation of the output datasets enabled the parameter space of the interactions within the complex fluid to be explored from a number of sites, informed by discourse over the Access Grid. The project was sponsored by EPSRC (UK) and NSF (USA), with trans-Atlantic optical bandwidth provided by British Telecommunications.

  4. Calibrating Building Energy Models Using Supercomputer Trained Machine Learning Agents

    Energy Technology Data Exchange (ETDEWEB)

    Sanyal, Jibonananda [ORNL; New, Joshua Ryan [ORNL; Edwards, Richard [ORNL; Parker, Lynne Edwards [ORNL

    2014-01-01

    Building Energy Modeling (BEM) is an approach to modeling the energy usage in buildings for design and retrofit purposes. EnergyPlus is the flagship Department of Energy software that performs BEM for different types of buildings. The input to EnergyPlus can often extend to on the order of a few thousand parameters, which have to be calibrated manually by an expert for realistic energy modeling. This makes calibration challenging and expensive, rendering building energy modeling infeasible for smaller projects. In this paper, we describe the Autotune research effort, which employs machine learning algorithms to generate agents for the different kinds of standard reference buildings in the U.S. building stock. The parametric space and the variety of building locations and types make this a challenging computational problem necessitating the use of supercomputers. Millions of EnergyPlus simulations are run on supercomputers and are subsequently used to train machine learning algorithms to generate agents. These agents, once created, can then run in a fraction of the time, thereby allowing cost-effective calibration of building models.
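    The sketch below illustrates the surrogate-agent idea in its simplest form: fit a fast regression model to a table of (building parameters, simulated energy use) pairs so the agent can be queried instead of re-running the full simulation. The parameter names, synthetic data and use of scikit-learn are hypothetical; this is not the Autotune pipeline itself.

    ```python
    # Hedged sketch of a simulation surrogate (not the Autotune pipeline):
    # learn a fast mapping from building parameters to simulated energy use.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(1)
    # Hypothetical parameter table: [insulation R-value, window U-factor,
    # infiltration rate]; targets stand in for simulated annual energy use.
    X = rng.uniform([10, 0.2, 0.1], [40, 2.0, 1.5], size=(5000, 3))
    y = 1e5 / X[:, 0] + 2e4 * X[:, 1] + 3e4 * X[:, 2] + rng.normal(0, 500, 5000)

    agent = RandomForestRegressor(n_estimators=200, n_jobs=-1).fit(X, y)
    print(agent.predict([[25.0, 0.8, 0.6]]))  # near-instant surrogate query
    ```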

  5. Optimizing Linpack Benchmark on GPU-Accelerated Petascale Supercomputer

    Institute of Scientific and Technical Information of China (English)

    Feng Wang; Can-Qun Yang; Yun-Fei Du; Juan Chen; Hui-Zhan Yi; Wei-Xia Xu

    2011-01-01

    In this paper we present the programming of the Linpack benchmark on the TianHe-1 system, the first petascale supercomputer system of China and the largest GPU-accelerated heterogeneous system ever attempted at the time. A hybrid programming model consisting of MPI, OpenMP and streaming computing is described to exploit the task parallelism, thread parallelism and data parallelism of the Linpack. We explain how we optimized the load distribution across the CPUs and GPUs using a two-level adaptive method and describe the implementation in detail. To overcome the low bandwidth of CPU-GPU communication, we present a software pipelining technique to hide the communication overhead. Combined with other traditional optimizations, the Linpack we developed achieved 196.7 GFLOPS on a single compute element of TianHe-1. This result is 70.1% of the peak compute capability and 3.3 times faster than the result obtained using the vendor's library. On the full configuration of TianHe-1 our optimizations resulted in a Linpack performance of 0.563 PFLOPS, which made TianHe-1 the 5th fastest supercomputer on the Top500 list in November 2009.
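    As a quick consistency check of the quoted figures (our arithmetic, not the paper's), the per-element peak implied by the 70.1% efficiency is:

    ```latex
    \frac{196.7\ \mathrm{GFLOPS}}{0.701} \approx 280.6\ \mathrm{GFLOPS\ peak\ per\ compute\ element},
    \qquad \frac{196.7}{280.6} \approx 70.1\%
    ```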

  6. Integration of Titan supercomputer at OLCF with ATLAS Production System

    CERN Document Server

    Barreiro Megino, Fernando Harald; The ATLAS collaboration

    2017-01-01

    The PanDA (Production and Distributed Analysis) workload management system was developed to meet the scale and complexity of distributed computing for the ATLAS experiment. PanDA-managed resources are distributed worldwide, on hundreds of computing sites, with thousands of physicists accessing hundreds of Petabytes of data, and the rate of data processing already exceeds an Exabyte per year. While PanDA currently uses more than 200,000 cores at well over 100 Grid sites, future LHC data taking runs will require more resources than Grid computing can possibly provide. Additional computing and storage resources are required. Therefore ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. In this talk we will describe a project aimed at integration of the ATLAS Production System with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA Pilot framework for ...

  7. Lectures in Supercomputational Neurosciences Dynamics in Complex Brain Networks

    CERN Document Server

    Graben, Peter beim; Thiel, Marco; Kurths, Jürgen

    2008-01-01

    Computational Neuroscience is a burgeoning field of research where only the combined effort of neuroscientists, biologists, psychologists, physicists, mathematicians, computer scientists, engineers and other specialists, e.g. from linguistics and medicine, seems able to expand the limits of our knowledge. The present volume is an introduction, largely from the physicists' perspective, to the subject matter with in-depth contributions by system neuroscientists. A conceptual model for complex networks of neurons is introduced that incorporates many important features of the real brain, such as various types of neurons, various brain areas, inhibitory and excitatory coupling and the plasticity of the network. The computational implementation on supercomputers, which is introduced and discussed in detail in this book, will enable the readers to modify and adapt the algorithm for their own research. Worked-out examples of applications are presented for networks of Morris-Lecar neurons to model the cortical co...

  8. Modeling the weather with a data flow supercomputer

    Science.gov (United States)

    Dennis, J. B.; Gao, G.-R.; Todd, K. W.

    1984-01-01

    A static concept of data flow architecture is considered for a supercomputer for weather modeling. The machine level instructions are loaded into specific memory locations before computation is initiated, with only one instruction active at a time. The machine would have processing element, functional unit, array memory, memory routing and distribution routing network elements all contained on microprocessors. A value-oriented algorithmic language (VAL) would be employed and would have, as basic operations, simple functions deriving results from operand values. Details of the machine language format, computations with an array and file processing procedures are outlined. A global weather model is discussed in terms of a static architecture and the potential computation rate is analyzed. The results indicate that detailed design studies are warranted to quantify costs and parts fabrication requirements.

  9. Toward the Graphics Turing Scale on a Blue Gene Supercomputer

    CERN Document Server

    McGuigan, Michael

    2008-01-01

    We investigate the raytracing performance that can be achieved on a class of Blue Gene supercomputers. We measure an 822-times speedup over a Pentium IV on a 6144-processor Blue Gene/L. We measure the computational performance as a function of the number of processors and the problem size to determine the scaling performance of the raytracing calculation on the Blue Gene. We find nontrivial scaling behavior at large numbers of processors. We discuss applications of this technology to scientific visualization with advanced lighting and high resolution. We utilize three racks of a Blue Gene/L in our calculations, which is less than three percent of the capacity of the world's largest Blue Gene computer.

  10. Direct numerical simulation of turbulence using GPU accelerated supercomputers

    Science.gov (United States)

    Khajeh-Saeed, Ali; Blair Perot, J.

    2013-02-01

    Direct numerical simulations of turbulence are optimized for up to 192 graphics processors. The results from two large GPU clusters are compared to the performance of corresponding CPU clusters. A number of important algorithm changes are necessary to access the full computational power of graphics processors and these adaptations are discussed. It is shown that the handling of subdomain communication becomes even more critical when using GPU-based supercomputers. The potential for overlap of MPI communication with GPU computation is analyzed and then optimized. Detailed timings reveal that the internal calculations are now so efficient that the operations related to MPI communication are the primary scaling bottleneck at all but the very largest problem sizes that can fit on the hardware. This work gives a glimpse of the CFD performance issues that will dominate many hardware platforms in the near future.

  11. Solving global shallow water equations on heterogeneous supercomputers.

    Science.gov (United States)

    Fu, Haohuan; Gan, Lin; Yang, Chao; Xue, Wei; Wang, Lanning; Wang, Xinliang; Huang, Xiaomeng; Yang, Guangwen

    2017-01-01

    The scientific demand for more accurate modeling of the climate system calls for more computing power to support higher resolutions, inclusion of more component models, more complicated physics schemes, and larger ensembles. As the recent improvements in computing power mostly come from the increasing number of nodes in a system and the integration of heterogeneous accelerators, how to scale the computing problems onto more nodes and various kinds of accelerators has become a challenge for model development. This paper describes our efforts on developing a highly scalable framework for performing global atmospheric modeling on heterogeneous supercomputers equipped with various accelerators, such as GPU (Graphics Processing Unit), MIC (Many Integrated Core), and FPGA (Field Programmable Gate Array) cards. We propose a generalized partition scheme of the problem domain, so as to keep a balanced utilization of both CPU resources and accelerator resources. With optimizations on both computing and memory access patterns, we manage to achieve around 8 to 20 times speedup when comparing one hybrid GPU or MIC node with one CPU node with 12 cores. Using customized FPGA-based data-flow engines, we see the potential to gain another 5 to 8 times improvement in performance. On heterogeneous supercomputers, such as Tianhe-1A and Tianhe-2, our framework is capable of achieving ideally linear scaling efficiency, and sustained double-precision performances of 581 Tflops on Tianhe-1A (using 3750 nodes) and 3.74 Pflops on Tianhe-2 (using 8644 nodes). Our study also provides an evaluation of the programming paradigm of various accelerator architectures (GPU, MIC, FPGA) for performing global atmospheric simulation, to form a picture of both the potential performance benefits and the programming efforts involved.
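    To illustrate the balanced-utilization idea in the simplest possible form, the sketch below splits grid rows between a CPU and an accelerator in proportion to their measured throughputs. The function, rates and numbers are hypothetical; the paper's generalized partition scheme is considerably more sophisticated.

    ```python
    # Simple static split of grid rows between CPU and accelerator in
    # proportion to measured throughput -- an illustration of balanced
    # partitioning, not the paper's generalized scheme.
    def split_rows(total_rows, cpu_rate, acc_rate):
        """Return (cpu_rows, accelerator_rows) proportional to throughput."""
        acc_rows = round(total_rows * acc_rate / (cpu_rate + acc_rate))
        return total_rows - acc_rows, acc_rows

    # Example: a 12-core CPU node vs. an accelerator ~8x faster (cf. the
    # 8-20x speedups reported above); rates are in grid rows per second.
    print(split_rows(total_rows=2048, cpu_rate=1.0, acc_rate=8.0))  # (228, 1820)
    ```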

  12. Virtualizing Super-Computation On-Board Uas

    Science.gov (United States)

    Salami, E.; Soler, J. A.; Cuadrado, R.; Barrado, C.; Pastor, E.

    2015-04-01

    Unmanned aerial systems (UAS, also known as UAV, RPAS or drones) have a great potential to support a wide variety of aerial remote sensing applications. Most UAS work by acquiring data using on-board sensors for later post-processing. Some require the data gathered to be downlinked to the ground in real time. However, depending on the volume of data and the cost of the communications, this latter option is not sustainable in the long term. This paper develops the concept of virtualizing super-computation on board UAS, as a method to ease operation by facilitating the downlink of high-level information products instead of raw data. Exploiting recent developments in miniaturized multi-core devices is the way to speed up on-board computation. This hardware shall satisfy size, power and weight constraints. Several technologies are appearing with promising results for high performance computing on unmanned platforms, such as the 36 cores of the TILE-Gx36 by Tilera (now EZchip) or the 64 cores of the Epiphany-IV by Adapteva. The strategy for virtualizing super-computation on board includes benchmarking for hardware selection, the software architecture and the communications-aware design. A parallelization strategy is given for the 36-core TILE-Gx36 for a UAS in a fire mission or in similar target-detection applications. The results are obtained for payload image processing algorithms and determine in real time the data snapshot to gather and transfer to the ground according to the needs of the mission, the processing time, and the consumed watts.

  13. Non-preconditioned conjugate gradient on cell and FPGA-based hybrid supercomputer nodes

    Energy Technology Data Exchange (ETDEWEB)

    Dubois, David H [Los Alamos National Laboratory; Dubois, Andrew J [Los Alamos National Laboratory; Boorman, Thomas M [Los Alamos National Laboratory; Connor, Carolyn M [Los Alamos National Laboratory

    2009-03-10

    This work presents a detailed implementation of a double precision, Non-Preconditioned, Conjugate Gradient algorithm on a Roadrunner heterogeneous supercomputer node. These nodes utilize the Cell Broadband Engine Architecture™ in conjunction with x86 Opteron™ processors from AMD. We implement a common Conjugate Gradient algorithm, on a variety of systems, to compare and contrast performance. Implementation results are presented for the Roadrunner hybrid supercomputer, SRC Computers, Inc. MAPStation SRC-6 FPGA enhanced hybrid supercomputer, and AMD Opteron only. In all hybrid implementations wall clock time is measured, including all transfer overhead and compute timings.

  14. Non-preconditioned conjugate gradient on cell and FPGA based hybrid supercomputer nodes

    Energy Technology Data Exchange (ETDEWEB)

    Dubois, David H [Los Alamos National Laboratory; Dubois, Andrew J [Los Alamos National Laboratory; Boorman, Thomas M [Los Alamos National Laboratory; Connor, Carolyn M [Los Alamos National Laboratory

    2009-01-01

    This work presents a detailed implementation of a double precision, non-preconditioned, Conjugate Gradient algorithm on a Roadrunner heterogeneous supercomputer node. These nodes utilize the Cell Broadband Engine Architecture™ in conjunction with x86 Opteron™ processors from AMD. We implement a common Conjugate Gradient algorithm, on a variety of systems, to compare and contrast performance. Implementation results are presented for the Roadrunner hybrid supercomputer, SRC Computers, Inc. MAPStation SRC-6 FPGA enhanced hybrid supercomputer, and AMD Opteron only. In all hybrid implementations wall clock time is measured, including all transfer overhead and compute timings.
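    For readers who want the algorithm itself, below is a textbook non-preconditioned Conjugate Gradient written in NumPy. It is a reference formulation of the method benchmarked in the two records above, not the Cell/Opteron or FPGA implementation.

    ```python
    # Textbook non-preconditioned Conjugate Gradient for a symmetric
    # positive-definite system Ax = b (reference formulation only).
    import numpy as np

    def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
        x = np.zeros_like(b)
        r = b - A @ x          # initial residual
        p = r.copy()           # initial search direction
        rs_old = r @ r
        for _ in range(max_iter):
            Ap = A @ p
            alpha = rs_old / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            rs_new = r @ r
            if np.sqrt(rs_new) < tol:
                break
            p = r + (rs_new / rs_old) * p
            rs_old = rs_new
        return x

    # Small SPD test system.
    A = np.array([[4.0, 1.0], [1.0, 3.0]])
    b = np.array([1.0, 2.0])
    print(conjugate_gradient(A, b))  # ~[0.0909, 0.6364]
    ```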

  15. Programming Environment for a High-Performance Parallel Supercomputer with Intelligent Communication

    Directory of Open Access Journals (Sweden)

    A. Gunzinger

    1996-01-01

    Full Text Available At the Electronics Laboratory of the Swiss Federal Institute of Technology (ETH) in Zürich, the high-performance parallel supercomputer MUSIC (MUlti processor System with Intelligent Communication) has been developed. As applications like neural network simulation and molecular dynamics show, the Electronics Laboratory supercomputer is absolutely on par with conventional supercomputers, but electric power requirements are reduced by a factor of 1,000, weight is reduced by a factor of 400, and price is reduced by a factor of 100. Software development is a key issue of such parallel systems. This article focuses on the programming environment of the MUSIC system and on its applications.

  16. Requirements for supercomputing in energy research: The transition to massively parallel computing

    Energy Technology Data Exchange (ETDEWEB)

    1993-02-01

    This report discusses: The emergence of a practical path to TeraFlop computing and beyond; requirements of energy research programs at DOE; implementation: supercomputer production computing environment on massively parallel computers; and implementation: user transition to massively parallel computing.

  17. Novel Supercomputing Approaches for High Performance Linear Algebra Using FPGAs Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Supercomputing plays a major role in many areas of science and engineering, and it has had tremendous impact for decades in areas such as aerospace, defense, energy,...

  18. SUPERCOMPUTERS FOR AIDING ECONOMIC PROCESSES WITH REFERENCE TO THE FINANCIAL SECTOR

    Directory of Open Access Journals (Sweden)

    Jerzy Balicki

    2014-12-01

    Full Text Available The article discusses the use of supercomputers to support business processes, with particular emphasis on the financial sector. Reference is made to selected projects that support economic development. In particular, we propose the use of supercomputers to run artificial intelligence methods in banking. The proposed methods, combined with modern technology, enable a significant increase in the competitiveness of enterprises and banks by adding new functionality.

  19. A novel VLSI processor architecture for supercomputing arrays

    Science.gov (United States)

    Venkateswaran, N.; Pattabiraman, S.; Devanathan, R.; Ahmed, Ashaf; Venkataraman, S.; Ganesh, N.

    1993-01-01

    Design of the processor element for general purpose massively parallel supercomputing arrays is highly complex and cost-ineffective. To overcome this, the architecture and organization of the functional units of the processor element should be such as to suit the diverse computational structures and simplify mapping of the complex communication structures of different classes of algorithms. This demands that the computation and communication structures of different classes of algorithms be unified. While unifying the different communication structures is a difficult process, analysis of a wide class of algorithms reveals that their computation structures can be expressed in terms of basic IP, IP, OP, CM, R, SM, and MAA operations. The execution of these operations is unified on the PAcube macro-cell array. Based on this PAcube macro-cell array, we present a novel processor element called the GIPOP processor, which has dedicated functional units to perform the above operations. The architecture and organization of these functional units are such as to satisfy the two important criteria mentioned above. The structure of the macro-cell and the unification process have led to a very regular and simpler design of the GIPOP processor. The production cost of the GIPOP processor is drastically reduced as it is designed on high performance mask-programmable PAcube arrays.

  20. Numerical infinities and infinitesimals in a new supercomputing framework

    Science.gov (United States)

    Sergeyev, Yaroslav D.

    2016-06-01

    Traditional computers are able to work numerically with finite numbers only. The Infinity Computer, patented recently in the USA and EU, overcomes this limitation. In fact, it is a computational device of a new kind able to work numerically not only with finite quantities but with infinities and infinitesimals as well. The new supercomputing methodology is not related to non-standard analysis and does not use either Cantor's infinite cardinals or ordinals. It is founded on Euclid's Common Notion 5, which says that 'the whole is greater than the part'. This postulate is applied to all numbers (finite, infinite, and infinitesimal) and to all sets and processes (finite and infinite). It is shown that it becomes possible to write down finite, infinite, and infinitesimal numbers with a finite number of symbols, as numerals belonging to a positional numeral system with an infinite radix described by a specific ad hoc introduced axiom. Numerous examples of the usage of the introduced computational tools are given during the lecture. In particular, algorithms for solving optimization problems and ODEs are considered among the computational applications of the Infinity Computer. Numerical experiments executed on a software prototype of the Infinity Computer are discussed.
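    As an illustration of the kind of numeral the abstract refers to, a number in this positional system with an infinite radix (Sergeyev's grossone, written below as a circled 1) mixes infinite, finite and infinitesimal parts. The coefficients are made up and the notation is only sketched from the cited line of work, not reproduced from the lecture itself.

    ```latex
    X \;=\; 5.3\,\textcircled{1}^{2} \;+\; 7\,\textcircled{1}^{0} \;+\; 4.1\,\textcircled{1}^{-1}
    ```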

  1. Micro-mechanical Simulations of Soils using Massively Parallel Supercomputers

    Directory of Open Access Journals (Sweden)

    David W. Washington

    2004-06-01

    Full Text Available In this research a computer program, Trubal version 1.51, based on the Discrete Element Method was converted to run on a Connection Machine (CM-5), a massively parallel supercomputer with 512 nodes, to expedite the computational times of simulating geotechnical boundary value problems. The dynamic memory algorithm in the Trubal program did not perform efficiently on the CM-2 machine with its Single Instruction Multiple Data (SIMD) architecture. This was due to the communication overhead involving global array reductions, global array broadcasts and random data movement. Therefore, the dynamic memory algorithm in the Trubal program was converted to a static memory arrangement and the program was successfully converted to run on CM-5 machines. The converted program was called "TRUBAL for Parallel Machines (TPM)." Simulating two physical triaxial experiments and comparing simulation results with Trubal simulations validated the TPM program. With a 512-node CM-5 machine, TPM produced a nine-fold speedup, demonstrating the inherent parallelism within algorithms based on the Discrete Element Method.

  2. Astrophysical Supercomputing with GPUs: Critical Decisions for Early Adopters

    Science.gov (United States)

    Fluke, Christopher J.; Barnes, David G.; Barsdell, Benjamin R.; Hassan, Amr H.

    2011-01-01

    General-purpose computing on graphics processing units (GPGPU) is dramatically changing the landscape of high performance computing in astronomy. In this paper, we identify and investigate several key decision areas, with a goal of simplifying the early adoption of GPGPU in astronomy. We consider the merits of OpenCL as an open standard in order to reduce risks associated with coding in a native, vendor-specific programming environment, and present a GPU programming philosophy based on using brute force solutions. We assert that effective use of new GPU-based supercomputing facilities will require a change in approach from astronomers. This will likely include improved programming training, an increased need for software development best practice through the use of profiling and related optimisation tools, and a greater reliance on third-party code libraries. As with any new technology, those willing to take the risks and make the investment of time and effort to become early adopters of GPGPU in astronomy, stand to reap great benefits.

  3. Astrophysical Supercomputing with GPUs: Critical Decisions for Early Adopters

    CERN Document Server

    Fluke, Christopher J; Barsdell, Benjamin R; Hassan, Amr H

    2010-01-01

    General purpose computing on graphics processing units (GPGPU) is dramatically changing the landscape of high performance computing in astronomy. In this paper, we identify and investigate several key decision areas, with a goal of simplifying the early adoption of GPGPU in astronomy. We consider the merits of OpenCL as an open standard in order to reduce risks associated with coding in a native, vendor-specific programming environment, and present a GPU programming philosophy based on using brute force solutions. We assert that effective use of new GPU-based supercomputing facilities will require a change in approach from astronomers. This will likely include improved programming training, an increased need for software development best-practice through the use of profiling and related optimisation tools, and a greater reliance on third-party code libraries. As with any new technology, those willing to take the risks, and make the investment of time and effort to become early adopters of GPGPU in astronomy, s...

  4. Using the multistage cube network topology in parallel supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Siegel, H.J.; Nation, W.G. (Purdue Univ., Lafayette, IN (USA). School of Electrical Engineering); Kruskal, C.P. (Maryland Univ., College Park, MD (USA). Dept. of Computer Science); Napolitano, L.M. Jr. (Sandia National Labs., Livermore, CA (USA))

    1989-12-01

    A variety of approaches to designing the interconnection network to support communications among the processors and memories of supercomputers employing large-scale parallel processing have been proposed and/or implemented. These approaches are often based on the multistage cube topology. This topology is the subject of much ongoing research and study because of the ways in which the multistage cube can be used. The attributes of the topology that make it useful are described. These include O(N log₂ N) cost for an N input/output network, decentralized control, a variety of implementation options, good data permuting capability to support single instruction stream/multiple data stream (SIMD) parallelism, good throughput to support multiple instruction stream/multiple data stream (MIMD) parallelism, and ability to be partitioned into independent subnetworks to support reconfigurable systems. Examples of existing systems that use multistage cube networks are overviewed. The multistage cube topology can be converted into a single-stage network by associating with each switch in the network a processor (and a memory). Properties of systems that use the multistage cube network in this way are also examined.
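    The decentralized control mentioned above is usually realized with destination-tag (self-routing) switching: each 2x2 switch looks at one bit of the destination address, so no global routing table is needed. The sketch below simulates this for an omega-style multistage cube network; it is a generic illustration under that assumption, not the routing logic of any particular machine cited.

    ```python
    # Destination-tag (self-routing) through an N-input omega-style
    # multistage cube network of 2x2 switches: log2(N) stages, each
    # preceded by a perfect shuffle, each switch set by one destination bit.
    def route(source, dest, n_bits):
        """Return the sequence of positions a message visits, stage by stage."""
        mask = (1 << n_bits) - 1
        pos, path = source, [source]
        for stage in range(n_bits):
            pos = ((pos << 1) | (pos >> (n_bits - 1))) & mask  # perfect shuffle
            bit = (dest >> (n_bits - 1 - stage)) & 1           # next dest bit (MSB first)
            pos = (pos & ~1) | bit                             # 2x2 switch setting
            path.append(pos)
        return path

    # 8-input network (log2 N = 3 stages): route terminal 3 to terminal 6.
    print(route(source=3, dest=6, n_bits=3))  # [3, 7, 7, 6]
    ```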

  5. Supercomputers ready for use as discovery machines for neuroscience

    Directory of Open Access Journals (Sweden)

    Moritz eHelias

    2012-11-01

    Full Text Available NEST is a widely used tool to simulate biological spiking neural networks. Here we explain the improvements, guided by a mathematical model of memory consumption, that enable us to exploit for the first time the computational power of the K supercomputer for neuroscience. Multi-threaded components for wiring and simulation combine 8 cores per MPI process to achieve excellent scaling. K is capable of simulating networks corresponding to a brain area with 10^8 neurons and 10^12 synapses in the worst-case scenario of random connectivity; for larger networks of the brain its hierarchical organization can be exploited to constrain the number of communicating computer nodes. We discuss the limits of the software technology, comparing maximum-filling scaling plots for K and the JUGENE BG/P system. The usability of these machines for network simulations has become comparable to running simulations on a single PC. Turn-around times in the range of minutes even for the largest systems enable a quasi-interactive working style and render simulations on this scale a practical tool for computational neuroscience.

  6. Supercomputers ready for use as discovery machines for neuroscience.

    Science.gov (United States)

    Helias, Moritz; Kunkel, Susanne; Masumoto, Gen; Igarashi, Jun; Eppler, Jochen Martin; Ishii, Shin; Fukai, Tomoki; Morrison, Abigail; Diesmann, Markus

    2012-01-01

    NEST is a widely used tool to simulate biological spiking neural networks. Here we explain the improvements, guided by a mathematical model of memory consumption, that enable us to exploit for the first time the computational power of the K supercomputer for neuroscience. Multi-threaded components for wiring and simulation combine 8 cores per MPI process to achieve excellent scaling. K is capable of simulating networks corresponding to a brain area with 10(8) neurons and 10(12) synapses in the worst case scenario of random connectivity; for larger networks of the brain its hierarchical organization can be exploited to constrain the number of communicating computer nodes. We discuss the limits of the software technology, comparing maximum filling scaling plots for K and the JUGENE BG/P system. The usability of these machines for network simulations has become comparable to running simulations on a single PC. Turn-around times in the range of minutes even for the largest systems enable a quasi interactive working style and render simulations on this scale a practical tool for computational neuroscience.

  7. Economics of data center optics

    Science.gov (United States)

    Huff, Lisa

    2016-03-01

    Traffic to and from data centers is now reaching Zettabytes/year. Even the smallest of businesses now rely on data centers for revenue generation. And the largest data centers today are orders of magnitude larger than the supercomputing centers of a few years ago. Until quite recently, for most data center managers, optical data centers were nice to dream about, but not really essential. Today, the all-optical data center - perhaps even an all-single-mode-fiber (SMF) data center - is something that even managers of medium-sized data centers should be considering. Economical transceivers are the key to increased adoption of data center optics. An analysis of current and near-future data center optics economics will be discussed in this paper.

  8. Holistic Approach to Data Center Energy Efficiency

    Energy Technology Data Exchange (ETDEWEB)

    Hammond, Steven W [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-09-18

    This presentation discusses NREL's Energy Systems Integration Facility and NREL's holistic design approach to sustainable data centers, which led to the world's most energy-efficient data center. It describes Peregrine, a warm-water liquid-cooled supercomputer, waste-heat reuse in the data center, the demonstrated PUE and ERE, and lessons learned during four years of operation.
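    For context, the two efficiency metrics named above are conventionally defined (per The Green Grid) as below; the record itself does not spell them out.

    ```latex
    \mathrm{PUE} = \frac{E_{\mathrm{total\ facility}}}{E_{\mathrm{IT\ equipment}}},
    \qquad
    \mathrm{ERE} = \frac{E_{\mathrm{total\ facility}} - E_{\mathrm{reused}}}{E_{\mathrm{IT\ equipment}}}
    ```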

  9. The TESS science processing operations center

    Science.gov (United States)

    Jenkins, Jon M.; Twicken, Joseph D.; McCauliff, Sean; Campbell, Jennifer; Sanderfer, Dwight; Lung, David; Mansouri-Samani, Masoud; Girouard, Forrest; Tenenbaum, Peter; Klaus, Todd; Smith, Jeffrey C.; Caldwell, Douglas A.; Chacon, A. D.; Henze, Christopher; Heiges, Cory; Latham, David W.; Morgan, Edward; Swade, Daryl; Rinehart, Stephen; Vanderspek, Roland

    2016-08-01

    The Transiting Exoplanet Survey Satellite (TESS) will conduct a search for Earth's closest cousins starting in early 2018 and is expected to discover 1,000 small planets with Rp < 4 R⊕. The Science Processing Operations Center (SPOC) is being developed at NASA Ames Research Center based on the Kepler science pipeline and will generate calibrated pixels and light curves on the NASA Advanced Supercomputing Division's Pleiades supercomputer. The SPOC will also search for periodic transit events and generate validation products for the transit-like features in the light curves. All TESS SPOC data products will be archived to the Mikulski Archive for Space Telescopes (MAST).

  10. Argonne Leadership Computing Facility 2011 annual report : Shaping future supercomputing.

    Energy Technology Data Exchange (ETDEWEB)

    Papka, M.; Messina, P.; Coffey, R.; Drugan, C. (LCF)

    2012-08-16

    The ALCF's Early Science Program aims to prepare key applications for the architecture and scale of Mira and to solidify libraries and infrastructure that will pave the way for other future production applications. Two billion core-hours have been allocated to 16 Early Science projects on Mira. The projects, in addition to promising delivery of exciting new science, are all based on state-of-the-art, petascale, parallel applications. The project teams, in collaboration with ALCF staff and IBM, have undertaken intensive efforts to adapt their software to take advantage of Mira's Blue Gene/Q architecture, which, in a number of ways, is a precursor to future high-performance-computing architecture. The Argonne Leadership Computing Facility (ALCF) enables transformative science that solves some of the most difficult challenges in biology, chemistry, energy, climate, materials, physics, and other scientific realms. Users partnering with ALCF staff have reached research milestones previously unattainable, due to the ALCF's world-class supercomputing resources and expertise in computation science. In 2011, the ALCF's commitment to providing outstanding science and leadership-class resources was honored with several prestigious awards. Research on multiscale brain blood flow simulations was named a Gordon Bell Prize finalist. Intrepid, the ALCF's BG/P system, ranked No. 1 on the Graph 500 list for the second consecutive year. The next-generation BG/Q prototype again topped the Green500 list. Skilled experts at the ALCF enable researchers to conduct breakthrough science on the Blue Gene system in key ways. The Catalyst Team matches project PIs with experienced computational scientists to maximize and accelerate research in their specific scientific domains. The Performance Engineering Team facilitates the effective use of applications on the Blue Gene system by assessing and improving the algorithms used by applications and the techniques used to

  11. Data mining method for anomaly detection in the supercomputer task flow

    Science.gov (United States)

    Voevodin, Vadim; Voevodin, Vladimir; Shaikhislamov, Denis; Nikitenko, Dmitry

    2016-10-01

    The efficiency of most supercomputer applications is extremely low. At the same time, the user rarely even suspects that their applications may be wasting computing resources. Software tools need to be developed to help detect inefficient applications and report them to the users. We suggest an algorithm for detecting anomalies in the supercomputer's task flow, based on data mining methods. System monitoring is used to calculate integral characteristics for every job executed, and the data is used as input for our classification method based on the Random Forest algorithm. The proposed approach can currently classify each application as one of three classes: normal, suspicious and definitely anomalous. The proposed approach has been demonstrated on actual applications running on the "Lomonosov" supercomputer.
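
    A minimal sketch of this classification idea is shown below (the feature names and toy training values are hypothetical placeholders, not the authors' actual monitoring characteristics):

        # Minimal sketch: classify finished jobs as normal / suspicious / anomalous
        # from integral monitoring characteristics using a Random Forest.
        # The features and toy training data below are illustrative assumptions.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        # One row per finished job: [mean CPU load, memory bandwidth (GB/s),
        # network traffic (MB/s), I/O rate (MB/s)] -- hypothetical feature set.
        X_train = np.array([
            [0.95, 40.0, 120.0, 5.0],   # normal
            [0.90, 35.0, 100.0, 4.0],   # normal
            [0.20,  2.0,   1.0, 0.1],   # definitely anomalous (resources mostly idle)
            [0.55, 10.0,  10.0, 0.5],   # suspicious
        ])
        y_train = ["normal", "normal", "anomalous", "suspicious"]

        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        clf.fit(X_train, y_train)

        new_job = np.array([[0.25, 3.0, 2.0, 0.2]])
        print(clf.predict(new_job))     # -> ['anomalous'] for this toy example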

  12. Guide to dataflow supercomputing basic concepts, case studies, and a detailed example

    CERN Document Server

    Milutinovic, Veljko; Trifunovic, Nemanja; Giorgi, Roberto

    2015-01-01

    This unique text/reference describes an exciting and novel approach to supercomputing in the DataFlow paradigm. The major advantages and applications of this approach are clearly described, and a detailed explanation of the programming model is provided using simple yet effective examples. The work is developed from a series of lecture courses taught by the authors in more than 40 universities across more than 20 countries, and from research carried out by Maxeler Technologies, Inc. Topics and features: presents a thorough introduction to DataFlow supercomputing for big data problems; reviews…

  13. Inclusion of In-Situ Velocity Measurements into the UCSD Time-Dependent Tomography to Constrain and Better-Forecast Remote-Sensing Observations

    Science.gov (United States)

    Jackson, B. V.; Hick, P. P.; Bisi, M. M.; Clover, J. M.; Buffington, A.

    2010-08-01

    The University of California, San Diego (UCSD) three-dimensional (3-D) time-dependent tomography program has been used successfully for a decade to reconstruct and forecast coronal mass ejections from interplanetary scintillation observations. More recently, we have extended this tomography technique to use remote-sensing data from the Solar Mass Ejection Imager (SMEI) on board the Coriolis spacecraft; from the Ootacamund (Ooty) radio telescope in India; and from the European Incoherent SCATter (EISCAT) radar telescopes in northern Scandinavia. Finally, we intend these analyses to be used with observations from the Murchison Widefield Array (MWA), or the LOw Frequency ARray (LOFAR) now being developed respectively in Australia and Europe. In this article we demonstrate how in-situ velocity measurements from the Advanced Composition Explorer (ACE) space-borne instrumentation can be used in addition to remote-sensing data to constrain the time-dependent tomographic solution. Supplementing the remote-sensing observations with in-situ measurements provides additional information to construct an iterated solar-wind parameter that is propagated outward from near the solar surface past the measurement location, and throughout the volume. While the largest changes within the volume are close to the radial directions that incorporate the in-situ measurements, their inclusion significantly reduces the uncertainty in extending these measurements to global 3-D reconstructions that are distant in time and space from the spacecraft. At Earth, this can provide a finely-tuned real-time measurement up to the latest time for which in-situ measurements are available, and enables more-accurate forecasting beyond this than remote-sensing observations alone allow.

  14. Supercomputing '91; Proceedings of the 4th Annual Conference on High Performance Computing, Albuquerque, NM, Nov. 18-22, 1991

    Science.gov (United States)

    1991-01-01

    Various papers on supercomputing are presented. The general topics addressed include: program analysis/data dependence, memory access, distributed memory code generation, numerical algorithms, supercomputer benchmarks, latency tolerance, parallel programming, applications, processor design, networks, performance tools, mapping and scheduling, characterization affecting performance, parallelism packaging, computing climate change, combinatorial algorithms, hardware and software performance issues, system issues. (No individual items are abstracted in this volume)

  15. High-Performance Computing: Industry Uses of Supercomputers and High-Speed Networks. Report to Congressional Requesters.

    Science.gov (United States)

    General Accounting Office, Washington, DC. Information Management and Technology Div.

    This report was prepared in response to a request for information on supercomputers and high-speed networks from the Senate Committee on Commerce, Science, and Transportation, and the House Committee on Science, Space, and Technology. The following information was requested: (1) examples of how various industries are using supercomputers to…

  16. Argonne National Lab deploys Force10 networks' massively dense ethernet switch for supercomputing cluster

    CERN Multimedia

    2003-01-01

    "Force10 Networks, Inc. today announced that Argonne National Laboratory (Argonne, IL) has successfully deployed Force10 E-Series switch/routers to connect to the TeraGrid, the world's largest supercomputing grid, sponsored by the National Science Foundation (NSF)" (1/2 page).

  17. Design and performance characterization of electronic structure calculations on massively parallel supercomputers

    DEFF Research Database (Denmark)

    Romero, N. A.; Glinsvad, Christian; Larsen, Ask Hjorth

    2013-01-01

    Density functional theory (DFT) is the most widely employed electronic structure method because of its favorable scaling with system size and accuracy for a broad range of molecular and condensed-phase systems. The advent of massively parallel supercomputers has enhanced the scientific community's ...

  18. Performance modeling of hybrid MPI/OpenMP scientific applications on large-scale multicore supercomputers

    KAUST Repository

    Wu, Xingfu

    2013-12-01

    In this paper, we present a performance modeling framework based on memory bandwidth contention time and a parameterized communication model to predict the performance of OpenMP, MPI and hybrid applications with weak scaling on three large-scale multicore supercomputers: IBM POWER4, POWER5+ and BlueGene/P, and analyze the performance of these MPI, OpenMP and hybrid applications. We use STREAM memory benchmarks and Intel's MPI benchmarks to provide initial performance analysis and model validation of MPI and OpenMP applications on these multicore supercomputers because the measured sustained memory bandwidth can provide insight into the memory bandwidth that a system should sustain on scientific applications with the same amount of workload per core. In addition to using these benchmarks, we also use a weak-scaling hybrid MPI/OpenMP large-scale scientific application: Gyrokinetic Toroidal Code (GTC) in magnetic fusion to validate our performance model of the hybrid application on these multicore supercomputers. The validation results for our performance modeling method show less than 7.77% error rate in predicting the performance of hybrid MPI/OpenMP GTC on up to 512 cores on these multicore supercomputers. © 2013 Elsevier Inc.
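
    In schematic form (hedged, illustrative notation rather than the paper's exact model), such a framework decomposes the runtime into compute, memory-contention and communication terms, with the contention term bounded by the sustained STREAM bandwidth shared among the p cores of a node:

        T_{\text{total}} \approx T_{\text{comp}} + T_{\text{mem}} + T_{\text{comm}},
        \qquad
        T_{\text{mem}} \approx \frac{V_{\text{mem}}}{BW_{\text{STREAM}} / p}

    where V_mem denotes the memory traffic generated per core and BW_STREAM the measured sustained memory bandwidth of the node.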

  19. The impact of the U.S. supercomputing initiative will be global

    Energy Technology Data Exchange (ETDEWEB)

    Crawford, Dona [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-01-15

    Last July, President Obama issued an executive order that created a coordinated federal strategy for high-performance computing (HPC) research, development, and deployment, called the U.S. National Strategic Computing Initiative (NSCI). This bold, necessary step toward building the next generation of supercomputers has inaugurated a new era for U.S. HPC.

  20. Congressional Panel Seeks To Curb Access of Foreign Students to U.S. Supercomputers.

    Science.gov (United States)

    Kiernan, Vincent

    1999-01-01

    Fearing security problems, a congressional committee on Chinese espionage recommends that foreign students and other foreign nationals be barred from using supercomputers at national laboratories unless they first obtain export licenses from the federal government. University officials dispute the data on which the report is based and find the…

  1. [Experience in simulating the structural and dynamic features of small proteins using table supercomputers].

    Science.gov (United States)

    Kondrat'ev, M S; Kabanov, A V; Komarov, V M; Khechinashvili, N N; Samchenko, A A

    2011-01-01

    We present the results of theoretical studies of the structural and dynamic features of peptides and small proteins, carried out by quantum chemical and molecular dynamics methods on high-performance graphics workstations ("table supercomputers") using distributed calculations based on CUDA technology.

  2. Purdue, UCSD achieve networking milestones

    CERN Document Server

    Kapp, Jennifer

    2006-01-01

    "Purdue University researchers have reached new milestones in Grid interoperability through the successful integration of two Open Science Grid (OSG) sites running a scientific application over the National Science Foundation TeraGrid network." (1,5 page)

  3. Integration Of PanDA Workload Management System With Supercomputers

    CERN Document Server

    Klimentov, Alexei; The ATLAS collaboration; Maeno, Tadashi; Mashinistov, Ruslan; Nilsson, Paul; Oleynik, Danila; Panitkin, Sergey; Read, Kenneth; Ryabinkin, Evgeny; Wenaus, Torre

    2015-01-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 140 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 100,000 co...

  4. High Performance Computing in Science and Engineering '02 : Transactions of the High Performance Computing Center

    CERN Document Server

    Jäger, Willi

    2003-01-01

    This book presents the state-of-the-art in modeling and simulation on supercomputers. Leading German research groups present their results achieved on high-end systems of the High Performance Computing Center Stuttgart (HLRS) for the year 2002. Reports cover all fields of supercomputing simulation ranging from computational fluid dynamics to computer science. Special emphasis is given to industrially relevant applications. Moreover, by presenting results for both vector systems and microprocessor-based systems, the book allows the reader to compare the performance levels and usability of a variety of supercomputer architectures. It therefore becomes an indispensable guidebook for assessing the impact of the Japanese Earth Simulator project on supercomputing in the years to come.

  5. Interactive steering of supercomputing simulation for aerodynamic noise radiated from square cylinder; Supercomputer wo mochiita steering system ni yoru kakuchu kara hoshasareru kurikion no suchi kaiseki

    Energy Technology Data Exchange (ETDEWEB)

    Yokono, Y. [Toshiba Corp., Tokyo (Japan); Fujita, H. [Tokyo Inst. of Technology, Tokyo (Japan). Precision Engineering Lab.

    1995-03-25

    This paper describes an extensive computer simulation of the aerodynamic noise radiated from a square cylinder using an interactive steering supercomputing simulation system. The unsteady incompressible three-dimensional Navier-Stokes equations are solved by the finite volume method using a steering system which can visualize the numerical process during calculation and alter the numerical parameters. Using the fluctuating surface pressure of the square cylinder, the far-field sound pressure is calculated based on Lighthill-Curle's equation. The results are compared with those of low-noise wind tunnel experiments, and good agreement is observed for the peak spectrum frequency of the sound pressure level. 14 refs., 10 figs.
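
    For a compact rigid body in low-Mach-number flow, the Lighthill-Curle analogy reduces in its standard textbook form (quoted here for orientation; the paper may use a more general expression) to a dipole term driven by the time derivative of the unsteady force that the fluctuating surface pressure exerts on the cylinder:

        p'(\mathbf{x}, t) \approx \frac{x_i}{4 \pi c_0 |\mathbf{x}|^2}
        \frac{\partial F_i}{\partial t}\left(t - \frac{|\mathbf{x}|}{c_0}\right),
        \qquad
        F_i(t) = \int_S p\, n_i\, \mathrm{d}S

    where c_0 is the speed of sound and F_i is the net unsteady aerodynamic force obtained by integrating the surface pressure p over the cylinder surface S.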

  6. MPI/OpenMP Hybrid Parallel Algorithm of Resolution of Identity Second-Order Møller-Plesset Perturbation Calculation for Massively Parallel Multicore Supercomputers.

    Science.gov (United States)

    Katouda, Michio; Nakajima, Takahito

    2013-12-10

    A new algorithm for massively parallel calculations of the electron correlation energy of large molecules based on the resolution of identity second-order Møller-Plesset perturbation (RI-MP2) technique is developed and implemented in the quantum chemistry software NTChem. In this algorithm, a Message Passing Interface (MPI) and Open Multi-Processing (OpenMP) hybrid parallel programming model is applied to attain efficient parallel performance on massively parallel supercomputers. An in-core storage scheme for the intermediate data of three-center electron repulsion integrals that utilizes the distributed memory is developed to eliminate input/output (I/O) overhead. The parallel performance of the algorithm is tested on massively parallel supercomputers such as the K computer (using up to 45,992 central processing unit (CPU) cores) and a commodity Intel Xeon cluster (using up to 8,192 CPU cores). The parallel RI-MP2/cc-pVTZ calculation of two-layer nanographene sheets (C150H30)2 (number of atomic orbitals is 9,640) is performed using 8,991 nodes and 71,288 CPU cores of the K computer.

  7. Study of ATLAS TRT performance with GRID and supercomputers.

    CERN Document Server

    Krasnopevtsev, Dimitriy; The ATLAS collaboration; Mashinistov, Ruslan; Belyaev, Nikita; Ryabinkin, Evgeny

    2015-01-01

    After the early success in discovering a new particle consistent with the long awaited Higgs boson, Large Hadron Collider experiments are ready for the precision measurements and further discoveries that will be made possible by much higher LHC collision rates from spring 2015. A proper understanding of the detectors' performance at high occupancy conditions is important for many on-going physics analyses. The ATLAS Transition Radiation Tracker (TRT) is one of these detectors. TRT is a large straw tube tracking system that is the outermost of the three subsystems of the ATLAS Inner Detector (ID). TRT contributes significantly to the resolution for high-pT tracks in the ID providing excellent particle identification capabilities and electron-pion separation. The ATLAS experiment is using the Worldwide LHC Computing Grid. WLCG is a global collaboration of computer centers and provides seamless access to computing resources which include data storage capacity, processing power, sensors, visualisation tools and more. WLCG...

  8. Study of ATLAS TRT performance with GRID and supercomputers.

    CERN Document Server

    Krasnopevtsev, Dimitriy; The ATLAS collaboration; Belyaev, Nikita; Mashinistov, Ruslan; Ryabinkin, Evgeny

    2015-01-01

    After the early success in discovering a new particle consistent with the long awaited Higgs boson, Large Hadron Collider experiments are ready for the precision measurements and further discoveries that will be made possible by much higher LHC collision rates from spring 2015. A proper understanding of the detectors' performance at high occupancy conditions is important for many on-going physics analyses. The ATLAS Transition Radiation Tracker (TRT) is one of these detectors. TRT is a large straw tube tracking system that is the outermost of the three subsystems of the ATLAS Inner Detector (ID). TRT contributes significantly to the resolution for high-pT tracks in the ID providing excellent particle identification capabilities and electron-pion separation. The ATLAS experiment is using the Worldwide LHC Computing Grid. WLCG is a global collaboration of computer centers and provides seamless access to computing resources which include data storage capacity, processing power, sensors, visualization tools and more. WLCG ...

  9. Internal fluid mechanics research on supercomputers for aerospace propulsion systems

    Science.gov (United States)

    Miller, Brent A.; Anderson, Bernhard H.; Szuch, John R.

    1988-01-01

    The Internal Fluid Mechanics Division of the NASA Lewis Research Center is combining the key elements of computational fluid dynamics, aerothermodynamic experiments, and advanced computational technology to bring internal computational fluid mechanics (ICFM) to a state of practical application for aerospace propulsion systems. The strategies used to achieve this goal are to: (1) pursue an understanding of flow physics, surface heat transfer, and combustion via analysis and fundamental experiments, (2) incorporate improved understanding of these phenomena into verified 3-D CFD codes, and (3) utilize state-of-the-art computational technology to enhance experimental and CFD research. Presented is an overview of the ICFM program in high-speed propulsion, including work in inlets, turbomachinery, and chemical reacting flows. Ongoing efforts to integrate new computer technologies, such as parallel computing and artificial intelligence, into high-speed aeropropulsion research are described.

  10. Explaining the Gap between Theoretical Peak Performance and Real Performance for Supercomputer Architectures

    Directory of Open Access Journals (Sweden)

    W. Schönauer

    1994-01-01

    Full Text Available The basic architectures of vector and parallel computers and their properties are presented followed by a discussion of memory size and arithmetic operations in the context of memory bandwidth. For a single operation micromeasurements of the vector triad for the IBM 3090 VF and the CRAY Y-MP/8 are presented, revealing in detail the losses for this operation. The global performance of a whole supercomputer is then considered by identifying reduction factors that reduce the theoretical peak performance to the poor real performance. The responsibilities of the manufacturer and of the user for these losses are discussed. The price-performance ratio for different architectures as of January 1991 is briefly mentioned. Finally a user-friendly architecture for a supercomputer is proposed.

  11. HACC: Simulating Sky Surveys on State-of-the-Art Supercomputing Architectures

    CERN Document Server

    Habib, Salman; Finkel, Hal; Frontiere, Nicholas; Heitmann, Katrin; Daniel, David; Fasel, Patricia; Morozov, Vitali; Zagaris, George; Peterka, Tom; Vishwanath, Venkatram; Lukic, Zarija; Sehrish, Saba; Liao, Wei-keng

    2014-01-01

    Current and future surveys of large-scale cosmic structure are associated with a massive and complex datastream to study, characterize, and ultimately understand the physics behind the two major components of the 'Dark Universe', dark energy and dark matter. In addition, the surveys also probe primordial perturbations and carry out fundamental measurements, such as determining the sum of neutrino masses. Large-scale simulations of structure formation in the Universe play a critical role in the interpretation of the data and extraction of the physics of interest. Just as survey instruments continue to grow in size and complexity, so do the supercomputers that enable these simulations. Here we report on HACC (Hardware/Hybrid Accelerated Cosmology Code), a recently developed and evolving cosmology N-body code framework, designed to run efficiently on diverse computing architectures and to scale to millions of cores and beyond. HACC can run on all current supercomputer architectures and supports a variety of prog...

  12. Direct exploitation of a top 500 Supercomputer for Analysis of CMS Data

    Science.gov (United States)

    Cabrillo, I.; Cabellos, L.; Marco, J.; Fernandez, J.; Gonzalez, I.

    2014-06-01

    The Altamira Supercomputer hosted at the Instituto de Fisica de Cantabria (IFCA) entered operation in summer 2012. Its last-generation FDR InfiniBand network, used for message passing in parallel jobs, supports the connection to General Parallel File System (GPFS) servers, enabling efficient simultaneous processing of multiple data-demanding jobs. Sharing a common GPFS system and a single LDAP-based identification with the existing Grid clusters at IFCA allows CMS researchers to exploit the large instantaneous capacity of this supercomputer to execute analysis jobs. The detailed experience describing this opportunistic use for skimming and final analysis of CMS 2012 data for a specific physics channel, resulting in an order of magnitude reduction of the waiting time, is presented.

  13. Sandia's network for Supercomputing '95: Validating the progress of Asynchronous Transfer Mode (ATM) switching

    Energy Technology Data Exchange (ETDEWEB)

    Pratt, T.J.; Vahle, O.; Gossage, S.A.

    1996-04-01

    The Advanced Networking Integration Department at Sandia National Laboratories has used the annual Supercomputing conference sponsored by the IEEE and ACM for the past three years as a forum to demonstrate and focus communication and networking developments. For Supercomputing '95, Sandia elected to demonstrate the functionality and capability of an AT&T Globeview 20 Gbps Asynchronous Transfer Mode (ATM) switch, which represents the core of Sandia's corporate network; to build and utilize a three-node 622 megabit per second Paragon network; and to extend the DOD's ACTS ATM Internet from Sandia, New Mexico to the conference's show floor in San Diego, California, for video demonstrations. This paper documents those accomplishments, discusses the details of their implementation, and describes how these demonstrations support Sandia's overall strategies in ATM networking.

  14. Supercomputer and cluster performance modeling and analysis efforts:2004-2006.

    Energy Technology Data Exchange (ETDEWEB)

    Sturtevant, Judith E.; Ganti, Anand; Meyer, Harold (Hal) Edward; Stevenson, Joel O.; Benner, Robert E., Jr.; Goudy, Susan Phelps; Doerfler, Douglas W.; Domino, Stefan Paul; Taylor, Mark A.; Malins, Robert Joseph; Scott, Ryan T.; Barnette, Daniel Wayne; Rajan, Mahesh; Ang, James Alfred; Black, Amalia Rebecca; Laub, Thomas William; Vaughan, Courtenay Thomas; Franke, Brian Claude

    2007-02-01

    This report describes efforts by the Performance Modeling and Analysis Team to investigate performance characteristics of Sandia's engineering and scientific applications on the ASC capability and advanced architecture supercomputers, and Sandia's capacity Linux clusters. Efforts to model various aspects of these computers are also discussed. The goals of these efforts are to quantify and compare Sandia's supercomputer and cluster performance characteristics; to reveal strengths and weaknesses in such systems; and to predict performance characteristics of, and provide guidelines for, future acquisitions and follow-on systems. Described herein are the results obtained from running benchmarks and applications to extract performance characteristics and comparisons, as well as modeling efforts, obtained during the time period 2004-2006. The format of the report, with hypertext links to numerous additional documents, purposefully minimizes the document size needed to disseminate the extensive results from our research.

  15. BSMBench: a flexible and scalable supercomputer benchmark from computational particle physics

    CERN Document Server

    Bennett, Ed; Del Debbio, Luigi; Jordan, Kirk; Patella, Agostino; Pica, Claudio; Rago, Antonio

    2016-01-01

    Benchmarking plays a central role in the evaluation of High Performance Computing architectures. Several benchmarks have been designed that allow users to stress various components of supercomputers. In order for the figures they provide to be useful, benchmarks need to be representative of the most common real-world scenarios. In this work, we introduce BSMBench, a benchmarking suite derived from Monte Carlo code used in computational particle physics. The advantage of this suite (which can be freely downloaded from http://www.bsmbench.org/) over others is the capacity to vary the relative importance of computation and communication. This enables the tests to simulate various practical situations. To showcase BSMBench, we perform a wide range of tests on various architectures, from desktop computers to state-of-the-art supercomputers, and discuss the corresponding results. Possible future directions of development of the benchmark are also outlined.

  16. Towards 21st century stellar models: Star clusters, supercomputing and asteroseismology

    Science.gov (United States)

    Campbell, S. W.; Constantino, T. N.; D'Orazi, V.; Meakin, C.; Stello, D.; Christensen-Dalsgaard, J.; Kuehn, C.; De Silva, G. M.; Arnett, W. D.; Lattanzio, J. C.; MacLean, B. T.

    2016-09-01

    Stellar models provide a vital basis for many aspects of astronomy and astrophysics. Recent advances in observational astronomy - through asteroseismology, precision photometry, high-resolution spectroscopy, and large-scale surveys - are placing stellar models under greater quantitative scrutiny than ever. The model limitations are being exposed and the next generation of stellar models is needed as soon as possible. The current uncertainties in the models propagate to the later phases of stellar evolution, hindering our understanding of stellar populations and chemical evolution. Here we give a brief overview of the evolution, importance, and substantial uncertainties of core helium burning stars in particular and then briefly discuss a range of methods, both theoretical and observational, that we are using to advance the modelling. This study uses observational data from HST, VLT, AAT, Kepler, and supercomputing resources in Australia provided by the National Computational Infrastructure (NCI) and Pawsey Supercomputing Centre.

  17. Analyzing the Interplay of Failures and Workload on a Leadership-Class Supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Meneses, Esteban [University of Pittsburgh]; Ni, Xiang [University of Illinois at Urbana-Champaign]; Jones, Terry R. [ORNL]; Maxwell, Don E. [ORNL]

    2015-01-01

    The unprecedented computational power of current supercomputers now makes possible the exploration of complex problems in many scientific fields, from genomic analysis to computational fluid dynamics. Modern machines are powerful because they are massive: they assemble millions of cores and a huge quantity of disks, cards, routers, and other components. But it is precisely the size of these machines that clouds the future of supercomputing. A system that comprises many components has a high chance to fail, and fail often. In order to make the next generation of supercomputers usable, it is imperative to use some type of fault tolerance platform to run applications on large machines. Most fault tolerance strategies can be optimized for the peculiarities of each system and boost efficacy by keeping the system productive. In this paper, we aim to understand how failure characterization can improve resilience in several layers of the software stack: applications, runtime systems, and job schedulers. We examine the Titan supercomputer, one of the fastest systems in the world. We analyze a full year of Titan in production and distill the failure patterns of the machine. By looking into Titan's log files and using the criteria of experts, we provide a detailed description of the types of failures. In addition, we inspect the job submission files and describe how the system is used. Using those two sources, we cross correlate failures in the machine to executing jobs and provide a picture of how failures affect the user experience. We believe such characterization is fundamental in developing appropriate fault tolerance solutions for Cray systems similar to Titan.

  18. Enabling Diverse Software Stacks on Supercomputers using High Performance Virtual Clusters.

    Energy Technology Data Exchange (ETDEWEB)

    Younge, Andrew J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Pedretti, Kevin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Grant, Ryan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Brightwell, Ron [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-05-01

    While large-scale simulations have been the hallmark of the High Performance Computing (HPC) community for decades, Large Scale Data Analytics (LSDA) workloads are gaining attention within the scientific community not only as a processing component to large HPC simulations, but also as standalone scientific tools for knowledge discovery. With the path towards Exascale, new HPC runtime systems are also emerging in a way that differs from classical distributed computing models. However, system software for such capabilities on the latest extreme-scale DOE supercomputers needs to be enhanced to more appropriately support these types of emerging software ecosystems. In this paper, we propose the use of Virtual Clusters on advanced supercomputing resources to enable systems to support not only HPC workloads, but also emerging big data stacks. Specifically, we have deployed the KVM hypervisor within Cray's Compute Node Linux on a XC-series supercomputer testbed. We also use libvirt and QEMU to manage and provision VMs directly on compute nodes, leveraging Ethernet-over-Aries network emulation. To our knowledge, this is the first known use of KVM on a true MPP supercomputer. We investigate the overhead of our solution using HPC benchmarks, both evaluating single-node performance as well as weak scaling of a 32-node virtual cluster. Overall, we find single node performance of our solution using KVM on a Cray is very efficient with near-native performance. However, overhead increases by up to 20% as virtual cluster size increases, due to limitations of the Ethernet-over-Aries bridged network. Furthermore, we deploy Apache Spark with large data analysis workloads in a Virtual Cluster, effectively demonstrating how diverse software ecosystems can be supported by High Performance Virtual Clusters.
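
    As a hedged sketch of the provisioning step only (the domain XML, image path and resource sizes are placeholders, and none of the paper's Cray-specific integration or Ethernet-over-Aries bridging is represented), a KVM guest can be started on a node through the libvirt Python bindings roughly as follows:

        # Hedged sketch: boot a transient KVM guest via libvirt on the local node.
        # The domain XML is a minimal placeholder; the disk image path, memory and
        # vCPU counts are hypothetical.
        import libvirt

        DOMAIN_XML = """
        <domain type='kvm'>
          <name>virtual-cluster-node-0</name>
          <memory unit='GiB'>16</memory>
          <vcpu>8</vcpu>
          <os><type arch='x86_64'>hvm</type></os>
          <devices>
            <disk type='file' device='disk'>
              <source file='/images/compute-node.qcow2'/>
              <target dev='vda' bus='virtio'/>
            </disk>
          </devices>
        </domain>
        """

        conn = libvirt.open("qemu:///system")   # connect to the local QEMU/KVM driver
        dom = conn.createXML(DOMAIN_XML, 0)     # define and boot a transient domain
        print("Started guest:", dom.name())
        conn.close()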

  19. TSP: A Heterogeneous Multiprocessor Supercomputing System Based on i860XP

    Institute of Scientific and Technical Information of China (English)

    黄国勇; 李三立

    1994-01-01

    Numerous new RISC processors provide support for supercomputing. By using the "mini-Cray" i860 superscalar processor, an add-on board has been developed to boost the performance of a real-time system. A parallel heterogeneous multiprocessor supercomputing system, TSP, has been constructed. In this paper, we present the system design considerations and describe the architecture of the TSP and its features.

  20. Application of Supercomputer Technologies for Simulation Of Socio-Economic Systems

    Directory of Open Access Journals (Sweden)

    Vladimir Valentinovich Okrepilov

    2015-06-01

    Full Text Available To date, extensive experience has been accumulated in the investigation of problems related to quality, the assessment of management systems, and the modeling of economic system sustainability. These studies have created a basis for the development of a new research area, the Economics of Quality. Its tools make it possible to use model simulation to construct mathematical models that adequately reflect the role of quality in the natural, technical and social regularities governing the functioning of complex socio-economic systems. We are deeply convinced that the extensive application and development of models, together with system modeling using supercomputer technologies, will bring research on socio-economic systems to an essentially new level. Moreover, this research makes a significant contribution to the model simulation of multi-agent social systems and, no less importantly, belongs to the priority areas in the development of science and technology in our country. This article is devoted to the application of supercomputer technologies in the social sciences, first of all with regard to the technical realization of large-scale agent-focused models (AFM). The essence of this tool is that, owing to the growth in computing power, it has become possible to describe the behavior of many separate fragments of a complex system, such as a socio-economic system. The article also discusses the experience of foreign scientists and practitioners in running AFM on supercomputers, and analyzes an example of an AFM developed at CEMI RAS together with the stages and methods of efficiently mapping the computational kernel of a multi-agent system onto the architecture of a modern supercomputer. Experiments based on model simulation to forecast the population of St. Petersburg under three scenarios, as one of the major factors influencing the development of the socio-economic system and the quality of life of the population, are presented in the…

  1. Design of multiple sequence alignment algorithms on parallel, distributed memory supercomputers.

    Science.gov (United States)

    Church, Philip C; Goscinski, Andrzej; Holt, Kathryn; Inouye, Michael; Ghoting, Amol; Makarychev, Konstantin; Reumann, Matthias

    2011-01-01

    The challenge of comparing two or more genomes that have undergone recombination and substantial amounts of segmental loss and gain has recently been addressed for small numbers of genomes. However, datasets of hundreds of genomes are now common and their sizes will only increase in the future. Multiple sequence alignment of hundreds of genomes remains an intractable problem due to quadratic increases in compute time and memory footprint. To date, most alignment algorithms are designed for commodity clusters without parallelism. Hence, we propose the design of a multiple sequence alignment algorithm on massively parallel, distributed memory supercomputers to enable research into comparative genomics on large data sets. Following the methodology of the sequential progressiveMauve algorithm, we design data structures including sequences and sorted k-mer lists on the IBM Blue Gene/P supercomputer (BG/P). Preliminary results show that we can reduce the memory footprint so that we can potentially align over 250 bacterial genomes on a single BG/P compute node. We verify our results on a dataset of E.coli, Shigella and S.pneumoniae genomes. Our implementation returns results matching those of the original algorithm but in 1/2 the time and with 1/4 the memory footprint for scaffold building. In this study, we have laid the basis for multiple sequence alignment of large-scale datasets on a massively parallel, distributed memory supercomputer, thus enabling comparison of hundreds instead of a few genome sequences within reasonable time.
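
    A toy sketch of the sorted k-mer list idea follows (illustrative only; progressiveMauve's actual seed-and-extend machinery and the distributed BG/P data layout are far more involved):

        # Toy sketch: build sorted k-mer lists for two sequences and intersect them
        # to find shared exact seeds, the starting point for anchoring an alignment.
        # Real pipelines use spaced seeds and distribute these lists across nodes.
        from collections import defaultdict

        def kmer_list(seq, k):
            """Return a sorted list of (k-mer, position) pairs for one sequence."""
            return sorted((seq[i:i + k], i) for i in range(len(seq) - k + 1))

        def shared_seeds(seq_a, seq_b, k):
            """Yield (pos_a, pos_b) pairs where both sequences share an exact k-mer."""
            index = defaultdict(list)
            for kmer, pos in kmer_list(seq_a, k):
                index[kmer].append(pos)
            for kmer, pos_b in kmer_list(seq_b, k):
                for pos_a in index.get(kmer, ()):
                    yield pos_a, pos_b

        if __name__ == "__main__":
            print(list(shared_seeds("ACGTACGGTACCT", "TTACGGTACCAAG", k=5)))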

  2. Building more powerful less expensive supercomputers using Processing-In-Memory (PIM) LDRD final report.

    Energy Technology Data Exchange (ETDEWEB)

    Murphy, Richard C.

    2009-09-01

    This report details the accomplishments of the 'Building More Powerful Less Expensive Supercomputers Using Processing-In-Memory (PIM)' LDRD ('PIM LDRD', number 105809) for FY07-FY09. Latency dominates all levels of supercomputer design. Within a node, increasing memory latency, relative to processor cycle time, limits CPU performance. Between nodes, the same increase in relative latency impacts scalability. Processing-In-Memory (PIM) is an architecture that directly addresses this problem using enhanced chip fabrication technology and machine organization. PIMs combine high-speed logic and dense, low-latency, high-bandwidth DRAM, and lightweight threads that tolerate latency by performing useful work during memory transactions. This work examines the potential of PIM-based architectures to support mission critical Sandia applications and an emerging class of more data intensive informatics applications. This work has resulted in a stronger architecture/implementation collaboration between 1400 and 1700. Additionally, key technology components have impacted vendor roadmaps, and we are in the process of pursuing these new collaborations. This work has the potential to impact future supercomputer design and construction, reducing power and increasing performance. This final report is organized as follows: this summary chapter discusses the impact of the project (Section 1), provides an enumeration of publications and other public discussion of the work (Section 1), and concludes with a discussion of future work and impact from the project (Section 1). The appendix contains reprints of the refereed publications resulting from this work.

  3. Visualization on supercomputing platform level II ASC milestone (3537-1B) results from Sandia.

    Energy Technology Data Exchange (ETDEWEB)

    Geveci, Berk (Kitware, Inc., Clifton Park, NY); Fabian, Nathan; Marion, Patrick (Kitware, Inc., Clifton Park, NY); Moreland, Kenneth D.

    2010-09-01

    This report provides documentation for the completion of the Sandia portion of the ASC Level II Visualization on the platform milestone. This ASC Level II milestone is a joint milestone between Sandia National Laboratories and Los Alamos National Laboratories. This milestone contains functionality required for performing visualization directly on a supercomputing platform, which is necessary for peta-scale visualization. Sandia's contribution concerns in-situ visualization, running a visualization in tandem with a solver. Visualization and analysis of petascale data is limited by several factors which must be addressed as ACES delivers the Cielo platform. Two primary difficulties are: (1) Performance of interactive rendering, which is the most computationally intensive portion of the visualization process. For terascale platforms, commodity clusters with graphics processors (GPUs) have been used for interactive rendering. For petascale platforms, visualization and rendering may be able to run efficiently on the supercomputer platform itself. (2) I/O bandwidth, which limits how much information can be written to disk. If we simply analyze the sparse information that is saved to disk we miss the opportunity to analyze the rich information produced every timestep by the simulation. For the first issue, we are pursuing in-situ analysis, in which simulations are coupled directly with analysis libraries at runtime. This milestone will evaluate the visualization and rendering performance of current and next generation supercomputers in contrast to GPU-based visualization clusters, and evaluate the performance of common analysis libraries coupled with the simulation that analyze and write data to disk during a running simulation. This milestone will explore, evaluate and advance the maturity level of these technologies and their applicability to problems of interest to the ASC program. Scientific simulation on parallel supercomputers is traditionally performed in four…

  4. Sandia's network for Supercomputing '94: Linking the Los Alamos, Lawrence Livermore, and Sandia National Laboratories using switched multimegabit data service

    Energy Technology Data Exchange (ETDEWEB)

    Vahle, M.O.; Gossage, S.A.; Brenkosh, J.P. [Sandia National Labs., Albuquerque, NM (United States). Advanced Networking Integration Dept.

    1995-01-01

    Supercomputing '94, a high-performance computing and communications conference, was held November 14th through 18th, 1994 in Washington DC. For the past four years, Sandia National Laboratories has used this conference to showcase and focus its communications and networking endeavors. At the 1994 conference, Sandia built a Switched Multimegabit Data Service (SMDS) network running at 44.736 megabits per second linking its private SMDS network between its facilities in Albuquerque, New Mexico and Livermore, California to the convention center in Washington, D.C. For the show, the network was also extended from Sandia, New Mexico to Los Alamos National Laboratory and from Sandia, California to Lawrence Livermore National Laboratory. This paper documents and describes this network and how it was used at the conference.

  5. HACC: Simulating sky surveys on state-of-the-art supercomputing architectures

    Science.gov (United States)

    Habib, Salman; Pope, Adrian; Finkel, Hal; Frontiere, Nicholas; Heitmann, Katrin; Daniel, David; Fasel, Patricia; Morozov, Vitali; Zagaris, George; Peterka, Tom; Vishwanath, Venkatram; Lukić, Zarija; Sehrish, Saba; Liao, Wei-keng

    2016-01-01

    Current and future surveys of large-scale cosmic structure are associated with a massive and complex datastream to study, characterize, and ultimately understand the physics behind the two major components of the 'Dark Universe', dark energy and dark matter. In addition, the surveys also probe primordial perturbations and carry out fundamental measurements, such as determining the sum of neutrino masses. Large-scale simulations of structure formation in the Universe play a critical role in the interpretation of the data and extraction of the physics of interest. Just as survey instruments continue to grow in size and complexity, so do the supercomputers that enable these simulations. Here we report on HACC (Hardware/Hybrid Accelerated Cosmology Code), a recently developed and evolving cosmology N-body code framework, designed to run efficiently on diverse computing architectures and to scale to millions of cores and beyond. HACC can run on all current supercomputer architectures and supports a variety of programming models and algorithms. It has been demonstrated at scale on Cell- and GPU-accelerated systems, standard multi-core node clusters, and Blue Gene systems. HACC's design allows for ease of portability, and at the same time, high levels of sustained performance on the fastest supercomputers available. We present a description of the design philosophy of HACC, the underlying algorithms and code structure, and outline implementation details for several specific architectures. We show selected accuracy and performance results from some of the largest high resolution cosmological simulations so far performed, including benchmarks evolving more than 3.6 trillion particles.

  6. Integration of PanDA workload management system with Titan supercomputer at OLCF

    Science.gov (United States)

    De, K.; Klimentov, A.; Oleynik, D.; Panitkin, S.; Petrosyan, A.; Schovancova, J.; Vaniachine, A.; Wenaus, T.

    2015-12-01

    The PanDA (Production and Distributed Analysis) workload management system (WMS) was developed to meet the scale and complexity of LHC distributed computing for the ATLAS experiment. While PanDA currently distributes jobs to more than 100,000 cores at well over 100 Grid sites, the future LHC data taking runs will require more resources than Grid computing can possibly provide. To alleviate these challenges, ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with Titan supercomputer at Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA pilot framework for job submission to Titan's batch queues and local data management, with light-weight MPI wrappers to run single threaded workloads in parallel on Titan's multicore worker nodes. It also gives PanDA new capability to collect, in real time, information about unused worker nodes on Titan, which allows precise definition of the size and duration of jobs submitted to Titan according to available free resources. This capability significantly reduces PanDA job wait time while improving Titan's utilization efficiency. This implementation was tested with a variety of Monte-Carlo workloads on Titan and is being tested on several other supercomputing platforms. Notice: This manuscript has been authored, by employees of Brookhaven Science Associates, LLC under Contract No. DE-AC02-98CH10886 with the U.S. Department of Energy. The publisher by accepting the manuscript for publication acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes.
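
    A minimal sketch of the light-weight MPI wrapper idea is given below (the payload command and file naming are hypothetical placeholders; PanDA's actual pilot integration, backfill queries and data handling are not shown). Each MPI rank simply launches one independent serial payload, so a single batch job fills many cores with independent work:

        # Minimal sketch: every MPI rank runs one serial payload so that a single
        # batch job occupies many cores with independent single-threaded work.
        # The payload script and input naming scheme are hypothetical.
        import subprocess
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()

        cmd = ["./run_payload.sh", f"input_{rank:05d}.dat", f"output_{rank:05d}.dat"]
        result = subprocess.run(cmd)

        # Gather exit codes on rank 0 so the batch job can report overall success.
        codes = comm.gather(result.returncode, root=0)
        if rank == 0:
            failed = sum(1 for c in codes if c != 0)
            print(f"{len(codes) - failed} payloads succeeded, {failed} failed")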

  7. Development of the general interpolants method for the CYBER 200 series of supercomputers

    Science.gov (United States)

    Stalnaker, J. F.; Robinson, M. A.; Spradley, L. W.; Kurzius, S. C.; Thoenes, J.

    1988-01-01

    The General Interpolants Method (GIM) is a 3-D, time-dependent, hybrid procedure for generating numerical analogs of the conservation laws. This study is directed toward the development and application of the GIM computer code for fluid dynamic research applications as implemented for the Cyber 200 series of supercomputers. Elliptic and quasi-parabolic versions of the GIM code are discussed. Turbulence models, both algebraic and differential, have been added to the basic viscous code. An equilibrium reacting chemistry model and an implicit finite difference scheme are also included.

  8. Accelerating Virtual High-Throughput Ligand Docking: current technology and case study on a petascale supercomputer.

    Science.gov (United States)

    Ellingson, Sally R; Dakshanamurthy, Sivanesan; Brown, Milton; Smith, Jeremy C; Baudry, Jerome

    2014-04-25

    In this paper we give the current state of high-throughput virtual screening. We describe a case study of using a task-parallel MPI (Message Passing Interface) version of Autodock4 [1], [2] to run a virtual high-throughput screen of one-million compounds on the Jaguar Cray XK6 Supercomputer at Oak Ridge National Laboratory. We include a description of scripts developed to increase the efficiency of the predocking file preparation and postdocking analysis. A detailed tutorial, scripts, and source code for this MPI version of Autodock4 are available online at http://www.bio.utk.edu/baudrylab/autodockmpi.htm.

  9. A New Hydrodynamic Model for Numerical Simulation of Interacting Galaxies on Intel Xeon Phi Supercomputers

    Science.gov (United States)

    Kulikov, Igor; Chernykh, Igor; Tutukov, Alexander

    2016-05-01

    This paper presents a new hydrodynamic model of interacting galaxies based on the joint solution of multicomponent hydrodynamic equations, first moments of the collisionless Boltzmann equation and the Poisson equation for gravity. Using this model, it is possible to formulate a unified numerical method for solving hyperbolic equations. This numerical method has been implemented for hybrid supercomputers with Intel Xeon Phi accelerators. The collision of spiral and disk galaxies considering the star formation process, supernova feedback and molecular hydrogen formation is shown as a simulation result.
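
    The gravitational part of the model is the Poisson equation in its usual form (quoted here for reference), coupling the potential to the combined density of the gas and collisionless components:

        \nabla^{2} \Phi = 4 \pi G\, \rho_{\text{total}}

    where \Phi is the gravitational potential, G the gravitational constant, and \rho_{\text{total}} the total mass density of all components.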

  10. Research on Optimal Path of Data Migration among Multisupercomputer Centers

    Directory of Open Access Journals (Sweden)

    Gang Li

    2016-01-01

    Full Text Available Data collaboration between supercomputer centers requires a great deal of data migration. In order to increase the efficiency of data migration, it is necessary to design optimal paths for data transmission among multiple supercomputer centers. Exploiting the fact that a target center which has finished receiving the data can be regarded as a new source center that migrates data to others, we present a parallel scheme for data migration among multiple supercomputer centers with different interconnection topologies, using graph-theoretic analysis and calculation. Finally, we verify that this method is effective via numerical simulation.
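
    A toy sketch of the core idea follows (a binomial-tree style schedule in which every centre that has finished receiving the data immediately becomes a new source; the centre names and the assumption of a fully connected topology with uniform link cost are illustrative only):

        # Toy sketch: schedule data migration so that each centre that already holds
        # the data acts as an additional source in the next round (binomial-tree
        # broadcast). Real schedules must respect the actual interconnection topology.
        def migration_schedule(centres):
            """Return a list of rounds; each round is a list of (source, target) pairs."""
            have_data = [centres[0]]            # the original source centre
            pending = list(centres[1:])
            rounds = []
            while pending:
                transfers = []
                # snapshot: only centres that held the data before this round may send
                for source in list(have_data):
                    if not pending:
                        break
                    target = pending.pop(0)
                    transfers.append((source, target))
                    have_data.append(target)
                rounds.append(transfers)
            return rounds

        if __name__ == "__main__":
            schedule = migration_schedule(["SC-A", "SC-B", "SC-C", "SC-D", "SC-E"])
            for i, r in enumerate(schedule, 1):
                print(f"round {i}: {r}")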

  11. Scheduling Supercomputers.

    Science.gov (United States)

    1983-02-01

    no task is scheduled with overlap. Let numpi be the total number of preemptions and idle slots of size at most t0 that are introduced. We see that if ... no usable block remains on Qm-*, then numpi < m-k. Otherwise, numpi ≤ m-k-1. If j > n when this procedure terminates, then all tasks have been scheduled…

  12. Grassroots Supercomputing

    CERN Multimedia

    Buchanan, Mark

    2005-01-01

    What started out as a way for SETI to plow through its piles of radio-signal data from deep space has turned into a powerful research tool as computer users across the globe donate their screen-saver time to projects as diverse as climate-change prediction, gravitational-wave searches, and protein folding (4 pages)

  13. A Survey of User-Centered System Design for Supporting Online Collaborative Writing

    Directory of Open Access Journals (Sweden)

    Nani Sri Handayani

    2013-08-01

    Full Text Available Collaborative Writing (CW) is a newly emerging issue in education that must be addressed in an interdisciplinary way. Nowadays a lot of software is available to support and enhance collaboration in group writing. This paper presents a discussion of recent user-centered system design for supporting online collaborative writing. Based on the taxonomy of collaborative writing and the problems that appear in collaborative writing, we propose the required User-Centered System Design (UCSD) for CW software. The last part of this paper is dedicated to examining recently available CW software against the proposed design requirements.

  14. Discussion on the Function and Design of Supercomputer Centers

    Institute of Scientific and Technical Information of China (English)

    焦建欣

    2013-01-01

    Supercomputer centers are a special type of facility within the data center field. Taking the National Supercomputing Center in Shenzhen as an example, this paper discusses the functions and related design of supercomputer centers.

  15. Feynman diagrams sampling for quantum field theories on the QPACE 2 supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Rappl, Florian

    2016-08-01

    This work discusses the application of Feynman diagram sampling in quantum field theories. The method uses a computer simulation to sample the diagrammatic space obtained in a series expansion. Running large physics simulations requires powerful computers, effectively splitting the thesis into two parts. The first part deals with the method of Feynman diagram sampling. Here the theoretical background of the method itself is discussed. Additionally, important statistical concepts and the theory of the strong force, quantum chromodynamics, are introduced. This sets the context of the simulations. We create and evaluate a variety of models to estimate the applicability of diagrammatic methods. The method is then applied to sample the perturbative expansion of the vertex correction. In the end we obtain the value for the anomalous magnetic moment of the electron. The second part looks at the QPACE 2 supercomputer. This includes a short introduction to supercomputers in general, as well as a closer look at the architecture and the cooling system of QPACE 2. Benchmarks of the InfiniBand network are presented. At the core of this part, a collection of best practices and useful programming concepts is outlined, which enables the development of efficient, yet easily portable, applications for the QPACE 2 system.

  16. Integration of PanDA workload management system with Titan supercomputer at OLCF

    CERN Document Server

    Panitkin, Sergey; The ATLAS collaboration; Klimentov, Alexei; Oleynik, Danila; Petrosyan, Artem; Schovancova, Jaroslava; Vaniachine, Alexandre; Wenaus, Torre

    2015-01-01

    The PanDA (Production and Distributed Analysis) workload management system (WMS) was developed to meet the scale and complexity of LHC distributed computing for the ATLAS experiment. While PanDA currently uses more than 100,000 cores at well over 100 Grid sites with a peak performance of 0.3 petaFLOPS, the next LHC data-taking run will require more resources than Grid computing can possibly provide. To alleviate these challenges, ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with Titan supercomputer at Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA pilot framework for job submission to Titan's batch queues and local data management, with light-weight MPI wrappers to run single threaded workloads in parallel on Titan's multi-core worker nodes. It also gives PanDA new capability to collect, in real tim...

  17. Integration of PanDA workload management system with Titan supercomputer at OLCF

    CERN Document Server

    De, Kaushik; Oleynik, Danila; Panitkin, Sergey; Petrosyan, Artem; Vaniachine, Alexandre; Wenaus, Torre; Schovancova, Jaroslava

    2015-01-01

    The PanDA (Production and Distributed Analysis) workload management system (WMS) was developed to meet the scale and complexity of LHC distributed computing for the ATLAS experiment. While PanDA currently distributes jobs to more than 100,000 cores at well over 100 Grid sites, the next LHC data-taking run will require more resources than Grid computing can possibly provide. To alleviate these challenges, ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with Titan supercomputer at Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA pilot framework for job submission to Titan's batch queues and local data management, with light-weight MPI wrappers to run single threaded workloads in parallel on Titan's multi-core worker nodes. It also gives PanDA new capability to collect, in real time, information about unused...

  18. PREFACE: HITES 2012: 'Horizons of Innovative Theories, Experiments, and Supercomputing in Nuclear Physics'

    Science.gov (United States)

    Hecht, K. T.

    2012-12-01

    This volume contains the contributions of the speakers of an international conference in honor of Jerry Draayer's 70th birthday, entitled 'Horizons of Innovative Theories, Experiments and Supercomputing in Nuclear Physics'. The list of contributors includes not only international experts in these fields, but also many former collaborators, former graduate students, and former postdoctoral fellows of Jerry Draayer, stressing innovative theories such as special symmetries and supercomputing, both of particular interest to Jerry. The organizers of the conference intended to honor Jerry Draayer not only for his seminal contributions in these fields, but also for his administrative skills at departmental, university, national and international level. Signed: Ted Hecht, University of Michigan. [Conference photograph] Scientific Advisory Committee: Ani Aprahamian (University of Notre Dame), Baha Balantekin (University of Wisconsin), Bruce Barrett (University of Arizona), Umit Catalyurek (Ohio State University), David Dean (Oak Ridge National Laboratory), Jutta Escher (Chair, Lawrence Livermore National Laboratory), Jorge Hirsch (UNAM, Mexico), David Rowe (University of Toronto), Brad Sherill (Michigan State University), Joel Tohline (Louisiana State University), Edward Zganjar (Louisiana State University). Organizing Committee: Jeff Blackmon (Louisiana State University), Mark Caprio (University of Notre Dame), Tomas Dytrych (Louisiana State University), Ana Georgieva (INRNE, Bulgaria), Kristina Launey (Co-chair, Louisiana State University), Gabriella Popa (Ohio University Zanesville), James Vary (Co-chair, Iowa State University). Local Organizing Committee: Laura Linhardt (Louisiana State University), Charlie Rasco (Louisiana State University), Karen Richard (Coordinator, Louisiana State University).

  19. Graph visualization for the analysis of the structure and dynamics of extreme-scale supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Berkbigler, K. P. (Kathryn P.); Bush, B. W. (Brian W.); Davis, Kei,; Hoisie, A. (Adolfy); Smith, S. A. (Steve A.)

    2002-01-01

    We are exploring the development and application of information visualization techniques for the analysis of new extreme-scale supercomputer architectures. Modern supercomputers typically comprise very large clusters of commodity SMPs interconnected by possibly dense and often nonstandard networks. The scale, complexity, and inherent nonlocality of the structure and dynamics of this hardware, and the systems and applications distributed over it, challenge traditional analysis methods. As part of the a la carte team at Los Alamos National Laboratory, which is simulating these advanced architectures, we are exploring advanced visualization techniques and creating tools to provide intuitive exploration, discovery, and analysis of these simulations. This work complements existing and emerging algorithmic analysis tools. Here we give background on the problem domain, a description of a prototypical computer architecture of interest (on the order of 10,000 processors connected by a quaternary fat-tree network), and presentations of several visualizations of the simulation data that make clear the flow of data in the interconnection network.

  20. Groundwater cooling of a supercomputer in Perth, Western Australia: hydrogeological simulations and thermal sustainability

    Science.gov (United States)

    Sheldon, Heather A.; Schaubs, Peter M.; Rachakonda, Praveen K.; Trefry, Michael G.; Reid, Lynn B.; Lester, Daniel R.; Metcalfe, Guy; Poulet, Thomas; Regenauer-Lieb, Klaus

    2015-12-01

    Groundwater cooling (GWC) is a sustainable alternative to conventional cooling technologies for supercomputers. A GWC system has been implemented for the Pawsey Supercomputing Centre in Perth, Western Australia. Groundwater is extracted from the Mullaloo Aquifer at 20.8 °C and passes through a heat exchanger before returning to the same aquifer. Hydrogeological simulations of the GWC system were used to assess its performance and sustainability. Simulations were run with cooling capacities of 0.5 or 2.5 megawatts thermal (MWth), with scenarios representing various combinations of pumping rate, injection temperature and hydrogeological parameter values. The simulated system generates a thermal plume in the Mullaloo Aquifer and overlying Superficial Aquifer. Thermal breakthrough (transfer of heat from injection to production wells) occurred in 2.7-4.3 years for a 2.5 MWth system. Shielding (reinjection of cool groundwater between the injection and production wells) resulted in earlier thermal breakthrough but reduced the rate of temperature increase after breakthrough, such that shielding was beneficial after approximately 5 years of pumping. Increasing injection temperature was preferable to increasing flow rate for maintaining cooling capacity after thermal breakthrough. Thermal impacts on existing wells were small, with up to 10 wells experiencing a temperature increase ≥ 0.1 °C (largest increase 6 °C).
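
    The abstract does not give the pumping rates behind these capacities. As a rough back-of-the-envelope check of the physics (not taken from the paper), the flow needed for a given heat load follows from Q = P / (rho * c_p * dT); the temperature rise assumed below is purely illustrative.

        # back-of-the-envelope groundwater flow needed for a given heat load
        # (illustrative values; the assumed temperature rise is not from the paper)
        P = 2.5e6          # cooling capacity in watts (2.5 MWth)
        rho = 1000.0       # water density, kg/m^3
        c_p = 4186.0       # specific heat of water, J/(kg K)
        dT = 7.0           # assumed temperature rise across the heat exchanger, K

        Q = P / (rho * c_p * dT)          # volumetric flow rate, m^3/s
        print(f"required flow: {Q*1000:.1f} L/s ({Q*86400:.0f} m^3/day)")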

  1. OpenMC:Towards Simplifying Programming for TianHe Supercomputers

    Institute of Scientific and Technical Information of China (English)

    廖湘科; 杨灿群; 唐滔; 易会战; 王锋; 吴强; 薛京灵

    2014-01-01

    Modern petascale and future exascale systems are massively heterogeneous architectures. Developing productive intra-node programming models is crucial toward addressing their programming challenge. We introduce a directive-based intra-node programming model, OpenMC, and show that this new model can achieve ease of programming, high performance, and the degree of portability desired for heterogeneous nodes, especially those in TianHe supercomputers. While existing models are geared towards offloading computations to accelerators (typically one), OpenMC aims to more uniformly and adequately exploit the potential offered by multiple CPUs and accelerators in a compute node. OpenMC achieves this by providing a unified abstraction of hardware resources as workers and facilitating the exploitation of asynchronous task parallelism on the workers. We present an overview of OpenMC, a prototyping implementation, and results from some initial comparisons with OpenMP and hand-written code in developing six applications on two types of nodes from TianHe supercomputers.

  2. Unique Methodologies for Nano/Micro Manufacturing Job Training Via Desktop Supercomputer Modeling and Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Kimball, Clyde [Northern Illinois Univ., DeKalb, IL (United States); Karonis, Nicholas [Northern Illinois Univ., DeKalb, IL (United States); Lurio, Laurence [Northern Illinois Univ., DeKalb, IL (United States); Piot, Philippe [Northern Illinois Univ., DeKalb, IL (United States); Xiao, Zhili [Northern Illinois Univ., DeKalb, IL (United States); Glatz, Andreas [Northern Illinois Univ., DeKalb, IL (United States); Pohlman, Nicholas [Northern Illinois Univ., DeKalb, IL (United States); Hou, Minmei [Northern Illinois Univ., DeKalb, IL (United States); Demir, Veysel [Northern Illinois Univ., DeKalb, IL (United States); Song, Jie [Northern Illinois Univ., DeKalb, IL (United States); Duffin, Kirk [Northern Illinois Univ., DeKalb, IL (United States); Johns, Mitrick [Northern Illinois Univ., DeKalb, IL (United States); Sims, Thomas [Northern Illinois Univ., DeKalb, IL (United States); Yin, Yanbin [Northern Illinois Univ., DeKalb, IL (United States)

    2012-11-21

    This project establishes an initiative in high speed (Teraflop)/large-memory desktop supercomputing for modeling and simulation of dynamic processes important for energy and industrial applications. It provides a training ground for employment of current students in an emerging field with skills necessary to access the large supercomputing systems now present at DOE laboratories. It also provides a foundation for NIU faculty to quantum leap beyond their current small cluster facilities. The funding extends faculty and student capability to a new level of analytic skills with concomitant publication avenues. The components of the Hewlett Packard computer obtained by the DOE funds create a hybrid combination of a Graphics Processing System (12 GPU/Teraflops) and a Beowulf CPU system (144 CPU), the first expandable via the NIU GAEA system to ~60 Teraflops integrated with a 720 CPU Beowulf system. The software is based on access to the NVIDIA/CUDA library and the ability through MATLAB multiple licenses to create additional local programs. A number of existing programs are being transferred to the CPU Beowulf Cluster. Since the expertise necessary to create the parallel processing applications has recently been obtained at NIU, this effort for software development is in an early stage. The educational program has been initiated via formal tutorials and classroom curricula designed for the coming year. Specifically, the cost focus was on hardware acquisitions and appointment of graduate students for a wide range of applications in engineering, physics and computer science.

  3. Supercomputer Assisted Generation of Machine Learning Agents for the Calibration of Building Energy Models

    Energy Technology Data Exchange (ETDEWEB)

    Sanyal, Jibonananda [ORNL]; New, Joshua Ryan [ORNL]; Edwards, Richard [ORNL]

    2013-01-01

    Building Energy Modeling (BEM) is an approach to model the energy usage in buildings for design and retrofit purposes. EnergyPlus is the flagship Department of Energy software that performs BEM for different types of buildings. The input to EnergyPlus can often extend to the order of a few thousand parameters which have to be calibrated manually by an expert for realistic energy modeling. This makes it challenging and expensive, thereby making building energy modeling unfeasible for smaller projects. In this paper, we describe the "Autotune" research which employs machine learning algorithms to generate agents for the different kinds of standard reference buildings in the U.S. building stock. The parametric space and the variety of building locations and types make this a challenging computational problem necessitating the use of supercomputers. Millions of EnergyPlus simulations are run on supercomputers which are subsequently used to train machine learning algorithms to generate agents. These agents, once created, can then run in a fraction of the time, thereby allowing cost-effective calibration of building models.
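
    The machine learning algorithms used by Autotune are not specified in the abstract. As a generic illustration of the surrogate-agent idea (train a cheap model on simulated parameter/output pairs, then predict in a fraction of the simulation time), the sketch below fits a random forest from scikit-learn on synthetic data; the feature count and the stand-in response are assumptions.

        # generic surrogate-model sketch: learn simulator outputs from input parameters
        # (synthetic data; the actual Autotune algorithms and features are not specified here)
        import numpy as np
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        X = rng.uniform(size=(10000, 20))                            # 20 building parameters per simulation
        y = X @ rng.normal(size=20) + 0.1 * rng.normal(size=10000)   # stand-in for simulated energy use

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
        agent = RandomForestRegressor(n_estimators=200, n_jobs=-1).fit(X_tr, y_tr)
        print("R^2 on held-out simulations:", agent.score(X_te, y_te))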

  4. Federal Market Information Technology in the Post Flash Crash Era: Roles for Supercomputing

    Energy Technology Data Exchange (ETDEWEB)

    Bethel, E. Wes; Leinweber, David; Ruebel, Oliver; Wu, Kesheng

    2011-09-16

    This paper describes collaborative work between active traders, regulators, economists, and supercomputing researchers to replicate and extend investigations of the Flash Crash and other market anomalies in a National Laboratory HPC environment. Our work suggests that supercomputing tools and methods will be valuable to market regulators in achieving the goal of market safety, stability, and security. Research results using high frequency data and analytics are described, and directions for future development are discussed. Currently the key mechanisms for preventing catastrophic market action are “circuit breakers.” We believe a more graduated approach, similar to the “yellow light” approach in motorsports to slow down traffic, might be a better way to achieve the same goal. To enable this objective, we study a number of indicators that could foresee hazards in market conditions and explore options to confirm such predictions. Our tests confirm that Volume Synchronized Probability of Informed Trading (VPIN) and a version of the volume Herfindahl-Hirschman Index (HHI) for measuring market fragmentation can indeed give strong signals ahead of the Flash Crash event on May 6, 2010. This is a preliminary step toward a full-fledged early-warning system for unusual market conditions.
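
    VPIN is defined in the market-microstructure literature as the average order-flow imbalance over equal-volume buckets. The sketch below shows that basic computation on synthetic trade data, with a naive tick-rule buy/sell split standing in for the bulk-volume classification normally used; it is an illustration, not the authors' implementation.

        # simplified VPIN sketch: average |buy - sell| volume imbalance over volume buckets
        # (naive tick-rule classification on synthetic trades; real implementations differ)
        import numpy as np

        rng = np.random.default_rng(1)
        price = 100 + np.cumsum(rng.normal(scale=0.01, size=100000))
        volume = rng.integers(1, 100, size=100000).astype(float)

        buys = np.where(np.diff(price, prepend=price[0]) >= 0, volume, 0.0)
        sells = volume - buys

        n_buckets = 50
        bucket_volume = volume.sum() / n_buckets
        edges = np.searchsorted(np.cumsum(volume), bucket_volume * np.arange(1, n_buckets + 1))
        imbalance = [abs(b.sum() - s.sum())
                     for b, s in zip(np.split(buys, edges[:-1]), np.split(sells, edges[:-1]))]
        vpin = np.sum(imbalance) / (n_buckets * bucket_volume)
        print("VPIN:", round(vpin, 3))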

  5. Communication Characterization and Optimization of Applications Using Topology-Aware Task Mapping on Large Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Sreepathi, Sarat [ORNL]; D'Azevedo, Eduardo [ORNL]; Philip, Bobby [ORNL]; Worley, Patrick H [ORNL]

    2016-01-01

    On large supercomputers, the job scheduling systems may assign a non-contiguous node allocation for user applications depending on available resources. With parallel applications using MPI (Message Passing Interface), the default process ordering does not take into account the actual physical node layout available to the application. This contributes to non-locality in terms of physical network topology and impacts communication performance of the application. In order to mitigate such performance penalties, this work describes techniques to identify suitable task mapping that takes the layout of the allocated nodes as well as the application's communication behavior into account. During the first phase of this research, we instrumented and collected performance data to characterize communication behavior of critical US DOE (United States - Department of Energy) applications using an augmented version of the mpiP tool. Subsequently, we developed several reordering methods (spectral bisection, neighbor join tree etc.) to combine node layout and application communication data for optimized task placement. We developed a tool called mpiAproxy to facilitate detailed evaluation of the various reordering algorithms without requiring full application executions. This work presents a comprehensive performance evaluation (14,000 experiments) of the various task mapping techniques in lowering communication costs on Titan, the leadership class supercomputer at Oak Ridge National Laboratory.
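
    The reordering algorithms evaluated in the paper (spectral bisection, neighbor-join tree, etc.) are not reproduced here. As a minimal illustration of the underlying idea, the sketch below greedily places heavily communicating ranks onto nearby slots of a hypothetical one-dimensional node layout; both the communication matrix and the distance matrix are made up.

        # minimal greedy task-mapping sketch: place heavy communicators on nearby nodes
        # (hypothetical communication and distance matrices; not the paper's algorithms)
        import numpy as np

        rng = np.random.default_rng(2)
        n = 8
        comm = rng.integers(1, 100, size=(n, n))
        comm = (comm + comm.T) * (1 - np.eye(n, dtype=int))           # symmetric, zero diagonal
        dist = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))  # hops on a 1-D node line

        # place ranks in order of total traffic, each next to the already-placed
        # rank it exchanges the most data with
        order = np.argsort(-comm.sum(axis=1))
        placement, free = {}, set(range(n))
        for rank in order:
            placed = [r for r in placement if comm[rank, r] > 0]
            if not placed:
                slot = min(free)
            else:
                anchor = placement[max(placed, key=lambda r: comm[rank, r])]
                slot = min(free, key=lambda s: dist[s, anchor])
            placement[int(rank)] = int(slot)
            free.remove(slot)

        cost = sum(comm[i, j] * dist[placement[i], placement[j]]
                   for i in range(n) for j in range(i))
        print("placement:", placement, "hop-weighted cost:", cost)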

  6. Federal Council on Science, Engineering and Technology: Committee on Computer Research and Applications, Subcommittee on Science and Engineering Computing: The US Supercomputer Industry

    Energy Technology Data Exchange (ETDEWEB)

    1987-12-01

    The Federal Coordinating Council on Science, Engineering, and Technology (FCCSET) Committee on Supercomputing was chartered by the Director of the Office of Science and Technology Policy in 1982 to examine the status of supercomputing in the United States and to recommend a role for the Federal Government in the development of this technology. In this study, the FCCSET Committee (now called the Subcommittee on Science and Engineering Computing of the FCCSET Committee on Computer Research and Applications) reports on the status of the supercomputer industry and addresses changes that have occurred since issuance of the 1983 and 1985 reports. The review is based upon periodic meetings with and site visits to supercomputer manufacturers and consultation with experts in high performance scientific computing. White papers have been contributed to this report by industry leaders and supercomputer experts.

  7. A Framework for HI Spectral Source Finding Using Distributed-Memory Supercomputing

    CERN Document Server

    Westerlund, Stefan

    2014-01-01

    The latest generation of radio astronomy interferometers will conduct all sky surveys with data products consisting of petabytes of spectral line data. Traditional approaches to identifying and parameterising the astrophysical sources within this data will not scale to datasets of this magnitude, since the performance of workstations will not keep up with the real-time generation of data. For this reason, it is necessary to employ high performance computing systems consisting of a large number of processors connected by a high-bandwidth network. In order to make use of such supercomputers substantial modifications must be made to serial source finding code. To ease the transition, this work presents the Scalable Source Finder Framework, a framework providing storage access, networking communication and data composition functionality, which can support a wide range of source finding algorithms provided they can be applied to subsets of the entire image. Additionally, the Parallel Gaussian Source Finder was imp...

  8. Diskless supercomputers: Scalable, reliable I/O for the Tera-Op technology base

    Science.gov (United States)

    Katz, Randy H.; Ousterhout, John K.; Patterson, David A.

    1993-01-01

    Computing is seeing an unprecedented improvement in performance; over the last five years there has been an order-of-magnitude improvement in the speeds of workstation CPUs. At least another order of magnitude seems likely in the next five years, to machines with 500 MIPS or more. The goal of the ARPA Teraflop program is to realize even larger, more powerful machines, executing as many as a trillion operations per second. Unfortunately, we have seen no comparable breakthroughs in I/O performance; the speeds of I/O devices and the hardware and software architectures for managing them have not changed substantially in many years. We have completed a program of research to demonstrate hardware and software I/O architectures capable of supporting the kinds of internetworked 'visualization' workstations and supercomputers that will appear in the mid 1990s. The project had three overall goals: high performance, high reliability, and a scalable, multipurpose system.

  9. An Optimized Parallel FDTD Topology for Challenging Electromagnetic Simulations on Supercomputers

    Directory of Open Access Journals (Sweden)

    Shugang Jiang

    2015-01-01

    It may not be a challenge to run a Finite-Difference Time-Domain (FDTD) code for electromagnetic simulations on a supercomputer with more than 10 thousand CPU cores; however, to make the FDTD code work with the highest efficiency is a challenge. In this paper, the performance of parallel FDTD is optimized through MPI (message passing interface) virtual topology, based on which a communication model is established. The general rules of optimal topology are presented according to the model. The performance of the method is tested and analyzed on three high performance computing platforms with different architectures in China. Simulations including an airplane with a 700-wavelength wingspan, and a complex microstrip antenna array with nearly 2000 elements, are performed very efficiently using a maximum of 10240 CPU cores.
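
    The optimized topology rules derived in the paper are not reproduced here; the sketch below only shows the basic MPI virtual-topology machinery they build on: creating a 3-D Cartesian communicator (via mpi4py) and querying the neighbours with which each FDTD subdomain would exchange boundary fields.

        # basic MPI virtual topology for a 3-D domain decomposition (mpi4py)
        # (illustrative only; the paper's optimized topology rules are not shown)
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        dims = MPI.Compute_dims(comm.Get_size(), [0, 0, 0])    # balanced 3-D process grid
        cart = comm.Create_cart(dims, periods=[False] * 3, reorder=True)

        coords = cart.Get_coords(cart.Get_rank())
        neighbours = {axis: cart.Shift(axis, 1) for axis in range(3)}  # (source, dest) per axis
        if cart.Get_rank() == 0:
            print("process grid:", dims)
        print(f"rank {cart.Get_rank()} at {coords}: neighbours {neighbours}")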

  10. Large-scale integrated super-computing platform for next generation virtual drug discovery.

    Science.gov (United States)

    Mitchell, Wayne; Matsumoto, Shunji

    2011-08-01

    Traditional drug discovery starts by experimentally screening chemical libraries to find hit compounds that bind to protein targets, modulating their activity. Subsequent rounds of iterative chemical derivatization and rescreening are conducted to enhance the potency, selectivity, and pharmacological properties of hit compounds. Although computational docking of ligands to targets has been used to augment the empirical discovery process, its historical effectiveness has been limited because of the poor correlation of ligand dock scores and experimentally determined binding constants. Recent progress in super-computing, coupled to theoretical insights, allows the calculation of the Gibbs free energy, and therefore accurate binding constants, for unusually large ligand-receptor systems. This advance extends the potential of virtual drug discovery. A specific embodiment of the technology, integrating de novo, abstract fragment based drug design, sophisticated molecular simulation, and the ability to calculate thermodynamic binding constants with unprecedented accuracy, is discussed. Copyright © 2011 Elsevier Ltd. All rights reserved.
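
    For reference, the standard thermodynamic link between a computed Gibbs free energy of binding and the measured association or dissociation constant (textbook relation, not specific to this paper) is

        \Delta G^{\circ}_{\mathrm{bind}} \;=\; -RT\,\ln\!\left(K_a C^{\circ}\right) \;=\; RT\,\ln\!\left(\frac{K_d}{C^{\circ}}\right), \qquad C^{\circ} = 1\ \mathrm{mol\,L^{-1}},

    so that, for example, a dissociation constant of 1 nM at 300 K corresponds to a binding free energy of roughly -12 kcal/mol.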

  11. Reliability Lessons Learned From GPU Experience With The Titan Supercomputer at Oak Ridge Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Gallarno, George [Christian Brothers University]; Rogers, James H [ORNL]; Maxwell, Don E [ORNL]

    2015-01-01

    The high computational capability of graphics processing units (GPUs) is enabling and driving the scientific discovery process at large-scale. The world's second fastest supercomputer for open science, Titan, has more than 18,000 GPUs that computational scientists use to perform scientific simulations and data analysis. Understanding of GPU reliability characteristics, however, is still in its nascent stage since GPUs have only recently been deployed at large-scale. This paper presents a detailed study of GPU errors and their impact on system operations and applications, describing experiences with the 18,688 GPUs on the Titan supercomputer as well as lessons learned in the process of efficient operation of GPUs at scale. These experiences are helpful to HPC sites which already have large-scale GPU clusters or plan to deploy GPUs in the future.

  12. Operational numerical weather prediction on a GPU-accelerated cluster supercomputer

    Science.gov (United States)

    Lapillonne, Xavier; Fuhrer, Oliver; Spörri, Pascal; Osuna, Carlos; Walser, André; Arteaga, Andrea; Gysi, Tobias; Rüdisühli, Stefan; Osterried, Katherine; Schulthess, Thomas

    2016-04-01

    The local area weather prediction model COSMO is used at MeteoSwiss to provide high resolution numerical weather predictions over the Alpine region. In order to benefit from the latest developments in computer technology, the model was optimized and adapted to run on Graphical Processing Units (GPUs). Thanks to these model adaptations and the acquisition of a dedicated hybrid supercomputer, a new set of operational applications has been introduced at MeteoSwiss: COSMO-1 (1 km deterministic), COSMO-E (2 km ensemble) and KENDA (data assimilation). These new applications correspond to an increase in computational load by a factor of 40 as compared to the previous operational setup. We present an overview of the porting approach of the COSMO model to GPUs together with a detailed description of, and performance results on, the new hybrid Cray CS-Storm computer, Piz Kesch.

  13. A CPU/MIC Collaborated Parallel Framework for GROMACS on Tianhe-2 Supercomputer.

    Science.gov (United States)

    Peng, Shaoliang; Yang, Shunyun; Su, Wenhe; Zhang, Xiaoyu; Zhang, Tenglilang; Liu, Weiguo; Zhao, Xingming

    2017-06-16

    Molecular Dynamics (MD) is the simulation of the dynamic behavior of atoms and molecules. As the most popular software for molecular dynamics, GROMACS cannot work on large-scale data because of limited computing resources. In this paper, we propose a CPU and Intel® Xeon Phi Many Integrated Core (MIC) collaborated parallel framework to accelerate GROMACS using the offload mode on a MIC coprocessor, with which the performance of GROMACS is improved significantly, especially when utilizing the Tianhe-2 supercomputer. Furthermore, we optimize GROMACS so that it can run on both the CPU and MIC at the same time. In addition, we accelerate multi-node GROMACS so that it can be used in practice. Benchmarking on real data, our accelerated GROMACS performs very well and reduces computation time significantly. Source code: https://github.com/tianhe2/gromacs-mic.

  14. Mixed precision numerical weather prediction on hybrid GPU-CPU supercomputers

    Science.gov (United States)

    Lapillonne, Xavier; Osuna, Carlos; Spoerri, Pascal; Osterried, Katherine; Charpilloz, Christophe; Fuhrer, Oliver

    2017-04-01

    A new version of the climate and weather model COSMO that runs faster on traditional high performance computing systems with CPUs as well as on heterogeneous architectures using graphics processing units (GPUs) has been developed. In addition, the model was adapted to run in "single precision" mode. After discussing the key changes introduced in this new model version and the tools used in the porting approach, we present 3 applications, namely the MeteoSwiss operational weather prediction system, COSMO-LEPS and the CALMO project, which already take advantage of the performance improvement of up to a factor of 4 by running on a GPU system and using the single-precision mode. We discuss how the code changes open new perspectives for scientific research and can enable researchers to get access to a new class of supercomputers.

  15. Palacios and Kitten : high performance operating systems for scalable virtualized and native supercomputing.

    Energy Technology Data Exchange (ETDEWEB)

    Widener, Patrick (University of New Mexico); Jaconette, Steven (Northwestern University); Bridges, Patrick G. (University of New Mexico); Xia, Lei (Northwestern University); Dinda, Peter (Northwestern University); Cui, Zheng.; Lange, John (Northwestern University); Hudson, Trammell B.; Levenhagen, Michael J.; Pedretti, Kevin Thomas Tauke; Brightwell, Ronald Brian

    2009-09-01

    Palacios and Kitten are new open source tools that enable applications, whether ported or not, to achieve scalable high performance on large machines. They provide a thin layer over the hardware to support both full-featured virtualized environments and native code bases. Kitten is an OS under development at Sandia that implements a lightweight kernel architecture to provide predictable behavior and increased flexibility on large machines, while also providing Linux binary compatibility. Palacios is a VMM that is under development at Northwestern University and the University of New Mexico. Palacios, which can be embedded into Kitten and other OSes, supports existing, unmodified applications and operating systems by using virtualization that leverages hardware technologies. We describe the design and implementation of both Kitten and Palacios. Our benchmarks show that they provide near native, scalable performance. Palacios and Kitten provide an incremental path to using supercomputer resources that is not performance-compromised.

  16. Modern Gyrokinetic Particle-In-Cell Simulation of Fusion Plasmas on Top Supercomputers

    CERN Document Server

    Wang, Bei; Tang, William; Ibrahim, Khaled; Madduri, Kamesh; Williams, Samuel; Oliker, Leonid

    2015-01-01

    The Gyrokinetic Toroidal Code at Princeton (GTC-P) is a highly scalable and portable particle-in-cell (PIC) code. It solves the 5D Vlasov-Poisson equation featuring efficient utilization of modern parallel computer architectures at the petascale and beyond. Motivated by the goal of developing a modern code capable of dealing with the physics challenge of increasing problem size with sufficient resolution, new thread-level optimizations have been introduced as well as a key additional domain decomposition. GTC-P's multiple levels of parallelism, including inter-node 2D domain decomposition and particle decomposition, as well as intra-node shared memory partition and vectorization have enabled pushing the scalability of the PIC method to extreme computational scales. In this paper, we describe the methods developed to build a highly parallelized PIC code across a broad range of supercomputer designs. This particularly includes implementations on heterogeneous systems using NVIDIA GPU accelerators and Intel Xeon...

  17. Dawning Nebulae: A PetaFLOPS Supercomputer with a Heterogeneous Structure

    Institute of Scientific and Technical Information of China (English)

    Ning-Hui Sun; Jing Xing; Zhi-Gang Huo; Guang-Ming Tan; Jin Xiong; Bo Li; Can Ma

    2011-01-01

    Dawning Nebulae is a heterogeneous system composed of 9280 multi-core x86 CPUs and 4640 NVIDIA Fermi GPUs. With a Linpack performance of 1.271 petaFLOPS, it was ranked the second in the TOP500 List released in June 2010. In this paper, key issues in the system design of Dawning Nebulae are introduced. System tuning methodologies aiming at petaFLOPS Linpack result are presented, including algorithmic optimization and communication improvement. The design of its file I/O subsystem, including HVFS and the underlying DCFS3, is also described. Performance evaluations show that the Linpack efficiency of each node reaches 69.89%, and 1024-node aggregate read and write bandwidths exceed 100 GB/s and 70 GB/s respectively. The success of Dawning Nebulae has demonstrated the viability of CPU/GPU heterogeneous structure for future designs of supercomputers.

  18. Scalability Test of multiscale fluid-platelet model for three top supercomputers

    Science.gov (United States)

    Zhang, Peng; Zhang, Na; Gao, Chao; Zhang, Li; Gao, Yuxiang; Deng, Yuefan; Bluestein, Danny

    2016-07-01

    We have tested the scalability of three supercomputers, the Tianhe-2, Stampede and CS-Storm, with multiscale fluid-platelet simulations, in which a highly resolved and efficient numerical model for the nanoscale biophysics of platelets in microscale viscous biofluids is considered. Three experiments involving varying problem sizes were performed: Exp-S: 680,718-particle single-platelet; Exp-M: 2,722,872-particle 4-platelet; and Exp-L: 10,891,488-particle 16-platelet. Our implementation of a multiple time-stepping (MTS) algorithm improved on the performance of single time-stepping (STS) in all experiments. Using MTS, our model achieved the following simulation rates: 12.5, 25.0 and 35.5 μs/day for Exp-S and 9.09, 6.25 and 14.29 μs/day for Exp-M on Tianhe-2, CS-Storm 16-K80 and Stampede K20, respectively. The best rate for Exp-L was 6.25 μs/day, on Stampede. Utilizing current advanced HPC resources, the simulation rates achieved by our algorithms bring within reach complex multiscale simulations for solving vexing problems at the interface of biology and engineering, such as thrombosis in blood flow, which combines millisecond-scale hematology with microscale blood flow at resolutions of micro-to-nanoscale cellular components of platelets. This study of the performance characteristics of supercomputers running advanced computational algorithms that offer an optimal trade-off to achieve enhanced computational performance demonstrates that such simulations are feasible with currently available HPC resources.
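
    The paper's MTS scheme is not detailed in the abstract. As a generic illustration of multiple time-stepping, the sketch below advances a cheap "fast" force every step and an expensive "slow" force only every k-th step (a RESPA-like splitting with placeholder force functions and toy parameters).

        # generic multiple time-stepping (MTS) sketch with placeholder force terms
        # (RESPA-like splitting; not the paper's platelet model)
        import numpy as np

        def fast_force(x):   # cheap, stiff interactions (evaluated every step)
            return -10.0 * x

        def slow_force(x):   # expensive, smooth interactions (evaluated every k steps)
            return -0.1 * x**3

        x, v = np.ones(3), np.zeros(3)
        dt, k = 1e-3, 5
        f_slow = slow_force(x)
        for step in range(10000):
            if step % k == 0:
                f_slow = slow_force(x)          # refresh the expensive force infrequently
            v += dt * (fast_force(x) + f_slow)  # semi-implicit Euler update, kept short for clarity
            x += dt * v
        print("final position:", x)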

  19. Efficient development of memory bounded geo-applications to scale on modern supercomputers

    Science.gov (United States)

    Räss, Ludovic; Omlin, Samuel; Licul, Aleksandar; Podladchikov, Yuri; Herman, Frédéric

    2016-04-01

    Numerical modeling is now a key tool in the geosciences. The current challenge is to solve problems that are multi-physics and for which the length scale and the place of occurrence might not be known in advance. Also, the spatial extent of the investigated domain might strongly vary in size, ranging from millimeters for reactive transport to kilometers for glacier erosion dynamics. An efficient way to proceed is to develop simple but robust algorithms that perform well and scale on modern supercomputers and therefore permit very high-resolution simulations. We propose an efficient approach to solve memory bounded real-world applications on modern supercomputer architectures. We optimize the software to run on our newly acquired state-of-the-art GPU cluster "octopus". Our approach shows promising preliminary results on important geodynamical and geomechanical problems: we have developed a Stokes solver for glacier flow and a poromechanical solver including complex rheologies for nonlinear waves in stressed porous rocks. We solve the system of partial differential equations on a regular Cartesian grid and use an iterative finite difference scheme with preconditioning of the residuals. The MPI communication happens only locally (point-to-point); this method is known to scale linearly by construction. The "octopus" GPU cluster, which we use for the computations, has been designed to achieve maximal data transfer throughput at minimal hardware cost. It is composed of twenty compute nodes, each hosting four Nvidia Titan X GPU accelerators. These high-density nodes are interconnected with a parallel (dual-rail) FDR InfiniBand network. Our efforts show promising preliminary results for the different physics investigated. The glacier flow solver achieves good accuracy in the relevant benchmarks and the coupled poromechanical solver makes it possible to explain previously unresolvable focused fluid flow as a natural outcome of the porosity setup. In both cases
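
    As a bare-bones illustration of the kind of solver described (an iterative finite-difference scheme on a regular Cartesian grid that damps the residuals until convergence), the sketch below applies a damped pseudo-transient iteration to a 2-D Poisson problem. It is generic numpy code with made-up parameters, not the glacier-flow or poromechanical solver itself.

        # damped iterative finite-difference (pseudo-transient) sketch for a 2-D Poisson problem
        # (generic illustration; not the glacier-flow or poromechanical solver from the abstract)
        import numpy as np

        n, dx = 128, 1.0 / 127
        u = np.zeros((n, n))
        rhs = np.ones((n, n))
        dtau, damping = 0.2 * dx**2, 0.9          # pseudo-time step and residual damping
        dudtau = np.zeros((n, n))

        for it in range(20000):
            lap = np.zeros_like(u)
            lap[1:-1, 1:-1] = (u[2:, 1:-1] + u[:-2, 1:-1] + u[1:-1, 2:] + u[1:-1, :-2]
                               - 4.0 * u[1:-1, 1:-1]) / dx**2
            residual = lap - rhs
            dudtau = damping * dudtau + residual   # damped residual update
            u[1:-1, 1:-1] += dtau * dudtau[1:-1, 1:-1]
            if it % 5000 == 0:
                print(it, np.abs(residual[1:-1, 1:-1]).max())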

  20. Harnessing Petaflop-Scale Multi-Core Supercomputing for Problems in Space Science

    Science.gov (United States)

    Albright, B. J.; Yin, L.; Bowers, K. J.; Daughton, W.; Bergen, B.; Kwan, T. J.

    2008-12-01

    The particle-in-cell kinetic plasma code VPIC has been migrated successfully to the world's fastest supercomputer, Roadrunner, a hybrid multi-core platform built by IBM for the Los Alamos National Laboratory. How this was achieved will be described and examples of state-of-the-art calculations in space science, in particular, the study of magnetic reconnection, will be presented. With VPIC on Roadrunner, we have performed, for the first time, plasma PIC calculations with over one trillion particles, >100× larger than calculations considered "heroic" by community standards. This allows examination of physics at unprecedented scale and fidelity. Roadrunner is an example of an emerging paradigm in supercomputing: the trend toward multi-core systems with deep hierarchies and where memory bandwidth optimization is vital to achieving high performance. Getting VPIC to perform well on such systems is a formidable challenge: the core algorithm is memory bandwidth limited with low compute-to-data ratio and requires random access to memory in its inner loop. That we were able to get VPIC to perform and scale well, achieving >0.374 Pflop/s and linear weak scaling on real physics problems on up to the full 12240-core Roadrunner machine, bodes well for harnessing these machines for our community's needs in the future. Many of the design considerations encountered carry over to other multi-core and accelerated (e.g., via GPU) platforms, and we modified VPIC with flexibility in mind. These will be summarized and strategies for how one might adapt a code for such platforms will be shared. Work performed under the auspices of the U.S. DOE by the LANS LLC Los Alamos National Laboratory. Dr. Bowers is a LANL Guest Scientist; he is presently at D. E. Shaw Research LLC, 120 W 45th Street, 39th Floor, New York, NY 10036.

  1. Seismic Sensors to Supercomputers: Internet Mapping and Computational Tools for Teaching and Learning about Earthquakes and the Structure of the Earth from Seismology

    Science.gov (United States)

    Meertens, C. M.; Seber, D.; Hamburger, M.

    2004-12-01

    The Internet has become an integral resource in the classrooms and homes of teachers and students. Widespread Web-access to seismic data and analysis tools enhances opportunities for teaching and learning about earthquakes and the structure of the earth from seismic tomography. We will present an overview and demonstration of the UNAVCO Voyager Java- and Javascript-based mapping tools (jules.unavco.org) and the Cornell University/San Diego Supercomputer Center (www.discoverourearth.org) Java-based data analysis and mapping tools. These map tools, datasets, and related educational websites have been developed and tested by collaborative teams of scientific programmers, research scientists, and educators. Dual-use by research and education communities ensures persistence of the tools and data, motivates on-going development, and encourages fresh content. With these tools are curricular materials and on-going evaluation processes that are essential for an effective application in the classroom. The map tools provide not only seismological data and tomographic models of the earth's interior, but also a wealth of associated map data such as topography, gravity, sea-floor age, plate tectonic motions and strain rates determined from GPS geodesy, seismic hazard maps, stress, and a host of geographical data. These additional datasets help to provide context and enable comparisons leading to an integrated view of the planet and the on-going processes that shape it. Emerging Cyberinfrastructure projects such as the NSF-funded GEON Information Technology Research project (www.geongrid.org) are developing grid/web services, advanced visualization software, distributed databases and data sharing methods, concept-based search mechanisms, and grid-computing resources for earth science and education. These developments in infrastructure seek to extend the access to data and to complex modeling tools from the hands of a few researchers to a much broader set of users. The GEON

  2. NASA Center for Climate Simulation (NCCS) Presentation

    Science.gov (United States)

    Webster, William P.

    2012-01-01

    The NASA Center for Climate Simulation (NCCS) offers integrated supercomputing, visualization, and data interaction technologies to enhance NASA's weather and climate prediction capabilities. It serves hundreds of users at NASA Goddard Space Flight Center, as well as other NASA centers, laboratories, and universities across the US. Over the past year, NCCS has continued expanding its data-centric computing environment to meet the increasingly data-intensive challenges of climate science. We doubled our Discover supercomputer's peak performance to more than 800 teraflops by adding 7,680 Intel Xeon Sandy Bridge processor-cores and most recently 240 Intel Xeon Phi Many Integrated Core (MIC) co-processors. A supercomputing-class analysis system named Dali gives users rapid access to their data on Discover and high-performance software including the Ultra-scale Visualization Climate Data Analysis Tools (UV-CDAT), with interfaces from user desktops and a 17- by 6-foot visualization wall. NCCS also is exploring highly efficient climate data services and management with a new MapReduce/Hadoop cluster while augmenting its data distribution to the science community. Using NCCS resources, NASA completed its modeling contributions to the Intergovernmental Panel on Climate Change (IPCC) Fifth Assessment Report this summer as part of the ongoing Coupled Model Intercomparison Project Phase 5 (CMIP5). Ensembles of simulations run on Discover reached back to the year 1000 to test model accuracy and projected climate change through the year 2300 based on four different scenarios of greenhouse gases, aerosols, and land use. The data resulting from several thousand IPCC/CMIP5 simulations, as well as a variety of other simulation, reanalysis, and observation datasets, are available to scientists and decision makers through an enhanced NCCS Earth System Grid Federation Gateway. Worldwide downloads have totaled over 110 terabytes of data.

  3. High temporal resolution mapping of seismic noise sources using heterogeneous supercomputers

    Science.gov (United States)

    Gokhberg, Alexey; Ermert, Laura; Paitz, Patrick; Fichtner, Andreas

    2017-04-01

    Time- and space-dependent distribution of seismic noise sources is becoming a key ingredient of modern real-time monitoring of various geo-systems. Significant interest in seismic noise source maps with high temporal resolution (days) is expected to come from a number of domains, including natural resources exploration, analysis of active earthquake fault zones and volcanoes, as well as geothermal and hydrocarbon reservoir monitoring. Currently, knowledge of noise sources is insufficient for high-resolution subsurface monitoring applications. Near-real-time seismic data, as well as advanced imaging methods to constrain seismic noise sources, have recently become available. These methods are based on the massive cross-correlation of seismic noise records from all available seismic stations in the region of interest and are therefore very computationally intensive. Heterogeneous massively parallel supercomputing systems introduced in recent years combine conventional multi-core CPUs with GPU accelerators and provide an opportunity for a manifold increase in computing performance. Therefore, these systems represent an efficient platform for implementation of a noise source mapping solution. We present the first results of an ongoing research project conducted in collaboration with the Swiss National Supercomputing Centre (CSCS). The project aims at building a service that provides seismic noise source maps for Central Europe with high temporal resolution (days to few weeks depending on frequency and data availability). The service is hosted on the CSCS computing infrastructure; all computationally intensive processing is performed on the massively parallel heterogeneous supercomputer "Piz Daint". The solution architecture is based on the Application-as-a-Service concept in order to provide interested external researchers regular access to the noise source maps. The solution architecture includes the following sub-systems: (1) data acquisition responsible for
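
    The imaging method itself is beyond the abstract; its computational core, massive cross-correlation of noise records between station pairs, can be sketched generically as an FFT-based cross-correlation of two synthetic traces, as below. The sampling rate, noise level and imposed delay are all assumptions.

        # FFT-based cross-correlation of two seismic noise traces (synthetic data)
        # (generic illustration of the compute kernel; not the project's actual pipeline)
        import numpy as np

        rng = np.random.default_rng(3)
        fs, lag_true = 100.0, 150                  # 100 Hz sampling, 1.5 s true delay
        source = rng.normal(size=360000)           # one hour of common noise
        trace_a = source + 0.5 * rng.normal(size=source.size)
        trace_b = np.roll(source, lag_true) + 0.5 * rng.normal(size=source.size)

        n = 2 * source.size                        # zero padding against circular wrap-around
        spec = np.conj(np.fft.rfft(trace_a, n)) * np.fft.rfft(trace_b, n)
        xcorr = np.fft.irfft(spec, n)
        lag = int(np.argmax(xcorr[:source.size]))  # search positive lags only, for brevity
        print("estimated delay of trace_b behind trace_a:", lag / fs, "s")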

  4. The company's mainframes join CERN's openlab for DataGrid apps and are pivotal in a new $22 million Supercomputer in the U.K.

    CERN Multimedia

    2002-01-01

    Hewlett-Packard has installed a supercomputer system valued at more than $22 million at the Wellcome Trust Sanger Institute (WTSI) in the U.K. HP has also joined the CERN openlab for DataGrid applications (1 page).

  5. Earth and environmental science in the 1980's: Part 1: Environmental data systems, supercomputer facilities and networks

    Science.gov (United States)

    1986-01-01

    Overview descriptions of on-line environmental data systems, supercomputer facilities, and networks are presented. Each description addresses the concepts of content, capability, and user access relevant to the point of view of potential utilization by the Earth and environmental science community. The information on similar systems or facilities is presented in parallel fashion to encourage and facilitate intercomparison. In addition, summary sheets are given for each description, and a summary table precedes each section.

  6. A Parallel Supercomputer Implementation of a Biological Inspired Neural Network and its use for Pattern Recognition

    Science.gov (United States)

    de Ladurantaye, Vincent; Lavoie, Jean; Bergeron, Jocelyn; Parenteau, Maxime; Lu, Huizhong; Pichevar, Ramin; Rouat, Jean

    2012-02-01

    A parallel implementation of a large spiking neural network is proposed and evaluated. The neural network implements the binding by synchrony process using the Oscillatory Dynamic Link Matcher (ODLM). Scalability, speed and performance are compared for 2 implementations: Message Passing Interface (MPI) and Compute Unified Device Architecture (CUDA) running on clusters of multicore supercomputers and NVIDIA graphical processing units respectively. A global spiking list that represents at each instant the state of the neural network is described. This list indexes each neuron that fires during the current simulation time so that the influence of their spikes is simultaneously processed on all computing units. Our implementation shows a good scalability for very large networks. A complex and large spiking neural network has been implemented in parallel with success, thus paving the road towards real-life applications based on networks of spiking neurons. MPI offers a better scalability than CUDA, while the CUDA implementation on a GeForce GTX 285 gives the best cost-to-performance ratio. When running the neural network on the GTX 285, the processing speed is comparable to the MPI implementation on RQCHP's Mammouth parallel with 64 nodes (128 cores).
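
    The ODLM dynamics are not reproduced here. The sketch below only illustrates the global-spiking-list idea the abstract describes (collect the indices of all neurons that fire in the current step and apply their influence to every target in one vectorized update), using a toy leaky integrate-and-fire network with made-up parameters.

        # minimal global-spike-list sketch: neurons firing this step are indexed in one list
        # and their influence is applied to all targets in a single vectorized update
        # (toy leaky integrate-and-fire dynamics; not the ODLM model itself)
        import numpy as np

        rng = np.random.default_rng(4)
        n = 1000
        weights = rng.normal(scale=0.02, size=(n, n))     # synaptic weight matrix
        potential = rng.uniform(size=n)
        threshold, leak = 1.0, 0.98

        for step in range(100):
            spike_list = np.flatnonzero(potential >= threshold)   # the global spiking list
            potential[spike_list] = 0.0                           # reset fired neurons
            potential = (leak * potential
                         + weights[:, spike_list].sum(axis=1)     # apply all spikes at once
                         + rng.uniform(0, 0.05, size=n))          # small external drive
        print("spikes in last step:", spike_list.size)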

  7. Massively-parallel electrical-conductivity imaging of hydrocarbons using the Blue Gene/L supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Commer, M.; Newman, G.A.; Carazzone, J.J.; Dickens, T.A.; Green, K.E.; Wahrmund, L.A.; Willen, D.E.; Shiu, J.

    2007-05-16

    Large-scale controlled source electromagnetic (CSEM) three-dimensional (3D) geophysical imaging is now receiving considerable attention for electrical conductivity mapping of potential offshore oil and gas reservoirs. To cope with the typically large computational requirements of the 3D CSEM imaging problem, our strategies exploit computational parallelism and optimized finite-difference meshing. We report on an imaging experiment, utilizing 32,768 tasks/processors on the IBM Watson Research Blue Gene/L (BG/L) supercomputer. Over a 24-hour period, we were able to image a large scale marine CSEM field data set that previously required over four months of computing time on distributed clusters utilizing 1024 tasks on an Infiniband fabric. The total initial data misfit could be decreased by 67 percent within 72 completed inversion iterations, indicating an electrically resistive region in the southern survey area below a depth of 1500 m below the seafloor. The major part of the residual misfit stems from transmitter-parallel receiver components that have an offset from the transmitter sail line (broadside configuration). Modeling confirms that improved broadside data fits can be achieved by considering anisotropic electrical conductivities. While delivering a satisfactory gross scale image for the depths of interest, the experiment provides important evidence for the necessity of discriminating between horizontal and vertical conductivities for maximally consistent 3D CSEM inversions.

  8. 369 TFlop/s molecular dynamics simulations on the Roadrunner general-purpose heterogeneous supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Swaminarayan, Sriram [Los Alamos National Laboratory]; Germann, Timothy C [Los Alamos National Laboratory]; Kadau, Kai [Los Alamos National Laboratory]; Fossum, Gordon C [IBM CORPORATION]

    2008-01-01

    The authors present timing and performance numbers for a short-range parallel molecular dynamics (MD) code, SPaSM, that has been rewritten for the heterogeneous Roadrunner supercomputer. Each Roadrunner compute node consists of two AMD Opteron dual-core microprocessors and four PowerXCell 8i enhanced Cell microprocessors, so that there are four MPI ranks per node, each with one Opteron and one Cell. The interatomic forces are computed on the Cells (each with one PPU and eight SPU cores), while the Opterons are used to direct inter-rank communication and perform I/O-heavy periodic analysis, visualization, and checkpointing tasks. The performance measured for their initial implementation of a standard Lennard-Jones pair potential benchmark reached a peak of 369 Tflop/s double-precision floating-point performance on the full Roadrunner system (27.7% of peak), corresponding to 124 MFlop/s/Watt at a price of approximately 3.69 MFlop/s/dollar. They demonstrate an initial target application, the jetting and ejection of material from a shocked surface.
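
    For context, the interaction in the benchmark is the textbook Lennard-Jones pair potential (standard form, nothing specific to SPaSM),

        V_{\mathrm{LJ}}(r) \;=\; 4\varepsilon\left[\left(\frac{\sigma}{r}\right)^{12} - \left(\frac{\sigma}{r}\right)^{6}\right], \qquad F(r) \;=\; -\frac{dV_{\mathrm{LJ}}}{dr} \;=\; \frac{24\varepsilon}{r}\left[2\left(\frac{\sigma}{r}\right)^{12} - \left(\frac{\sigma}{r}\right)^{6}\right],

    whose short range is what makes cutoff-based, spatially decomposed parallelization effective.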

  9. Cell-based Adaptive Mesh Refinement on the GPU with Applications to Exascale Supercomputing

    Science.gov (United States)

    Trujillo, Dennis; Robey, Robert; Davis, Neal; Nicholaeff, David

    2011-10-01

    We present an OpenCL implementation of a cell-based adaptive mesh refinement (AMR) scheme for the shallow water equations. The challenges associated with ensuring the locality of the algorithm architecture to fully exploit the massive number of parallel threads on the GPU are discussed. This includes a proof of concept that a cell-based AMR code can be effectively implemented, even on a small scale, in the memory and threading model provided by OpenCL. Additionally, the program requires dynamic memory in order to properly implement the mesh; as this is not supported in the OpenCL 1.1 standard, a combination of CPU memory management and GPU computation effectively implements a dynamic memory allocation scheme. Load balancing is achieved through a new stencil-based implementation of a space-filling curve, eliminating the need for a complete recalculation of the indexing on the mesh. A Cartesian grid hash table scheme to allow fast parallel neighbor accesses is also discussed. Finally, the relative speedup of the GPU-enabled AMR code is compared to the original serial version. We conclude that parallelization using the GPU provides significant speedup for typical numerical applications and is feasible for scientific applications in the next generation of supercomputing.
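
    The paper's stencil-based space-filling-curve implementation is not reproduced here. As a generic illustration of why such curves help with load balancing, the sketch below computes 2-D Morton (Z-order) indices for cell coordinates and splits the sorted curve evenly among workers, so that each worker receives a spatially compact set of cells.

        # generic Z-order (Morton) indexing sketch for load balancing AMR cells
        # (illustrative only; the paper uses its own stencil-based space-filling curve)
        def morton2d(x, y, bits=16):
            """Interleave the bits of x and y into a single Z-order index."""
            z = 0
            for i in range(bits):
                z |= ((x >> i) & 1) << (2 * i) | ((y >> i) & 1) << (2 * i + 1)
            return z

        cells = [(i, j) for i in range(8) for j in range(8)]
        curve = sorted(cells, key=lambda c: morton2d(*c))

        n_workers = 4
        chunk = len(curve) // n_workers
        partitions = [curve[k * chunk:(k + 1) * chunk] for k in range(n_workers)]
        for k, part in enumerate(partitions):
            print(f"worker {k}: {len(part)} cells, first {part[0]}, last {part[-1]}")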

  10. Distributed computing as a virtual supercomputer: Tools to run and manage large-scale BOINC simulations

    Science.gov (United States)

    Giorgino, Toni; Harvey, M. J.; de Fabritiis, Gianni

    2010-08-01

    Distributed computing (DC) projects tackle large computational problems by exploiting the donated processing power of thousands of volunteered computers, connected through the Internet. To efficiently employ the computational resources of one of world's largest DC efforts, GPUGRID, the project scientists require tools that handle hundreds of thousands of tasks which run asynchronously and generate gigabytes of data every day. We describe RBoinc, an interface that allows computational scientists to embed the DC methodology into the daily work-flow of high-throughput experiments. By extending the Berkeley Open Infrastructure for Network Computing (BOINC), the leading open-source middleware for current DC projects, with mechanisms to submit and manage large-scale distributed computations from individual workstations, RBoinc turns distributed grids into cost-effective virtual resources that can be employed by researchers in work-flows similar to conventional supercomputers. The GPUGRID project is currently using RBoinc for all of its in silico experiments based on molecular dynamics methods, including the determination of binding free energies and free energy profiles in all-atom models of biomolecules.

  11. A user-friendly web portal for T-Coffee on supercomputers

    Directory of Open Access Journals (Sweden)

    Koetsier Jos

    2011-05-01

    Background: Parallel T-Coffee (PTC) was the first parallel implementation of the T-Coffee multiple sequence alignment tool. It is based on MPI and RMA mechanisms. Its purpose is to reduce the execution time of large-scale sequence alignments. It can be run on distributed memory clusters, allowing users to align data sets consisting of hundreds of proteins within a reasonable time. However, most of the potential users of this tool are not familiar with the use of grids or supercomputers. Results: In this paper we show how PTC can be easily deployed and controlled on a supercomputer architecture using a web portal developed with Rapid. Rapid is a tool for efficiently generating standardized portlets for a wide range of applications, and the approach described here is generic enough to be applied to other applications, or to deploy PTC on different HPC environments. Conclusions: The PTC portal allows users to upload a large number of sequences to be aligned by the parallel version of TC that cannot be aligned by a single machine due to memory and execution time constraints. The web portal provides a user-friendly solution.

  12. PFLOTRAN: Reactive Flow & Transport Code for Use on Laptops to Leadership-Class Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Hammond, Glenn E.; Lichtner, Peter C.; Lu, Chuan; Mills, Richard T.

    2012-04-18

    PFLOTRAN, a next-generation reactive flow and transport code for modeling subsurface processes, has been designed from the ground up to run efficiently on machines ranging from leadership-class supercomputers to laptops. Based on an object-oriented design, the code is easily extensible to incorporate additional processes. It can interface seamlessly with Fortran 9X, C and C++ codes. Domain decomposition parallelism is employed, with the PETSc parallel framework used to manage parallel solvers, data structures and communication. Features of the code include a modular input file, implementation of high-performance I/O using parallel HDF5, ability to perform multiple realization simulations with multiple processors per realization in a seamless manner, and multiple modes for multiphase flow and multicomponent geochemical transport. Chemical reactions currently implemented in the code include homogeneous aqueous complexing reactions and heterogeneous mineral precipitation/dissolution, ion exchange, surface complexation and a multirate kinetic sorption model. PFLOTRAN has demonstrated petascale performance using 2^17 processor cores with over 2 billion degrees of freedom. Accomplishments achieved to date include applications to the Hanford 300 Area and modeling CO2 sequestration in deep geologic formations.

  13. A user-friendly web portal for T-Coffee on supercomputers.

    Science.gov (United States)

    Rius, Josep; Cores, Fernando; Solsona, Francesc; van Hemert, Jano I; Koetsier, Jos; Notredame, Cedric

    2011-05-12

    Parallel T-Coffee (PTC) was the first parallel implementation of the T-Coffee multiple sequence alignment tool. It is based on MPI and RMA mechanisms. Its purpose is to reduce the execution time of large-scale sequence alignments. It can be run on distributed memory clusters, allowing users to align data sets consisting of hundreds of proteins within a reasonable time. However, most of the potential users of this tool are not familiar with the use of grids or supercomputers. In this paper we show how PTC can be easily deployed and controlled on a supercomputer architecture using a web portal developed with Rapid. Rapid is a tool for efficiently generating standardized portlets for a wide range of applications, and the approach described here is generic enough to be applied to other applications, or to deploy PTC on different HPC environments. The PTC portal allows users to upload a large number of sequences to be aligned by the parallel version of TC that cannot be aligned by a single machine due to memory and execution time constraints. The web portal provides a user-friendly solution.

  14. Benchmarking Further Single Board Computers for Building a Mini Supercomputer for Simulation of Telecommunication Systems

    Directory of Open Access Journals (Sweden)

    Gábor Lencse

    2016-01-01

    Parallel Discrete Event Simulation (PDES) with the conservative synchronization method can be efficiently used for the performance analysis of telecommunication systems because of their good lookahead properties. For PDES, a cost-effective execution platform may be built by using single board computers (SBCs), which offer relatively high computation capacity compared to their price or power consumption and especially to the space they take up. A benchmarking method is proposed and its operation is demonstrated by benchmarking ten different SBCs, namely Banana Pi, Beaglebone Black, Cubieboard2, Odroid-C1+, Odroid-U3+, Odroid-XU3 Lite, Orange Pi Plus, Radxa Rock Lite, Raspberry Pi Model B+, and Raspberry Pi 2 Model B+. Their benchmarking results are compared to find out which one should be used for building a mini supercomputer for parallel discrete-event simulation of telecommunication systems. The SBCs are also used to build a heterogeneous cluster and the performance of the cluster is tested, too.

  15. Bringing ATLAS production to HPC resources - A use case with the Hydra supercomputer of the Max Planck Society

    Science.gov (United States)

    Kennedy, J. A.; Kluth, S.; Mazzaferro, L.; Walker, Rodney

    2015-12-01

    The possible usage of HPC resources by ATLAS is now becoming viable due to the changing nature of these systems and it is also very attractive due to the need for increasing amounts of simulated data. In recent years the architecture of HPC systems has evolved, moving away from specialized monolithic systems, to a more generic linux type platform. This change means that the deployment of non HPC specific codes has become much easier. The timing of this evolution perfectly suits the needs of ATLAS and opens a new window of opportunity. The ATLAS experiment at CERN will begin a period of high luminosity data taking in 2015. This high luminosity phase will be accompanied by a need for increasing amounts of simulated data which is expected to exceed the capabilities of the current Grid infrastructure. ATLAS aims to address this need by opportunistically accessing resources such as cloud and HPC systems. This paper presents the results of a pilot project undertaken by ATLAS and the MPP/RZG to provide access to the HYDRA supercomputer facility. Hydra is the supercomputer of the Max Planck Society, it is a linux based supercomputer with over 80000 cores and 4000 physical nodes located at the RZG near Munich. This paper describes the work undertaken to integrate Hydra into the ATLAS production system by using the Nordugrid ARC-CE and other standard Grid components. The customization of these components and the strategies for HPC usage are discussed as well as possibilities for future directions.

  16. Influence of Earth crust composition on continental collision style in Precambrian conditions: Results of supercomputer modelling

    Science.gov (United States)

    Zavyalov, Sergey; Zakharov, Vladimir

    2016-04-01

    A number of issues concerning Precambrian geodynamics still remain unsolved because of the uncertainty of many physical (thermal regime, lithosphere thickness, crust thickness, etc.) and chemical (mantle composition, crust composition) parameters, which differed considerably compared to the present-day values. In this work, we show results of numerical supercomputations based on a petrological and thermomechanical 2D model, which simulates the process of collision between two continental plates, each 80-160 km thick, with various convergence rates ranging from 5 to 15 cm/year. In the model, the upper mantle temperature is 150-200 °C higher than the modern value, while the continental crust radiogenic heat production is higher than the present value by a factor of 1.5. These settings correspond to Archean conditions. The present study investigates the dependence of collision style on various continental crust parameters, especially on crust composition. The following three archetypal settings of continental crust composition are examined: 1) completely felsic continental crust; 2) basic lower crust and felsic upper crust; 3) basic upper crust and felsic lower crust (hereinafter referred to as inverted crust). Modeling results show that collision with completely felsic crust is unlikely. In the case of basic lower crust, continental subduction and subsequent exhumation of continental rocks can take place. Therefore, formation of ultra-high pressure metamorphic rocks is possible. Continental subduction also occurs in the case of inverted continental crust. However, in the latter case, the exhumation of felsic rocks is blocked by the upper basic layer and their subsequent interaction depends on their volume ratio. Thus, if the total inverted crust thickness is about 15 km and the thicknesses of the two layers are equal, felsic rocks cannot be exhumed. If the total thickness is 30 to 40 km and that of the felsic layer is 20 to 25 km, it breaks through the basic layer leading to

  17. The PVM (Parallel Virtual Machine) system: Supercomputer level concurrent computation on a network of IBM RS/6000 power stations

    Energy Technology Data Exchange (ETDEWEB)

    Sunderam, V.S. (Emory Univ., Atlanta, GA (USA). Dept. of Mathematics and Computer Science); Geist, G.A. (Oak Ridge National Lab., TN (USA))

    1991-01-01

    The PVM (Parallel Virtual Machine) system enables supercomputer-level concurrent computations to be performed on interconnected networks of heterogeneous computer systems. Specifically, a network of 13 IBM RS/6000 powerstations has been successfully used to execute production-quality runs of superconductor modeling codes at more than 250 Mflops. This work demonstrates the effectiveness of cooperative concurrent processing for high performance applications, and shows that supercomputer-level computations may be attained at a fraction of the cost on distributed computing platforms. This paper describes the PVM programming environment and user facilities as they apply to hardware platforms comprising a network of IBM RS/6000 powerstations. The salient design features of PVM will be discussed, including heterogeneity, scalability, multilanguage support, provisions for fault tolerance, the use of multiprocessors and scalar machines, an interactive graphical front end, and support for profiling, tracing, and visual analysis. The PVM system has been used extensively, and a range of production-quality concurrent applications have been successfully executed using PVM on a variety of networked platforms. The paper mentions representative examples and discusses two in detail. The first is a materials science problem that was originally developed on a Cray 2. This application code calculates the electronic structure of metallic alloys from first principles and is based on the KKR-CPA algorithm. The second is a molecular dynamics simulation for calculating materials properties. Performance results for both applications on networks of RS/6000 powerstations will be presented, accompanied by discussions of the other advantages of PVM and its potential as a complement or alternative to conventional supercomputers.

  18. Calculation of Free Energy Landscape in Multi-Dimensions with Hamiltonian-Exchange Umbrella Sampling on Petascale Supercomputer.

    Science.gov (United States)

    Jiang, Wei; Luo, Yun; Maragliano, Luca; Roux, Benoît

    2012-11-13

    An extremely scalable computational strategy is described for calculations of the potential of mean force (PMF) in multiple dimensions on massively distributed supercomputers. The approach involves coupling thousands of umbrella sampling (US) simulation windows distributed to cover the space of order parameters with a Hamiltonian molecular dynamics replica-exchange (H-REMD) algorithm to enhance the sampling of each simulation. In the present application, US/H-REMD is carried out in a two-dimensional (2D) space and exchanges are attempted alternately along the two axes corresponding to the two order parameters. The US/H-REMD strategy is implemented on the basis of a parallel/parallel multiple copy protocol at the MPI level, and therefore can fully exploit the computing power of large-scale supercomputers. Here the novel technique is illustrated using the leadership supercomputer IBM Blue Gene/P with an application to a typical biomolecular calculation of general interest, namely the binding of calcium ions to the small protein Calbindin D9k. The free energy landscape associated with two order parameters, the distance between the ion and its binding pocket and the root-mean-square deviation (rmsd) of the binding pocket relative to the crystal structure, was calculated using the US/H-REMD method. The results are then used to estimate the absolute binding free energy of calcium ion to Calbindin D9k. The tests demonstrate that the 2D US/H-REMD scheme greatly accelerates the configurational sampling of the binding pocket, thereby improving the convergence of the potential of mean force calculation.
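
    As an illustration of the exchange step described above, the following minimal Python sketch (not the authors' implementation) attempts Metropolis-style Hamiltonian exchanges between neighboring umbrella windows on a small 2D grid, alternating the exchange axis on successive cycles. The harmonic bias, the window spacing, and the use of scalar order-parameter values as stand-ins for full MD replicas are illustrative assumptions.

        import math, random

        def bias(center, x, k=10.0):
            # harmonic 2D umbrella bias U(x) = (k/2) |x - center|^2 (illustrative form)
            return 0.5 * k * sum((xi - ci) ** 2 for xi, ci in zip(x, center))

        def attempt_swap(x_i, c_i, x_j, c_j, beta=1.0):
            # Metropolis criterion for exchanging configurations between windows i and j
            delta = (bias(c_i, x_j) + bias(c_j, x_i)) - (bias(c_i, x_i) + bias(c_j, x_j))
            return delta <= 0.0 or random.random() < math.exp(-beta * delta)

        nx, ny = 4, 4                               # a small grid of US windows
        centers = {(ix, iy): (0.5 * ix, 0.5 * iy) for ix in range(nx) for iy in range(ny)}
        configs = {w: list(c) for w, c in centers.items()}   # stand-ins for MD replicas

        for cycle in range(100):
            axis = cycle % 2                        # alternate the exchange axis each cycle
            for (ix, iy), c_i in centers.items():
                if (ix if axis == 0 else iy) % 2:   # keep the attempted pairs disjoint
                    continue
                j = (ix + 1, iy) if axis == 0 else (ix, iy + 1)
                if j not in centers:
                    continue
                if attempt_swap(configs[(ix, iy)], c_i, configs[j], centers[j]):
                    configs[(ix, iy)], configs[j] = configs[j], configs[(ix, iy)]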

  19. Scalable geocomputation: evolving an environmental model building platform from single-core to supercomputers

    Science.gov (United States)

    Schmitz, Oliver; de Jong, Kor; Karssenberg, Derek

    2017-04-01

    There is an increasing demand to run environmental models at large scale: simulations over large areas at high resolution. The heterogeneity of available computing hardware, such as multi-core CPUs, GPUs or supercomputers, potentially provides significant computing power to fulfil this demand. However, this requires detailed knowledge of the underlying hardware, parallel algorithm design and the implementation thereof in an efficient system programming language. Domain scientists such as hydrologists or ecologists often lack this specific software engineering knowledge; their emphasis is (and should be) on exploratory building and analysis of simulation models. As a result, models constructed by domain specialists mostly do not take full advantage of the available hardware. A promising solution is to separate the model building activity from software engineering by offering domain specialists a model building framework with pre-programmed building blocks that they combine to construct a model. The model building framework, consequently, needs to have built-in capabilities to make full use of the available hardware. Developing such a framework, providing understandable code for domain scientists while being runtime efficient at the same time, poses several challenges for its developers. For example, optimisations can be performed on individual operations or on the whole model, and tasks need to be generated for a well-balanced execution without explicitly knowing the complexity of the domain problem provided by the modeller. Ideally, a modelling framework supports the optimal use of available hardware for whichever combination of model building blocks scientists use. We present our ongoing work on developing parallel algorithms for spatio-temporal modelling and demonstrate 1) PCRaster, an environmental software framework (http://www.pcraster.eu) providing spatio-temporal model building blocks and 2) the parallelisation of about 50 of these building blocks using

  20. Comparison of neuronal spike exchange methods on a Blue Gene/P supercomputer.

    Science.gov (United States)

    Hines, Michael; Kumar, Sameer; Schürmann, Felix

    2011-01-01

    For neural network simulations on parallel machines, interprocessor spike communication can be a significant portion of the total simulation time. The performance of several spike exchange methods using a Blue Gene/P (BG/P) supercomputer has been tested with 8-128 K cores using randomly connected networks of up to 32 M cells with 1 k connections per cell and 4 M cells with 10 k connections per cell, i.e., on the order of 4·10^10 connections (K is 1024, M is 1024^2, and k is 1000). The spike exchange methods used are the standard Message Passing Interface (MPI) collective, MPI_Allgather, and several variants of the non-blocking Multisend method either implemented via non-blocking MPI_Isend, or exploiting the possibility of very low overhead direct memory access (DMA) communication available on the BG/P. In all cases, the worst performing method was that using MPI_Isend, due to the high overhead of initiating a spike communication. The two best performing methods, namely the persistent Multisend method using the Record-Replay feature of the Deep Computing Messaging Framework (DCMF_Multicast), and a two-phase Multisend in which a DCMF_Multicast is used to first send to a subset of phase-one destination cores, which then pass it on to their subset of phase-two destination cores, had similar performance with very low overhead for the initiation of spike communication. Departure from ideal scaling for the Multisend methods is almost completely due to load imbalance caused by the large variation in the number of cells that fire on each processor in the interval between synchronizations. Spike exchange time itself is negligible since transmission overlaps with computation and is handled by a DMA controller. We conclude that ideal performance scaling will ultimately be limited by the imbalance in incoming spikes between synchronization intervals. Thus, counterintuitively, maximization of load balance requires that the distribution of cells on processors should not reflect
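
    For reference, the collective baseline compared above can be sketched in a few lines with mpi4py: every rank gathers the spikes generated by all other ranks once per minimum-delay interval, and would then filter them against its local connectivity. The cell-to-rank mapping, firing model and interval length below are placeholders, and the sketch stands in for the MPI_Allgather variant only, not the DMA-based Multisend methods.

        import random
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        CELLS_PER_RANK = 1000
        MIN_DELAY = 1.0                  # ms; length of each exchange interval
        local_cells = range(rank * CELLS_PER_RANK, (rank + 1) * CELLS_PER_RANK)

        def run_interval(t0, t1):
            # placeholder for local integration; returns (gid, spike_time) events
            return [(gid, random.uniform(t0, t1))
                    for gid in local_cells if random.random() < 0.01]

        t = 0.0
        for step in range(10):
            spikes = run_interval(t, t + MIN_DELAY)
            # every rank receives every other rank's spikes (Allgather-style exchange)
            all_spikes = [s for chunk in comm.allgather(spikes) for s in chunk]
            # a real simulator would now enqueue only the spikes whose source cells
            # project onto local targets; connectivity filtering is omitted here
            t += MIN_DELAY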

  1. Proposal of a Desk-Side Supercomputer with Reconfigurable Data-Paths Using Rapid Single-Flux-Quantum Circuits

    Science.gov (United States)

    Takagi, Naofumi; Murakami, Kazuaki; Fujimaki, Akira; Yoshikawa, Nobuyuki; Inoue, Koji; Honda, Hiroaki

    We propose a desk-side supercomputer with large-scale reconfigurable data-paths (LSRDPs) using superconducting rapid single-flux-quantum (RSFQ) circuits. It has several computing units, each of which consists of a general-purpose microprocessor, an LSRDP and a memory. An LSRDP consists of a large number (e.g., a few thousand) of floating-point units (FPUs) and operand routing networks (ORNs) which connect the FPUs. We reconfigure the LSRDP to fit a computation, i.e., a group of floating-point operations that appears in a 'for' loop of numerical programs, by setting the routes in the ORNs before the execution of the loop. We propose to implement the LSRDPs with RSFQ circuits. The processors and the memories can be implemented with semiconductor technology. We expect that a 10 TFLOPS supercomputer, together with its refrigeration unit, will be housed in a desk-side rack, using a near-future RSFQ process technology such as a 0.35 μm process.

  2. COMPUTATIONAL SCIENCE CENTER

    Energy Technology Data Exchange (ETDEWEB)

    DAVENPORT, J.

    2006-11-01

    Computational science is an integral component of Brookhaven's multi-science mission, and is a reflection of the increased role of computation across all of science. Brookhaven currently has major efforts in data storage and analysis for the Relativistic Heavy Ion Collider (RHIC) and the ATLAS detector at CERN, and in quantum chromodynamics. The Laboratory hosts the QCDOC machines (quantum chromodynamics on a chip), 10 teraflop/s computers with 12,288 processors each. There are two here, one for the RIKEN/BNL Research Center and the other supported by DOE for the US lattice gauge community and other scientific users. A 100 teraflop/s supercomputer will be installed at Brookhaven in the coming year, managed jointly by Brookhaven and Stony Brook, and funded by a grant from New York State. This machine will be used for computational science across Brookhaven's entire research program, and also by researchers at Stony Brook and across New York State. With Stony Brook, Brookhaven has formed the New York Center for Computational Science (NYCCS) as a focal point for interdisciplinary computational science, which is closely linked to Brookhaven's Computational Science Center (CSC). The CSC has established a strong program in computational science, with an emphasis on nanoscale electronic structure and molecular dynamics, accelerator design, computational fluid dynamics, medical imaging, parallel computing and numerical algorithms. We have been an active participant in DOE's SciDAC program (Scientific Discovery through Advanced Computing). We are also planning a major expansion in computational biology in keeping with Laboratory initiatives. Additional laboratory initiatives with a dependence on a high level of computation include the development of hydrodynamics models for the interpretation of RHIC data, computational models for the atmospheric transport of aerosols, and models for combustion and for energy utilization. The CSC was formed to

  3. Coherent 40 Gb/s SP-16QAM and 80 Gb/s PDM-16QAM in an Optimal Supercomputer Optical Switch Fabric

    DEFF Research Database (Denmark)

    Karinou, Fotini; Borkowski, Robert; Zibar, Darko

    2013-01-01

    We demonstrate, for the first time, the feasibility of using 40 Gb/s SP-16QAM and 80 Gb/s PDM-16QAM in an optimized cell switching supercomputer optical interconnect architecture based on semiconductor optical amplifiers as ON/OFF gates....

  4. Car2x with software defined networks, network functions virtualization and supercomputers technical and scientific preparations for the Amsterdam Arena telecoms fieldlab

    NARCIS (Netherlands)

    Meijer R.J.; Cushing R.; De Laat C.; Jackson P.; Klous S.; Koning R.; Makkes M.X.; Meerwijk A.

    2015-01-01

    In the invited talk 'Car2x with SDN, NFV and supercomputers' we report on how our past work with SDN [1, 2] enables the design of a smart mobility fieldlab in the huge parking lot of the Amsterdam Arena. We explain how we can engineer and test software that handles the complex conditions of the Car2X

  6. Nonperturbative Lattice Simulation of High Multiplicity Cross Section Bound in $\phi^4_3$ on Beowulf Supercomputer

    CERN Document Server

    Charng, Y Y

    2001-01-01

    In this thesis, we have investigated the possibility of large cross sections at large multiplicity in weakly coupled three-dimensional $\phi^4$ theory using Monte Carlo simulation methods. We have built a Beowulf supercomputer for this purpose. We use spectral function sum rules to derive a bound on the total cross section, where the quantity determining the bound can be measured by Monte Carlo simulation in Euclidean space. We determine the critical threshold energy for a large high-multiplicity cross section according to the analysis of M.B. Voloshin and of E.N. Argyres, R.M.P. Kleiss, and C.G. Papadopoulos. We compare the simulation results with the perturbative results and see no evidence for a large cross section in the range where tree diagram estimates suggest it should exist.

  7. Performance Evaluation of an Intel Haswell- and Ivy Bridge-Based Supercomputer Using Scientific and Engineering Applications

    Science.gov (United States)

    Saini, Subhash; Hood, Robert T.; Chang, Johnny; Baron, John

    2016-01-01

    We present a performance evaluation, conducted on a production supercomputer, of the Intel Xeon Processor E5-2680v3, a twelve-core implementation of the fourth-generation Haswell architecture, and compare it with the Intel Xeon Processor E5-2680v2, an Ivy Bridge implementation of the third-generation Sandy Bridge architecture. Several new architectural features have been incorporated in Haswell, including improvements in all levels of the memory hierarchy as well as improvements to vector instructions and power management. We critically evaluate these new features of Haswell and compare with Ivy Bridge using several low-level benchmarks, including a subset of HPCC, HPCG and four full-scale scientific and engineering applications. We also present a model to predict the performance of HPCG and Cart3D within 5%, and of Overflow within 10% accuracy.
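
    Low-level memory-hierarchy probes of the kind used in such evaluations can be approximated even from a scripting language. The snippet below is a rough STREAM-triad-style bandwidth probe (not one of the paper's benchmarks); the array size and repetition count are arbitrary, and the temporary created by the scaled operand makes the reported figure only an order-of-magnitude estimate.

        import time
        import numpy as np

        N = 20_000_000                      # ~160 MB per array, well beyond cache
        a = np.zeros(N)
        b = np.random.rand(N)
        c = np.random.rand(N)

        best = float("inf")
        for _ in range(5):                  # keep the best of several repetitions
            t0 = time.perf_counter()
            np.add(b, 2.5 * c, out=a)       # triad: a = b + scalar * c
            best = min(best, time.perf_counter() - t0)

        moved = 3 * N * 8                   # nominal bytes: read b and c, write a
        print(f"approximate triad bandwidth: {moved / best / 1e9:.1f} GB/s")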

  8. Parallel workflow manager for non-parallel bioinformatic applications to solve large-scale biological problems on a supercomputer.

    Science.gov (United States)

    Suplatov, Dmitry; Popova, Nina; Zhumatiy, Sergey; Voevodin, Vladimir; Švedas, Vytas

    2016-04-01

    Rapid expansion of online resources providing access to genomic, structural, and functional information associated with biological macromolecules opens an opportunity to gain a deeper understanding of the mechanisms of biological processes through the systematic analysis of large datasets. This, however, requires novel strategies to optimally utilize computer processing power. Some methods in bioinformatics and molecular modeling require extensive computational resources. Other algorithms have fast implementations which take at most several hours to analyze a common input on a modern desktop station; however, due to multiple invocations for a large number of subtasks the full task requires significant computing power. Therefore, an efficient computational solution to large-scale biological problems requires both a wise parallel implementation of resource-hungry methods and a smart workflow to manage multiple invocations of relatively fast algorithms. In this work, a new computer software tool, mpiWrapper, has been developed to accommodate non-parallel implementations of scientific algorithms within the parallel supercomputing environment. The Message Passing Interface is used to exchange information between nodes. Two specialized threads, one for task management and communication and another for subtask execution, are invoked on each processing unit to avoid deadlock while using blocking calls to MPI. The mpiWrapper can be used to launch all conventional Linux applications without the need to modify their original source codes and supports resubmission of subtasks on node failure. We show that this approach can be used to process huge amounts of biological data efficiently by running non-parallel programs in parallel mode on a supercomputer. The C++ source code and documentation are available from http://biokinet.belozersky.msu.ru/mpiWrapper .
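
    The manager/worker pattern that such a wrapper relies on can be sketched with mpi4py and subprocess. The sketch below is an illustration of the general pattern, not the mpiWrapper code itself: rank 0 hands out command lines from a queue, the other ranks run each command as an ordinary non-parallel Linux process, and the per-node helper thread and failure resubmission of the real tool are omitted. The command and file names are hypothetical.

        import subprocess
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()
        TAG_TASK, TAG_RESULT, TAG_STOP = 1, 2, 3

        if rank == 0:
            # manager: a queue of independent command lines (names are hypothetical)
            tasks = [["./analyze_one", f"input_{i:04d}.dat"] for i in range(100)]
            status = MPI.Status()
            running = 0
            for dest in range(1, size):                      # seed every worker
                if tasks:
                    comm.send(tasks.pop(), dest=dest, tag=TAG_TASK)
                    running += 1
                else:
                    comm.send(None, dest=dest, tag=TAG_STOP)
            while running:
                comm.recv(source=MPI.ANY_SOURCE, tag=TAG_RESULT, status=status)
                src = status.Get_source()
                if tasks:
                    comm.send(tasks.pop(), dest=src, tag=TAG_TASK)
                else:
                    comm.send(None, dest=src, tag=TAG_STOP)
                    running -= 1
        else:
            # worker: run each received command as a plain (non-parallel) process
            status = MPI.Status()
            while True:
                cmd = comm.recv(source=0, tag=MPI.ANY_TAG, status=status)
                if status.Get_tag() == TAG_STOP:
                    break
                rc = subprocess.call(cmd)
                comm.send(rc, dest=0, tag=TAG_RESULT)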

  9. Activity report of Computing Research Center

    Energy Technology Data Exchange (ETDEWEB)

    1997-07-01

    On 1 April 1997, the National Laboratory for High Energy Physics (KEK), the Institute for Nuclear Study of the University of Tokyo (INS), and the Meson Science Laboratory of the Faculty of Science, University of Tokyo, were reorganized into the High Energy Accelerator Research Organization, with the aim of further developing the wide field of accelerator science based on high energy accelerators. Within this Research Organization, the Applied Research Laboratory is composed of four Centers, formed by integrating the four existing centers and their related sections in Tanashi, to carry out support of research activities common to the whole Organization and the related research and development (R and D). What is expected of this support is not only general assistance but also the preparation and R and D of the systems required for the promotion and future plans of the research. Computer technology is essential to the development of the research and can be shared among the various research activities of the Organization. In response to such expectations, the new Computing Research Center is required to carry out its duties in collaboration and cooperation with researchers, covering a range from R and D on data analysis for various experiments to computational physics driven by powerful computing capacity such as supercomputers. The first chapter of this report describes the work and present state of the Data Processing Center of KEK, the second chapter those of the computer room of INS, and the report closes with future problems for the Computing Research Center. (G.K.)

  10. Integration Of PanDA Workload Management System With Supercomputers for ATLAS

    CERN Document Server

    Oleynik, Danila; The ATLAS collaboration; De, Kaushik; Wenaus, Torre; Maeno, Tadashi; Barreiro Megino, Fernando Harald; Nilsson, Paul; Guan, Wen; Panitkin, Sergey

    2016-01-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production ANd Distributed Analysis system) Workload Management System for managing the workflow for all data processing on over 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more t...

  11. Excel Center

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    Citigroup, one of the world's top 500 companies, has now settled in Excel Center, Financial Street. The opening ceremony of Excel Center and the entry ceremony of Citigroup into the center were held on March 31. Government leaders of Xicheng District, the Excel CEO and the Asia-Pacific regional heads of Citibank all participated in the ceremony.

  12. Center for Momentum Transport and Flow Organization (CMTFO). Final technical report

    Energy Technology Data Exchange (ETDEWEB)

    Tynan, George R. [University of California, San Diego, CA (United States); Diamond, P. H. [University of California, San Diego, CA (United States); Ji, H. [Princeton Plasma Physics Lab., NJ (United States); Forest, C. B. [Univ. of Wisconsin, Madison, WI (United States); Terry, P. W. [Univ. of Wisconsin, Madison, WI (United States); Munsat, T. [Univ. of Colorado, Boulder, CO (United States); Brummell, N. [Univ. of California, Santa Cruz (United States)

    2013-07-29

    The Center for Momentum Transport and Flow Organization (CMTFO) is a DOE Plasma Science Center formed in late 2009 to focus on the general principles underlying momentum transport in magnetic fusion and astrophysical systems. It is composed of funded researchers from UCSD, UW Madison, U. Colorado, PPPL. As of 2011, UCSD supported postdocs are collaborating at MIT/Columbia and UC Santa Cruz and beginning in 2012, will also be based at PPPL. In the initial startup period, the Center supported the construction of two basic experiments at PPPL and UW Madison to focus on accretion disk hydrodynamic instabilities and solar physics issues. We now have computational efforts underway focused on understanding recent experimental tests of dynamos, solar tachocline physics, intrinsic rotation in tokamak plasmas and L-H transition physics in tokamak devices. In addition, we have the basic experiments discussed above complemented by work on a basic linear plasma device at UCSD and a collaboration at the LAPD located at UCLA. We are also performing experiments on intrinsic rotation and L-H transition physics in the DIII-D, NSTX, C-Mod, HBT EP, HL-2A, and EAST tokamaks in the US and China, and expect to begin collaborations on K-STAR in the coming year. Center funds provide support to over 10 postdocs and graduate students each year, who work with 8 senior faculty and researchers at their respective institutions. The Center has sponsored a mini-conference at the APS DPP 2010 meeting, and co-sponsored the recent Festival de Theorie (2011) with the CEA in Cadarache, and will co-sponsor a Winter School in January 2012 in collaboration with the CMSO-UW Madison. Center researchers have published over 50 papers in the peer reviewed literature, and given over 10 talks at major international meetings. In addition, the Center co-PI, Professor Patrick Diamond, shared the 2011 Alfven Prize at the EPS meeting. Key scientific results from this startup period include initial simulations of the

  13. Final Technical Report for the Center for Momentum Transport and Flow Organization (CMTFO)

    Energy Technology Data Exchange (ETDEWEB)

    Forest, Cary B. [University of Wisconsin-Madison; Tynan, George R. [University of California San Diego

    2013-07-29

    The Center for Momentum Transport and Flow Organization (CMTFO) is a DOE Plasma Science Center formed in late 2009 to focus on the general principles underlying momentum transport in magnetic fusion and astrophysical systems. It is composed of funded researchers from UCSD, UW Madison, U. Colorado, PPPL. As of 2011, UCSD supported postdocs are collaborating at MIT/Columbia and UC Santa Cruz and beginning in 2012, will also be based at PPPL. In the initial startup period, the Center supported the construction of two basic experiments at PPPL and UW Madison to focus on accretion disk hydrodynamic instabilities and solar physics issues. We now have computational efforts underway focused on understanding recent experimental tests of dynamos, solar tachocline physics, intrinsic rotation in tokamak plasmas and L-H transition physics in tokamak devices. In addition, we have the basic experiments discussed above complemented by work on a basic linear plasma device at UCSD and a collaboration at the LAPD located at UCLA. We are also performing experiments on intrinsic rotation and L-H transition physics in the DIII-D, NSTX, C-Mod, HBT EP, HL-2A, and EAST tokamaks in the US and China, and expect to begin collaborations on K-STAR in the coming year. Center funds provide support to over 10 postdocs and graduate students each year, who work with 8 senior faculty and researchers at their respective institutions. The Center has sponsored a mini-conference at the APS DPP 2010 meeting, and co-sponsored the recent Festival de Theorie (2011) with the CEA in Cadarache, and will co-sponsor a Winter School in January 2012 in collaboration with the CMSO-UW Madison. Center researchers have published over 50 papers in the peer reviewed literature, and given over 10 talks at major international meetings. In addition, the Center co-PI, Professor Patrick Diamond, shared the 2011 Alfven Prize at the EPS meeting. Key scientific results from this startup period include initial simulations of the

  14. High Performance Simulation of Large-Scale Red Sea Ocean Bottom Seismic Data on the Supercomputer Shaheen II

    KAUST Repository

    Tonellot, Thierry

    2017-02-27

    A combination of shallow and deep water, islands and coral reefs is among the main features contributing to the complexity of subsalt seismic exploration in the Red Sea transition zone. These features often result in degrading effects on seismic images. State-of-the-art ocean bottom acquisition technologies are therefore required to record seismic data with optimal fold and offset, as well as advanced processing and imaging techniques. Numerical simulations of such complex seismic data can help improve acquisition design and also help in customizing, validating and benchmarking the processing and imaging workflows that will be applied to the field data. Consequently, realistic simulation of wave propagation is a computationally intensive process requiring a realistic model and an efficient 3D wave equation solver. Large-scale computing resources are also required to meet a turnaround time compatible with a production time frame. In this work, we present the numerical simulation of an ocean bottom seismic survey to be acquired in the Red Sea transition zone starting in summer 2016. The survey's acquisition geometry comprises nearly 300,000 unique shot locations and 21,000 unique receiver locations, covering about 760 km². Using well log measurements and legacy 2D seismic lines in this area, a 3D P-wave velocity model was built, with a maximum depth of 7 km. The model was sampled at 10 m in each direction, resulting in more than 5 billion cells. Wave propagation in this model was performed using a 3D finite difference solver in the time domain based on a staggered grid velocity-pressure formulation of acoustodynamics. To ensure that the resulting data could be generated sufficiently fast, the King Abdullah University of Science and Technology (KAUST) supercomputer Shaheen II, a Cray XC40, was used. A total of 21,000 three-component (pressure and vertical and horizontal velocity) common receiver gathers with a 50 Hz maximum frequency were computed in less
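
    The staggered-grid velocity-pressure scheme mentioned above reduces, in one dimension, to a compact leapfrog update; the sketch below propagates a Ricker-wavelet source through a homogeneous acoustic medium. All parameters are illustrative, and absorbing boundaries, heterogeneity and 3D domain decomposition are omitted.

        import numpy as np

        nx, dx = 2000, 5.0                 # grid points and spacing (m), illustrative
        c, rho = 2500.0, 2200.0            # P-wave velocity (m/s) and density (kg/m^3)
        kappa = rho * c * c                # bulk modulus
        dt = 0.8 * dx / c                  # satisfies the 1D stability condition
        nt, f0 = 1500, 25.0                # time steps and Ricker dominant frequency (Hz)

        p = np.zeros(nx)                   # pressure on integer grid points
        v = np.zeros(nx - 1)               # particle velocity on half grid points
        src = nx // 2

        for it in range(nt):
            t = it * dt - 1.2 / f0
            ricker = (1.0 - 2.0 * (np.pi * f0 * t) ** 2) * np.exp(-(np.pi * f0 * t) ** 2)
            # staggered leapfrog updates: velocity from the pressure gradient,
            # then pressure from the velocity divergence
            v += -(dt / (rho * dx)) * (p[1:] - p[:-1])
            p[1:-1] += -(kappa * dt / dx) * (v[1:] - v[:-1])
            p[src] += dt * ricker          # inject the source into the pressure field

        print("peak |p| after", nt, "steps:", float(np.max(np.abs(p))))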

  15. Hurricane Modeling and Supercomputing: Can a global mesoscale model be useful in improving forecasts of tropical cyclogenesis?

    Science.gov (United States)

    Shen, B.; Tao, W.; Atlas, R.

    2007-12-01

    Hurricane modeling, along with guidance from observations, has been used to help construct hurricane theories since the 1960s. CISK (conditional instability of the second kind; Charney and Eliassen 1964; Ooyama 1964, 1969) and WISHE (wind-induced surface heat exchange; Emanuel 1986) are among the well-known theories used to understand hurricane intensification. For hurricane genesis, observations have indicated the importance of large-scale flows (e.g., the Madden-Julian Oscillation or MJO; Maloney and Hartmann, 2000) in the modulation of hurricane activity. Recent modeling studies have focused on the role of the MJO and Rossby waves (e.g., Ferreira and Schubert, 1996; Aiyyer and Molinari, 2003) and/or the interaction of small-scale vortices (e.g., Holland 1995; Simpson et al. 1997; Hendricks et al. 2004), whose determinism could also be set by large-scale flows. The aforementioned studies suggest a unified view of hurricane formation, consisting of multiscale processes such as scale transition (e.g., from the MJO to equatorial Rossby waves and from waves to vortices) and scale interactions among vortices, convection, and surface heat and moisture fluxes. To depict the processes in this unified view, a high-resolution global model is needed. During the past several years, supercomputers have enabled the deployment of ultra-high resolution global models, obtaining remarkable forecasts of hurricane track and intensity (Atlas et al. 2005; Shen et al. 2006). In this work, hurricane genesis is investigated with the aid of a global mesoscale model on the NASA Columbia supercomputer by conducting numerical experiments on the genesis of six consecutive tropical cyclones (TCs) in May 2002. These TCs include two pairs of twin TCs in the Indian Ocean, Supertyphoon Hagibis in the West Pacific Ocean and Hurricane Alma in the East Pacific Ocean. It is found that the model is capable of predicting the genesis of five of these TCs about two to three days in advance. Our

  16. Science Driven Supercomputing Architectures: AnalyzingArchitectural Bottlenecks with Applications and Benchmark Probes

    Energy Technology Data Exchange (ETDEWEB)

    Kamil, S.; Yelick, K.; Kramer, W.T.; Oliker, L.; Shalf, J.; Shan,H.; Strohmaier, E.

    2005-09-26

    There is a growing gap between the peak speed of parallel computing systems and the actual delivered performance for scientific applications. In general this gap is caused by inadequate architectural support for the requirements of modern scientific applications, as commercial applications, and the much larger market they represent, have driven the evolution of computer architectures. This gap has raised the importance of developing better benchmarking methodologies to characterize and understand the performance requirements of scientific applications and to communicate them efficiently in order to influence the design of future computer architectures. This improved understanding of the performance behavior of scientific applications will allow improved performance predictions, development of adequate benchmarks for identification of hardware and application features that work well or poorly together, and a more systematic performance evaluation in procurement situations. The Berkeley Institute for Performance Studies has developed a three-level approach to evaluating the design of high end machines and the software that runs on them: (1) a suite of representative applications; (2) a set of application kernels; and (3) benchmarks to measure key system parameters. The three levels yield different types of information, all of which are useful in evaluating systems, and enable NSF and DOE centers to select computer architectures better suited for scientific applications. The analysis will further allow the centers to engage vendors in discussions of strategies to alleviate the present architectural bottlenecks using quantitative information. These may include small hardware changes or larger ones that may be of interest to non-scientific workloads. Providing quantitative models to the vendors allows them to assess the benefits of technology alternatives using their own internal cost-models in the broader marketplace, ideally facilitating the development of future computer

  18. An efficient highly parallel implementation of a large air pollution model on an IBM blue gene supercomputer

    Science.gov (United States)

    Ostromsky, Tz.; Georgiev, K.; Zlatev, Z.

    2012-10-01

    In this paper we discuss an efficient distributed-memory parallelization strategy for the Unified Danish Eulerian Model (UNI-DEM). We apply an improved decomposition strategy to the spatial domain in order to get more parallel tasks (based on the larger number of subdomains) with less communication between them (due to optimization of the overlapping area when the advection-diffusion problem is solved numerically). This kind of rectangular block partitioning (with a square-shape trend) allows us not only to increase significantly the number of potential parallel tasks, but also to reduce the local memory requirements per task, which is critical for the distributed-memory implementation of the higher-resolution/finer-grid versions of UNI-DEM on some parallel systems, and particularly on the IBM BlueGene/P platform, our target hardware. We show by experiments that our new parallel implementation can use the resources of the powerful IBM BlueGene/P supercomputer, the largest in Bulgaria, rather efficiently, up to its full capacity. It turned out to be extremely useful in the large and computationally expensive numerical experiments carried out to calculate initial data for a sensitivity analysis of the Danish Eulerian model.
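
    The near-square rectangular partitioning described above can be illustrated with MPI's own factorization helper: the sketch below computes a 2D process grid for the available tasks and each task's owned and halo-extended index ranges. The grid size and halo width are illustrative, and the real model's optimized overlap handling is not reproduced.

        from mpi4py import MPI

        NX, NY, HALO = 480, 480, 2          # illustrative grid size and overlap width

        def block(n_cells, n_parts, p):
            # even 1D split of n_cells into n_parts; returns [lo, hi) for part p
            base, rem = divmod(n_cells, n_parts)
            lo = p * base + min(p, rem)
            return lo, lo + base + (1 if p < rem else 0)

        comm = MPI.COMM_WORLD
        px, py = MPI.Compute_dims(comm.Get_size(), 2)    # near-square process grid
        cart = comm.Create_cart([px, py], periods=[False, False])
        cx, cy = cart.Get_coords(cart.Get_rank())

        x0, x1 = block(NX, px, cx)
        y0, y1 = block(NY, py, cy)
        # extend by the halo needed by the advection-diffusion stencil, clipped to the domain
        hx0, hx1 = max(0, x0 - HALO), min(NX, x1 + HALO)
        hy0, hy1 = max(0, y0 - HALO), min(NY, y1 + HALO)
        print(f"rank {cart.Get_rank()}: owns x[{x0}:{x1}) y[{y0}:{y1}), "
              f"stores x[{hx0}:{hx1}) y[{hy0}:{hy1})")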

  19. The ASCI Network for SC '99: A Step on the Path to a 100 Gigabit Per Second Supercomputing Network

    Energy Technology Data Exchange (ETDEWEB)

    PRATT,THOMAS J.; TARMAN,THOMAS D.; MARTINEZ,LUIS M.; MILLER,MARC M.; ADAMS,ROGER L.; CHEN,HELEN Y.; BRANDT,JAMES M.; WYCKOFF,PETER S.

    2000-07-24

    This document highlights the DISCOM² distance computing and communication team's activities at the 1999 Supercomputing conference in Portland, Oregon. This conference is sponsored by the IEEE and ACM. Sandia, Lawrence Livermore and Los Alamos National Laboratories have participated in this conference for eleven years. For the last four years the three laboratories have come together at the conference under the DOE's ASCI (Accelerated Strategic Computing Initiative) rubric. Communication support for the ASCI exhibit is provided by the ASCI DISCOM² project. The DISCOM² communication team uses this forum to demonstrate and focus communication and networking developments within the community. At SC 99, DISCOM built a prototype of the next-generation ASCI network, demonstrated remote clustering techniques, demonstrated the capabilities of the emerging terabit router products, demonstrated the latest technologies for delivering visualization data to scientific users, and demonstrated the latest in encryption methods, including IP VPN technologies and ATM encryption research. The authors also coordinated the other production networking activities within the booth and between their demonstration partners on the exhibit floor. This paper documents those accomplishments, discusses the details of their implementation, and describes how these demonstrations support Sandia's overall strategies in ASCI networking.

  20. Combining density functional theory calculations, supercomputing, and data-driven methods to design new materials (Conference Presentation)

    Science.gov (United States)

    Jain, Anubhav

    2017-04-01

    Density functional theory (DFT) simulations solve for the electronic structure of materials starting from the Schrödinger equation. Many case studies have now demonstrated that researchers can often use DFT to design new compounds in the computer (e.g., for batteries, catalysts, and hydrogen storage) before synthesis and characterization in the lab. In this talk, I will focus on how DFT calculations can be executed on large supercomputing resources in order to generate very large data sets on new materials for functional applications. First, I will briefly describe the Materials Project, an effort at LBNL that has virtually characterized over 60,000 materials using DFT and has shared the results with over 17,000 registered users. Next, I will talk about how such data can help discover new materials, describing how preliminary computational screening led to the identification and confirmation of a new family of bulk AMX2 thermoelectric compounds with measured zT reaching 0.8. I will outline future plans for how such data-driven methods can be used to better understand the factors that control thermoelectric behavior, e.g., for the rational design of electronic band structures, in ways that are different from conventional approaches.

  1. BLAS (Basic Linear Algebra Subroutines), linear algebra modules, and supercomputers. Technical report for period ending 15 December 1984

    Energy Technology Data Exchange (ETDEWEB)

    Rice, J.R.

    1984-12-31

    On October 29 and 30, 1984, about 20 people met at Purdue University to consider extensions to the Basic Linear Algebra Subroutines (BLAS) and linear algebra software modules in general. The need for these extensions and new sets of modules is largely due to the advent of new supercomputer architectures, which make it difficult for ordinary coding techniques to achieve even a significant fraction of the potential computing power. The workshop format was one of informal presentations with ample discussion, followed by sessions of general discussion of the issues raised. This report is a summary of the presentations, the issues raised, the conclusions reached and the open-issue discussions. Each participant had an opportunity to comment on this report, but it also clearly reflects the author's filtering of the extensive discussions. Section 2 describes seven proposals for linear algebra software modules and Section 3 describes four presentations on the use of such modules. Discussion summaries are given next: Section 4 covers those topics where near consensus was reached and Section 5 those where the issues were left open.

  2. Performance characteristics of hybrid MPI/OpenMP implementations of NAS parallel benchmarks SP and BT on large-scale multicore supercomputers

    KAUST Repository

    Wu, Xingfu

    2011-03-29

    The NAS Parallel Benchmarks (NPB) are well-known applications with fixed algorithms for evaluating parallel systems and tools. Multicore supercomputers provide a natural programming paradigm for hybrid programs, whereby OpenMP can be used for data sharing among the cores that comprise a node and MPI can be used for communication between nodes. In this paper, we use the SP and BT benchmarks of MPI NPB 3.3 as a basis for a comparative approach to implementing hybrid MPI/OpenMP versions of SP and BT. In particular, we compare the performance of the hybrid SP and BT with their MPI counterparts on large-scale multicore supercomputers. Our performance results indicate that the hybrid SP outperforms the MPI SP by up to 20.76%, and the hybrid BT outperforms the MPI BT by up to 8.58%, on up to 10,000 cores on BlueGene/P at Argonne National Laboratory and Jaguar (Cray XT4/5) at Oak Ridge National Laboratory. We also use performance tools and MPI trace libraries available on these supercomputers to further investigate the performance characteristics of the hybrid SP and BT.

  3. Supercomputations and big-data analysis in strong-field ultrafast optical physics: filamentation of high-peak-power ultrashort laser pulses

    Science.gov (United States)

    Voronin, A. A.; Panchenko, V. Ya; Zheltikov, A. M.

    2016-06-01

    High-intensity ultrashort laser pulses propagating in gas media or in condensed matter undergo complex nonlinear spatiotemporal evolution where temporal transformations of optical field waveforms are strongly coupled to an intricate beam dynamics and ultrafast field-induced ionization processes. At laser peak powers orders of magnitude above the critical power of self-focusing, the beam exhibits modulation instabilities, producing random field hot spots and breaking up into multiple noise-seeded filaments. This problem is described by a (3+1)-dimensional nonlinear field evolution equation, which needs to be solved jointly with the equation for ultrafast ionization of a medium. Analysis of this problem, which is equivalent to solving a billion-dimensional evolution problem, is only possible by means of supercomputer simulations augmented with coordinated big-data processing of large volumes of information acquired through theory-guiding experiments and supercomputations. Here, we review the main challenges of supercomputations and big-data processing encountered in strong-field ultrafast optical physics and discuss strategies to confront these challenges.
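
    Field-evolution equations of this type are commonly integrated with split-step Fourier methods, in which the dispersive part is advanced in the spectral domain and the nonlinear part in the time domain. The following is a minimal 1D sketch of that technique for a Kerr-nonlinear envelope equation with normalized, illustrative parameters; it is a toy reduction of the (3+1)-dimensional problem and does not include diffraction, ionization or noise seeding. With these parameters the input sech pulse is a fundamental soliton, so its peak intensity should remain near unity.

        import numpy as np

        beta2, gamma = -1.0, 1.0                    # normalized dispersion and Kerr coefficients
        nt, t_max, nz, dz = 1024, 20.0, 2000, 0.005

        t = np.linspace(-t_max, t_max, nt, endpoint=False)
        omega = 2.0 * np.pi * np.fft.fftfreq(nt, d=t[1] - t[0])
        A = 1.0 / np.cosh(t)                        # sech input pulse envelope

        half_linear = np.exp(0.5j * beta2 * omega ** 2 * (dz / 2.0))   # half dispersion step
        for _ in range(nz):
            # symmetric split step: half dispersion, full Kerr phase, half dispersion
            A = np.fft.ifft(half_linear * np.fft.fft(A))
            A *= np.exp(1j * gamma * np.abs(A) ** 2 * dz)
            A = np.fft.ifft(half_linear * np.fft.fft(A))

        print("peak intensity after propagation:", float(np.max(np.abs(A)) ** 2))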

  4. Report for CS 698-95 'Directed Research - Performance Modeling': Using Queueing Network Modeling to Analyze the University of San Francisco Keck Cluster Supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Elliott, M L

    2005-09-28

    In today's world, the need for computing power is becoming more pressing daily. Our need to process, analyze, and store data is quickly exceeding the capabilities of small self-contained serial machines, such as the modern desktop PC. Initially, this gap was filled by the creation of supercomputers: large-scale self-contained parallel machines. However, current markets, as well as the costs to develop and maintain such machines, are quickly making such machines a rarity, used only in highly specialized environments. A third type of machine exists, however. This relatively new type of machine, known as a cluster, is built from common, and often inexpensive, commodity self-contained desktop machines. But how well do these clustered machines work? There have been many attempts to quantify the performance of clustered computers. One approach, Queueing Network Modeling (QNM), appears to be a potentially useful and rarely tried method of modeling such systems. QNM, which has its beginnings in the modeling of traffic patterns, has expanded, and is now used to model everything from CPU and disk services, to computer systems, to service rates in store checkout lines. This history of successful usage, as well as the correspondence of QNM components to commodity clusters, suggests that QNM can be a useful tool for both the cluster designer, interested in the best value for the cost, and the user of existing machines, interested in performance rates and time-to-solution. So, what is QNM? Queueing Network Modeling is an approach to computer system modeling where the computer is represented as a network of queues and evaluated analytically. How does this correspond to clusters? There is a neat one-to-one relationship between the components of a QNM model and a cluster. For example: A cluster is made from a combination of computational nodes and network switches. Both of these fit nicely with the QNM descriptions of service centers (delay, queueing, and load
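
    To make the QNM approach concrete, the sketch below applies exact Mean Value Analysis, a standard analytical solution technique for closed queueing networks, to a toy two-center model of a cluster (a compute-node CPU and a network switch). The service demands, think time and job population are invented numbers, not measurements of the Keck cluster.

        def mva(demands, n_jobs, think_time=0.0):
            """Exact MVA for a closed network of queueing service centers.

            demands[k]  -- service demand (visit ratio x service time) at center k
            n_jobs      -- number of circulating jobs (e.g., concurrent processes)
            Returns (throughput, per-center residence times, per-center queue lengths).
            """
            q = [0.0] * len(demands)                  # queue lengths with zero jobs
            for n in range(1, n_jobs + 1):
                r = [d * (1.0 + q[k]) for k, d in enumerate(demands)]   # residence times
                x = n / (think_time + sum(r))                           # system throughput
                q = [x * rk for rk in r]                                # Little's law
            return x, r, q

        # toy cluster model: CPU and network switch demands in seconds per job
        throughput, residence, queues = mva(demands=[0.40, 0.15], n_jobs=8)
        print(f"throughput = {throughput:.2f} jobs/s, "
              f"response time = {sum(residence):.2f} s, queues = {queues}")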

  5. Distribution center

    Institute of Scientific and Technical Information of China (English)

    2004-01-01

    A distribution center is a logistics link that fulfills physical distribution as its main function. Generally speaking, it is a large and highly automated center destined to receive goods from various plants and suppliers, take orders, fill them efficiently, and deliver goods to customers as quickly as possible.

  6. Large-scale Particle Simulations for Debris Flows using Dynamic Load Balance on a GPU-rich Supercomputer

    Science.gov (United States)

    Tsuzuki, Satori; Aoki, Takayuki

    2016-04-01

    Numerical simulation of debris flows carrying countless objects is an important topic in fluid dynamics and many engineering applications. Particle-based methods are a promising approach to carrying out simulations of flows interacting with objects. In this paper, we propose an efficient method to realize a large-scale simulation of fluid-structure interaction by combining the SPH (Smoothed Particle Hydrodynamics) method for the fluid with the DEM (Discrete Element Method) for the objects on a multi-GPU system. By applying space-filling curves to the decomposition of the computational domain, we are able to keep the same number of particles in each decomposed domain. In our implementation, several techniques for particle counting and data movement have been introduced. Fragmentation of the memory used for particles occurs during the time integration, and the frequency of de-fragmentation is examined by taking into account the computational load balance and the communication cost between CPU and GPU. A linked-list technique for the particle interactions is introduced to reduce memory usage drastically. It is found that sorting the particle data for the neighboring-particle list using the linked-list method greatly improves memory access when performed at a certain interval. The weak and strong scalability of an SPH simulation using 111 million particles was measured from 4 GPUs to 512 GPUs for three types of space-filling curves. A large-scale debris flow simulation of a tsunami with 10,368 floating rubble objects, using 117 million particles, was successfully carried out with 256 GPUs on the TSUBAME 2.5 supercomputer at Tokyo Institute of Technology.
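
    The space-filling-curve decomposition used to balance particle counts can be illustrated with a Morton (Z-order) key: particles are sorted by the interleaved bits of their cell indices and then cut into equal-sized contiguous chunks, one per GPU. This 2D sketch is generic and simplified, not the paper's implementation, which compares three types of space-filling curves and handles dynamic re-partitioning and CPU-GPU data movement.

        import random

        def morton2d(ix, iy, bits=16):
            # interleave the bits of the 2D cell indices to obtain a Z-order key
            key = 0
            for b in range(bits):
                key |= ((ix >> b) & 1) << (2 * b) | ((iy >> b) & 1) << (2 * b + 1)
            return key

        def decompose(particles, n_domains, cell=0.01):
            # assign roughly len(particles)/n_domains particles to each domain
            keyed = sorted(particles, key=lambda p: morton2d(int(p[0] / cell),
                                                             int(p[1] / cell)))
            chunk = (len(keyed) + n_domains - 1) // n_domains
            return [keyed[i * chunk:(i + 1) * chunk] for i in range(n_domains)]

        particles = [(random.random(), random.random()) for _ in range(100_000)]
        domains = decompose(particles, n_domains=8)
        print([len(d) for d in domains])     # near-equal particle counts per domain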

  7. Statistical correlations and risk analyses techniques for a diving dual phase bubble model and data bank using massively parallel supercomputers.

    Science.gov (United States)

    Wienke, B R; O'Leary, T R

    2008-05-01

    Linking model and data, we detail the LANL diving reduced gradient bubble model (RGBM), its dynamical principles, and its correlation with data in the LANL Data Bank. Table, profile, and meter risks are obtained from likelihood analysis and quoted for air, nitrox, and helitrox no-decompression time limits, repetitive dive tables, and selected mixed gas and repetitive profiles. Application analyses include the EXPLORER decompression meter algorithm, NAUI tables, University of Wisconsin Seafood Diver tables, comparative NAUI, PADI, and Oceanic NDLs and repetitive dives, comparative nitrogen and helium mixed gas risks, the USS Perry deep rebreather (RB) exploration dive, a world record open circuit (OC) dive, and Woodville Karst Plain Project (WKPP) extreme cave exploration profiles. The algorithm has seen extensive and utilitarian application in mixed gas diving, in both the recreational and technical sectors, and forms the basis for released tables and decompression meters used by scientific, commercial, and research divers. The LANL Data Bank is described, and the methods used to deduce risk are detailed. Risk functions for dissolved gas and bubbles are summarized. Parameters that can be used to estimate profile risk are tallied. To fit data, a modified Levenberg-Marquardt routine is employed with an L2 error norm. Appendices sketch the numerical methods and list reports from field testing for (real) mixed gas diving. A Monte Carlo-like sampling scheme for fast numerical analysis of the data is also detailed, as a coupled variance reduction technique and an additional check on the canonical approach to estimating diving risk. The method suggests alternatives to the canonical approach. This work represents a first-time correlation effort linking a dynamical bubble model with deep stop data. Supercomputing resources are requisite to connect model and data in application.
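
    The likelihood machinery linking a risk function to dive outcomes can be sketched generically: assume each profile is summarized by a single risk integral r, model the probability of decompression sickness as p = 1 - exp(-a r), and choose the scale a that maximizes the Bernoulli log likelihood over the observed outcomes. The data below are synthetic and the single-parameter hazard form is an illustrative stand-in for the RGBM risk functions, not the LANL model itself.

        import math
        import random
        from scipy.optimize import minimize_scalar

        random.seed(1)

        def make_dive(true_a=0.05):
            r = random.uniform(0.0, 2.0)                  # stand-in for a risk integral
            hit = 1 if random.random() < 1.0 - math.exp(-true_a * r) else 0
            return r, hit

        dives = [make_dive() for _ in range(5000)]        # synthetic (profile, outcome) data

        def neg_log_likelihood(a):
            ll = 0.0
            for r, hit in dives:
                p = 1.0 - math.exp(-a * r)                # modeled DCS probability
                p = min(max(p, 1e-12), 1.0 - 1e-12)       # guard the logarithms
                ll += hit * math.log(p) + (1 - hit) * math.log(1.0 - p)
            return -ll

        fit = minimize_scalar(neg_log_likelihood, bounds=(1e-6, 1.0), method="bounded")
        print(f"fitted scale a = {fit.x:.4f} (value used to generate the data: 0.05)")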

  8. New Mexico High School Supercomputing Challenge, 1990--1995: Five years of making a difference to students, teachers, schools, and communities. Progress report

    Energy Technology Data Exchange (ETDEWEB)

    Foster, M.; Kratzer, D.

    1996-02-01

    The New Mexico High School Supercomputing Challenge is an academic program dedicated to increasing interest in science and math among high school students by introducing them to high performance computing. This report provides a summary and evaluation of the first five years of the program, describes the program and shows the impact that it has had on high school students, their teachers, and their communities. Goals and objectives are reviewed and evaluated, growth and development of the program are analyzed, and future directions are discussed.

  9. High Performance Computing in Science and Engineering '98 : Transactions of the High Performance Computing Center

    CERN Document Server

    Jäger, Willi

    1999-01-01

    The book contains reports about the most significant projects from science and industry that are using the supercomputers of the Federal High Performance Computing Center Stuttgart (HLRS). These projects are from different scientific disciplines, with a focus on engineering, physics and chemistry. They were carefully selected in a peer-review process and are showcases for an innovative combination of state-of-the-art physical modeling, novel algorithms and the use of leading-edge parallel computer technology. As HLRS is in close cooperation with industrial companies, special emphasis has been put on the industrial relevance of results and methods.

  10. Mass Storage System Upgrades at the NASA Center for Computational Sciences

    Science.gov (United States)

    Tarshish, Adina; Salmon, Ellen; Macie, Medora; Saletta, Marty

    2000-01-01

    The NASA Center for Computational Sciences (NCCS) provides supercomputing and mass storage services to over 1200 Earth and space scientists. During the past two years, the mass storage system at the NCCS went through a great deal of change, both major and minor. Tape drives, silo control software, and the mass storage software itself were upgraded, and the mass storage platform was upgraded twice. Some of these upgrades were aimed at achieving year-2000 compliance, while others were simply upgrades to newer and better technologies. In this paper we describe these upgrades.

  11. High Performance Computing in Science and Engineering '99 : Transactions of the High Performance Computing Center

    CERN Document Server

    Jäger, Willi

    2000-01-01

    The book contains reports about the most significant projects from science and engineering of the Federal High Performance Computing Center Stuttgart (HLRS). They were carefully selected in a peer-review process and are showcases of an innovative combination of state-of-the-art modeling, novel algorithms and the use of leading-edge parallel computer technology. The projects of HLRS are using supercomputer systems operated jointly by university and industry and therefore a special emphasis has been put on the industrial relevance of results and methods.

  12. Super-computer architecture

    CERN Document Server

    Hockney, R W

    1977-01-01

    This paper examines the design of the top-of-the-range, scientific, number-crunching computers. The market for such computers is not as large as that for smaller machines, but on the other hand it is by no means negligible. The present work-horse machines in this category are the CDC 7600 and IBM 360/195, and over fifty of the former machines have been sold. The types of installation that form the market for such machines are not only the major scientific research laboratories in the major countries (such as Los Alamos, CERN, and the Rutherford Laboratory) but also major universities or university networks. It is also true that, as with sports cars, innovations made to satisfy the top of the market today often become the standard for the medium-scale computer of tomorrow. Hence there is considerable interest in examining present developments in this area. (0 refs).

  13. Supercomputer debugging workshop '92

    Energy Technology Data Exchange (ETDEWEB)

    Brown, J.S.

    1993-02-01

    This report contains papers or viewgraphs on the following topics: The ABCs of Debugging in the 1990s; Cray Computer Corporation; Thinking Machines Corporation; Cray Research, Incorporated; Sun Microsystems, Inc; Kendall Square Research; The Effects of Register Allocation and Instruction Scheduling on Symbolic Debugging; Debugging Optimized Code: Currency Determination with Data Flow; A Debugging Tool for Parallel and Distributed Programs; Analyzing Traces of Parallel Programs Containing Semaphore Synchronization; Compile-time Support for Efficient Data Race Detection in Shared-Memory Parallel Programs; Direct Manipulation Techniques for Parallel Debuggers; Transparent Observation of XENOOPS Objects; A Parallel Software Monitor for Debugging and Performance Tools on Distributed Memory Multicomputers; Profiling Performance of Inter-Processor Communications in an iWarp Torus; The Application of Code Instrumentation Technology in the Los Alamos Debugger; and CXdb: The Road to Remote Debugging.

  14. Associative Memories for Supercomputers

    Science.gov (United States)

    1992-12-01

    [Only fragments of this scanned abstract are recoverable.] The Fourier Transform (FFT) is computed; the real part is extracted and a bias equal to its minimum is added to it in order to make all the values positive. [Figure 12 caption, translated from French: photograph of the reconstruction obtained with the plate corresponding to the binary phase of mask number one, shown in rotation.]

  15. Power-constrained supercomputing

    Science.gov (United States)

    Bailey, Peter E.

    As we approach exascale systems, power is turning from an optimization goal to a critical operating constraint. With power bounds imposed by both stakeholders and the limitations of existing infrastructure, achieving practical exascale computing will therefore rely on optimizing performance subject to a power constraint. However, this requirement should not add to the burden of application developers; optimizing the runtime environment given restricted power will primarily be the job of high-performance system software. In this dissertation, we explore this area and develop new techniques that extract maximum performance subject to a particular power constraint. These techniques include a method to find theoretical optimal performance, a runtime system that shifts power in real time to improve performance, and a node-level prediction model for selecting power-efficient operating points. We use a linear programming (LP) formulation to optimize application schedules under various power constraints, where a schedule consists of a DVFS state and number of OpenMP threads for each section of computation between consecutive message passing events. We also provide a more flexible mixed integer-linear (ILP) formulation and show that the resulting schedules closely match schedules from the LP formulation. Across four applications, we use our LP-derived upper bounds to show that current approaches trail optimal, power-constrained performance by up to 41%. This demonstrates limitations of current systems, and our LP formulation provides future optimization approaches with a quantitative optimization target. We also introduce Conductor, a run-time system that intelligently distributes available power to nodes and cores to improve performance. The key techniques used are configuration space exploration and adaptive power balancing. Configuration exploration dynamically selects the optimal thread concurrency level and DVFS state subject to a hardware-enforced power bound. Adaptive power balancing efficiently predicts where critical paths are likely to occur and distributes power to those paths. Greater power, in turn, allows increased thread concurrency levels, CPU frequency/voltage, or both. We describe these techniques in detail and show that, compared to the state-of-the-art technique of using statically predetermined, per-node power caps, Conductor leads to a best-case performance improvement of up to 30%, and an average improvement of 19.1%. At the node level, an accurate power/performance model will aid in selecting the right configuration from a large set of available configurations. We present a novel approach to generate such a model offline using kernel clustering and multivariate linear regression. Our model requires only two iterations to select a configuration, which provides a significant advantage over exhaustive search-based strategies. We apply our model to predict power and performance for different applications using arbitrary configurations, and show that our model, when used with hardware frequency-limiting in a runtime system, selects configurations with significantly higher performance at a given power limit than those chosen by frequency-limiting alone. When applied to a set of 36 computational kernels from a range of applications, our model accurately predicts power and performance; our runtime system based on the model maintains 91% of optimal performance while meeting power constraints 88% of the time. 
When the runtime system violates a power constraint, it exceeds the constraint by only 6% in the average case, while simultaneously achieving 54% more performance than an oracle. Through the combination of the above contributions, we hope to provide guidance and inspiration to research practitioners working on runtime systems for power-constrained environments. We also hope this dissertation will draw attention to the need for software and runtime-controlled power management under power constraints at various levels, from the processor level to the cluster level.
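
    A minimal sketch of the kind of offline power/performance model described above (multivariate linear regression over node configurations, then selecting the best configuration under a power cap). The feature set, data values, and the selection helper are illustrative assumptions, not the dissertation's actual model or code.

      # Offline regression model: predict power and runtime from (threads, frequency),
      # then pick the fastest configuration whose predicted power respects the cap.
      # All numbers and names are illustrative.
      import numpy as np

      X = np.array([[4, 1.2], [4, 2.0], [8, 1.2], [8, 2.0], [16, 1.2], [16, 2.0]], float)
      power = np.array([45.0, 60.0, 55.0, 80.0, 70.0, 110.0])    # watts, measured offline
      runtime = np.array([40.0, 30.0, 24.0, 17.0, 15.0, 10.0])   # seconds, measured offline

      A = np.hstack([X, np.ones((len(X), 1))])                   # add an intercept column
      w_power, *_ = np.linalg.lstsq(A, power, rcond=None)
      w_time, *_ = np.linalg.lstsq(A, runtime, rcond=None)

      def pick_configuration(candidates, power_cap):
          """Fastest predicted (threads, GHz) configuration under the power cap."""
          feasible = []
          for threads, freq in candidates:
              x = np.array([threads, freq, 1.0])
              if x @ w_power <= power_cap:
                  feasible.append((x @ w_time, (threads, freq)))
          return min(feasible)[1] if feasible else None

      print(pick_configuration([(4, 2.0), (8, 2.0), (16, 1.2), (16, 2.0)], power_cap=90.0))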

  16. Performance Characteristics of Hybrid MPI/OpenMP Scientific Applications on a Large-Scale Multithreaded BlueGene/Q Supercomputer

    KAUST Repository

    Wu, Xingfu

    2013-07-01

    In this paper, we investigate the performance characteristics of five hybrid MPI/OpenMP scientific applications (two NAS Parallel Benchmarks Multi-Zone codes, SP-MZ and BT-MZ; an earthquake simulation, PEQdyna; an aerospace application, PMLB; and a 3D particle-in-cell application, GTC) on a large-scale multithreaded Blue Gene/Q supercomputer at Argonne National Laboratory, and quantify the performance gap that results from using different numbers of threads per node. We use performance tools and MPI profile and trace libraries available on the supercomputer to analyze and compare the performance of these hybrid applications as the number of OpenMP threads per node increases, and find that beyond a certain point adding threads saturates or even worsens their performance. For the strong-scaling applications SP-MZ, BT-MZ, PEQdyna and PMLB, using 32 threads per node results in much better application efficiency than using 64 threads per node; as the number of threads per node increases, the FPU (Floating Point Unit) percentage decreases, while the MPI percentage (except for PMLB) and the IPC (instructions per cycle) per core (except for BT-MZ) increase. For the weak-scaling application GTC, the performance trend (relative speedup) with an increasing number of threads per node is very similar no matter how many nodes (32, 128, 512) are used. © 2013 IEEE.

  17. Development of 3D interactive visual objects using the Scripps Institution of Oceanography's Visualization Center

    Science.gov (United States)

    Kilb, D.; Reif, C.; Peach, C.; Keen, C. S.; Smith, B.; Mellors, R. J.

    2003-12-01

    Within the last year scientists and educators at the Scripps Institution of Oceanography (SIO), the Birch Aquarium at Scripps and San Diego State University have collaborated with education specialists to develop 3D interactive graphic teaching modules for use in the classroom and in teacher workshops at the SIO Visualization center (http://siovizcenter.ucsd.edu). The unique aspect of the SIO Visualization center is that the center is designed around a 120 degree curved Panoram floor-to-ceiling screen (8'6" by 28'4") that immerses viewers in a virtual environment. The center is powered by an SGI 3400 Onyx computer that is more powerful, by an order of magnitude in both speed and memory, than typical base systems currently used for education and outreach presentations. This technology allows us to display multiple 3D data layers (e.g., seismicity, high resolution topography, seismic reflectivity, draped interferometric synthetic aperture radar (InSAR) images, etc.) simultaneously, render them in 3D stereo, and take a virtual flight through the data as dictated on the spot by the user. This system can also render snapshots, images and movies that are too big for other systems, and then export smaller size end-products to more commonly used computer systems. Since early 2002, we have explored various ways to provide informal education and outreach focusing on current research presented directly by the researchers doing the work. The Center currently provides a centerpiece for instruction on southern California seismology for K-12 students and teachers for various Scripps education endeavors. Future plans are in place to use the Visualization Center at Scripps for extended K-12 and college educational programs. In particular, we will be identifying K-12 curriculum needs, assisting with teacher education, developing assessments of our programs and products, producing web-accessible teaching modules and facilitating the development of appropriate teaching tools to be

  18. A Result Data Offloading Service for HPC Centers

    Energy Technology Data Exchange (ETDEWEB)

    Monti, Henri [Virginia Polytechnic Institute and State University (Virginia Tech); Butt, Ali R [Virginia Polytechnic Institute and State University (Virginia Tech); Vazhkudai, Sudharshan S [ORNL

    2007-01-01

    Modern High-Performance Computing applications are consuming and producing an exponentially increasing amount of data. This increase has led to a significant number of resources being dedicated to data staging in and out of Supercomputing Centers. The typical approach to staging is a direct transfer of application data between the center and the application submission site. Such a direct data transfer approach becomes problematic, especially for staging-out, as (i) the data transfer time increases with the size of data, and may exceed the time allowed by the center's purge policies; and (ii) the submission site may not be online to receive the data, thus further increasing the chances for output data to be purged. In this paper, we argue for a systematic data staging-out approach that utilizes intermediary data-holding nodes to quickly offload data from the center to the intermediaries, thus avoiding the peril of a purge and addressing the two issues mentioned above. The intermediary nodes provide temporary data storage for the staged-out data and maximize the offload bandwidth by providing multiple data-flow paths from the center to the submission site. Our initial investigation shows such a technique to be effective in addressing the above two issues and providing better QoS guarantees for data retrieval.
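
    A back-of-the-envelope sketch of the argument above: compare the time to drain result data from scratch via a single direct transfer against offloading chunks in parallel to several intermediary nodes. The bandwidths, data size, and purge window are illustrative assumptions, not values from the paper.

      # Direct offload must finish within the purge window over one end-to-end path;
      # decentralized offload frees scratch once the slowest intermediary path has
      # drained its share. Units: GB, Mbit/s, seconds (1 GB taken as 8000 Mbit).
      def direct_offload_time(data_gb, end_to_end_mbps):
          return data_gb * 8000.0 / end_to_end_mbps

      def decentralized_offload_time(data_gb, intermediary_mbps):
          share = data_gb / len(intermediary_mbps)
          return max(share * 8000.0 / bw for bw in intermediary_mbps)

      purge_window_s = 4 * 3600                      # center purges after 4 hours
      data_gb = 2000.0
      print(direct_offload_time(data_gb, 800.0) <= purge_window_s)                         # 20000 s > window: False, data at risk
      print(decentralized_offload_time(data_gb, [800.0, 600.0, 1000.0]) <= purge_window_s)  # ~8900 s < window: True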

  19. Centering research

    DEFF Research Database (Denmark)

    Katan, Lina Hauge; Baarts, Charlotte

    and collected 24 portfolios in which students reflect auto-ethnographically on their educational practices. Analyzing this qualitative material, we explore how researchers and students respectively read and write to develop and advance their thinking in those learning processes that the two groups fundamentally...... share as the common aim of both research and education. Despite some similarities, we find that how the two groups engage in and benefit from reading and writing diverges significantly. Thus we have even more reason to believe that centering practice-based teaching on these aspects of research is a good...

  20. The center for causal discovery of biomedical knowledge from big data.

    Science.gov (United States)

    Cooper, Gregory F; Bahar, Ivet; Becich, Michael J; Benos, Panayiotis V; Berg, Jeremy; Espino, Jeremy U; Glymour, Clark; Jacobson, Rebecca Crowley; Kienholz, Michelle; Lee, Adrian V; Lu, Xinghua; Scheines, Richard

    2015-11-01

    The Big Data to Knowledge (BD2K) Center for Causal Discovery is developing and disseminating an integrated set of open source tools that support causal modeling and discovery of biomedical knowledge from large and complex biomedical datasets. The Center integrates teams of biomedical and data scientists focused on the refinement of existing and the development of new constraint-based and Bayesian algorithms based on causal Bayesian networks, the optimization of software for efficient operation in a supercomputing environment, and the testing of algorithms and software developed using real data from 3 representative driving biomedical projects: cancer driver mutations, lung disease, and the functional connectome of the human brain. Associated training activities provide both biomedical and data scientists with the knowledge and skills needed to apply and extend these tools. Collaborative activities with the BD2K Consortium further advance causal discovery tools and integrate tools and resources developed by other centers.
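
    Constraint-based causal discovery of the kind pursued by the Center decides which edges of a causal Bayesian network to keep by testing conditional independence. The sketch below shows one standard such test (Fisher z-transform of the partial correlation); it is a generic textbook building block, not code from the Center's tool suite.

      # Test whether X and Y are independent given conditioning set Z via the
      # Fisher z-transform of the partial correlation.
      import numpy as np
      from scipy import stats

      def partial_corr_independent(data, x, y, z, alpha=0.05):
          """data: (n_samples, n_vars) array; x, y: column indices; z: list of indices."""
          n = data.shape[0]
          if z:
              # Regress out the conditioning variables and correlate the residuals.
              Z = np.column_stack([data[:, z], np.ones(n)])
              rx = data[:, x] - Z @ np.linalg.lstsq(Z, data[:, x], rcond=None)[0]
              ry = data[:, y] - Z @ np.linalg.lstsq(Z, data[:, y], rcond=None)[0]
          else:
              rx, ry = data[:, x], data[:, y]
          r = np.corrcoef(rx, ry)[0, 1]
          stat = np.sqrt(n - len(z) - 3) * abs(0.5 * np.log((1 + r) / (1 - r)))
          p_value = 2 * stats.norm.sf(stat)            # statistic is approximately standard normal
          return p_value > alpha                       # True: treat as conditionally independent

      rng = np.random.default_rng(0)
      confounder = rng.normal(size=2000)
      sample = np.column_stack([confounder + rng.normal(size=2000),
                                confounder + rng.normal(size=2000),
                                confounder])
      print(partial_corr_independent(sample, 0, 1, []))    # dependent through the confounder
      print(partial_corr_independent(sample, 0, 1, [2]))   # independent given the confounder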

  1. Performance Analysis and Scaling Behavior of the Terrestrial Systems Modeling Platform TerrSysMP in Large-Scale Supercomputing Environments

    Science.gov (United States)

    Kollet, S. J.; Goergen, K.; Gasper, F.; Shresta, P.; Sulis, M.; Rihani, J.; Simmer, C.; Vereecken, H.

    2013-12-01

    In studies of the terrestrial hydrologic, energy and biogeochemical cycles, integrated multi-physics simulation platforms take a central role in characterizing non-linear interactions, variances and uncertainties of system states and fluxes in reciprocity with observations. Recently developed integrated simulation platforms attempt to honor the complexity of the terrestrial system across multiple time and space scales, from the deeper subsurface including groundwater dynamics into the atmosphere. Technically, this requires the coupling of atmospheric, land surface, and subsurface-surface flow models in supercomputing environments, while ensuring a high degree of efficiency in the utilization of, e.g., standard Linux clusters and massively parallel resources. A systematic performance analysis including profiling and tracing in such an application is crucial for understanding the runtime behavior and identifying optimum model settings, and is an efficient way to identify potential parallel deficiencies. On sophisticated leadership-class supercomputers, such as the 28-rack 5.9 petaFLOP IBM Blue Gene/Q 'JUQUEEN' of the Jülich Supercomputing Centre (JSC), this is a challenging task, but it is all the more important when complex coupled component models are to be analysed. Here we present our experience with coupling, application tuning (e.g., a five-fold speedup through compiler optimizations), parallel scaling and performance monitoring of the parallel Terrestrial Systems Modeling Platform TerrSysMP. The modeling platform consists of the weather prediction system COSMO of the German Weather Service; the Community Land Model, CLM, of NCAR; and the variably saturated surface-subsurface flow code ParFlow. The model system relies on the Multiple Program Multiple Data (MPMD) execution model, where the external Ocean-Atmosphere-Sea-Ice-Soil coupler (OASIS3) links the component models. TerrSysMP has been instrumented with the performance analysis tool Scalasca and analyzed
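
    TerrSysMP itself is launched as a true MPMD job in which OASIS3 links separately built executables; the toy sketch below (using mpi4py, which is an assumption of this illustration and not part of the platform) only shows the related idea of partitioning a pool of MPI ranks among the three component models, the kind of resource allocation that has to be balanced in such a coupled system.

      # Toy partitioning of MPI ranks among three component models; rank counts are made up.
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      n_atm, n_land = size // 2, size // 4             # assumed split of the rank pool
      if rank < n_atm:
          color, model = 0, "COSMO (atmosphere)"
      elif rank < n_atm + n_land:
          color, model = 1, "CLM (land surface)"
      else:
          color, model = 2, "ParFlow (subsurface)"

      model_comm = comm.Split(color, key=rank)         # communicator local to one component
      if model_comm.Get_rank() == 0:
          print(f"{model}: {model_comm.Get_size()} of {size} ranks")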

  2. Use of QUADRICS supercomputer as embedded simulator in emergency management systems; Utilizzo del calcolatore QUADRICS come simulatore in linea in un sistema di gestione delle emergenze

    Energy Technology Data Exchange (ETDEWEB)

    Bove, R.; Di Costanzo, G.; Ziparo, A. [ENEA, Centro Ricerche Casaccia, Rome (Italy). Dip. Energia

    1996-07-01

    The experience gained in implementing MRBT, an atmospheric dispersion model for short-duration releases, on a QUADRICS-Q1 supercomputer is reported. A description of the MRBT model is given first: it is an analytical model for studying the spreading of light gases released into the atmosphere by accidental events. The solution of the diffusion equation is Gaussian-like and yields the concentration of the released pollutant substance as a function of space and time. The QUADRICS architecture is then introduced and the implementation of the model on it is described. Finally, the integration of the QUADRICS-based model as an embedded simulator in an emergency management system is considered.
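
    The exact MRBT formulation is not reproduced here; the sketch below evaluates the standard ground-reflecting Gaussian plume concentration, the generic form of the "Gaussian-like" analytical solution the abstract refers to. Source strength, wind speed, release height and the dispersion parameters are illustrative.

      # C(y, z) = Q / (2*pi*u*sigma_y*sigma_z) * exp(-y^2/(2*sigma_y^2))
      #           * [exp(-(z-H)^2/(2*sigma_z^2)) + exp(-(z+H)^2/(2*sigma_z^2))]
      import math

      def gaussian_plume(q, u, y, z, sigma_y, sigma_z, h):
          """Concentration (g/m^3) for a continuous point source of strength q (g/s),
          wind speed u (m/s) and effective release height h (m)."""
          crosswind = math.exp(-y**2 / (2 * sigma_y**2))
          vertical = (math.exp(-(z - h)**2 / (2 * sigma_z**2))
                      + math.exp(-(z + h)**2 / (2 * sigma_z**2)))   # image source = ground reflection
          return q / (2 * math.pi * u * sigma_y * sigma_z) * crosswind * vertical

      # Ground-level centreline concentration with the sigma values taken as given.
      print(gaussian_plume(q=100.0, u=3.0, y=0.0, z=0.0, sigma_y=80.0, sigma_z=40.0, h=20.0))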

  3. A Reliability Calculation Method for Web Service Composition Using Fuzzy Reasoning Colored Petri Nets and Its Application on Supercomputing Cloud Platform

    Directory of Open Access Journals (Sweden)

    Ziyun Deng

    2016-09-01

    In order to develop a Supercomputing Cloud Platform (SCP) prototype system using Service-Oriented Architecture (SOA) and Petri nets, we researched some technologies for Web service composition. Specifically, in this paper, we propose a reliability calculation method for Web service compositions, which uses Fuzzy Reasoning Colored Petri Net (FRCPN) to verify the Web service compositions. We put forward a definition of semantic threshold similarity for Web services and a formal definition of FRCPN. We analyzed five kinds of production rules in FRCPN, and applied our method to the SCP prototype. We obtained the reliability value of the end Web service as an indicator of the overall reliability of the FRCPN. The method can test the activity of FRCPN. Experimental results show that the reliability of the Web service composition has a correlation with the number of Web services and the range of reliability transition values.
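
    The FRCPN machinery itself (fuzzy production rules evaluated over a colored Petri net) is not reproduced here; as a much simpler baseline, the sketch below propagates per-service reliabilities through the usual composition patterns to obtain the kind of end-to-end figure the method ultimately produces. The reliability values are illustrative.

      # Sequence: all services must succeed; AND-split: all branches; OR-split: any branch.
      from functools import reduce

      def sequence(*r):
          return reduce(lambda a, b: a * b, r, 1.0)

      def and_split(*r):
          return sequence(*r)

      def or_split(*r):
          return 1.0 - sequence(*[1.0 - x for x in r])

      # Example workflow: authenticate, then (job submission AND data stage-in),
      # then one of two redundant result-retrieval services.
      end_to_end = sequence(0.999, and_split(0.99, 0.98), or_split(0.95, 0.95))
      print(round(end_to_end, 4))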

  4. Enabling Loosely-Coupled Serial Job Execution on the IBM BlueGene/P Supercomputer and the SiCortex SC5832

    CERN Document Server

    Raicu, Ioan; Wilde, Mike; Foster, Ian

    2008-01-01

    Our work enables the execution of highly parallel computations composed of loosely coupled serial jobs on large-scale systems, with no modifications to the respective applications. This approach allows new, and potentially far larger, classes of applications to leverage systems such as the IBM Blue Gene/P supercomputer and similar emerging petascale architectures. We present here the challenges of I/O performance encountered in making this model practical, and show results using both micro-benchmarks and real applications on two large-scale systems, the BG/P and the SiCortex SC5832. Our preliminary benchmarks show that we can scale to 4096 processors on the Blue Gene/P and 5832 processors on the SiCortex with high efficiency, and can achieve sustained execution rates of thousands of tasks per second for parallel workloads of ordinary serial applications. We measured applications from two domains, economic energy modeling and molecular dynamics.
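
    A minimal single-node sketch of the loosely coupled many-task pattern described above: run an unmodified serial executable over many independent inputs and collect exit codes. The real systems dispatch such tasks across thousands of compute nodes through dedicated middleware; "./serial_app" and its arguments are hypothetical.

      # Run many independent instances of a serial application and report failures.
      import subprocess
      from concurrent.futures import ThreadPoolExecutor

      def run_task(task_id):
          result = subprocess.run(["./serial_app", f"--input=task_{task_id}.dat"],
                                  capture_output=True, text=True)
          return task_id, result.returncode

      with ThreadPoolExecutor(max_workers=32) as pool:
          for task_id, rc in pool.map(run_task, range(1000)):
              if rc != 0:
                  print(f"task {task_id} failed with exit code {rc}")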

  5. The GLOBE-Consortium: The Erasmus Computing Grid – Building a Super-Computer at Erasmus MC for FREE

    NARCIS (Netherlands)

    T.A. Knoch (Tobias)

    2005-01-01

    To meet the enormous computational needs of life-science research as well as clinical diagnostics and treatment, the Hogeschool Rotterdam and the Erasmus Medical Center are currently setting up one of the largest desktop computing grids in the world, the Erasmus Computing Grid.

  7. IPS Space Weather Research: Korea-Japan-UCSD

    Science.gov (United States)

    2015-04-27

    systems. c) The potential for using heliospheric Faraday rotation for measuring Bz remotely. To make headway on these items in the allotted time...workshop where this information will be presented. To make headway on the analysis of IPS g-level data as a proxy for heliospheric density, we wish

  8. CZT Detector and HXI Development at CASS/UCSD

    Science.gov (United States)

    Rothschild, Richard E.; Tomsick, John A.; Matteson, James L.; Pelling, Michael R.; Suchy, Slawomir

    2006-06-01

    The scientific goals and concept design of the Hard X-ray Imager (HXI) for MIRAX are presented to set the context for a discussion of the status of the HXI development. Emphasis is placed upon the RENA ASIC performance, the detector module upgrades, and a planned high altitude balloon flight to validate the HXI design and performance in a near-space environment.

  9. Mississippi Technology Transfer Center

    Science.gov (United States)

    1987-01-01

    The Mississippi Technology Transfer Center at the John C. Stennis Space Center in Hancock County, Miss., was officially dedicated in 1987. The center is home to several state agencies as well as the Center For Higher Learning.

  10. PROCEEDINGS OF RIKEN BNL RESEARCH CENTER WORKSHOP: HIGH PERFORMANCE COMPUTING WITH QCDOC AND BLUEGENE.

    Energy Technology Data Exchange (ETDEWEB)

    CHRIST,N.; DAVENPORT,J.; DENG,Y.; GARA,A.; GLIMM,J.; MAWHINNEY,R.; MCFADDEN,E.; PESKIN,A.; PULLEYBLANK,W.

    2003-03-11

    Staff of Brookhaven National Laboratory, Columbia University, IBM and the RIKEN BNL Research Center organized a one-day workshop held on February 28, 2003 at Brookhaven to promote the following goals: (1) To explore areas other than QCD applications where the QCDOC and BlueGene/L machines can be applied to good advantage, (2) To identify areas where collaboration among the sponsoring institutions can be fruitful, and (3) To expose scientists to the emerging software architecture. This workshop grew out of an informal visit last fall by BNL staff to the IBM Thomas J. Watson Research Center that resulted in a continuing dialog among participants on issues common to these two related supercomputers. The workshop was divided into three sessions, addressing the hardware and software status of each system, prospective applications, and future directions.

  11. Children's cancer centers

    Science.gov (United States)

    Pediatric cancer center; Pediatric oncology center; Comprehensive cancer center ... Treating childhood cancer is not the same as treating adult cancer. The cancers are different. So are the treatments and the ...

  12. Transplant Center Search Form

    Science.gov (United States)

    Transplant Center Search Form: Welcome to the Blood & Marrow ... transplant centers for patients with a particular disease.

  13. The Watergate Learning Center

    Science.gov (United States)

    Training in Business and Industry, 1971

    1971-01-01

    The Watergate Learning Center, recently opened by Sterling Learning Center in Washington, D. C., blueprints the plan established by Sterling and Marriott Hotels for a national chain of learning centers with much the same facilities. (EB)

  14. The Barcelona Dust Forecast Center: The first WMO regional meteorological center specialized on atmospheric sand and dust forecast

    Science.gov (United States)

    Basart, Sara; Terradellas, Enric; Cuevas, Emilio; Jorba, Oriol; Benincasa, Francesco; Baldasano, Jose M.

    2015-04-01

    The World Meteorological Organization's Sand and Dust Storm Warning Advisory and Assessment System (WMO SDS-WAS, http://sds-was.aemet.es/) project has the mission of enhancing the ability of countries to deliver timely and quality sand and dust storm forecasts, observations, information and knowledge to users through an international partnership of research and operational communities. The good results obtained by the SDS-WAS Northern Africa, Middle East and Europe (NAMEE) Regional Center and the demand from many national meteorological services led to the deployment of operational dust forecast services. In June 2014, the first WMO Regional Meteorological Center Specialized on Atmospheric Sand and Dust Forecast, the Barcelona Dust Forecast Center (BDFC; http://dust.aemet.es/), was publicly presented. The Center operationally generates and distributes predictions for the NAMEE region. The dust forecasts are based on the NMMB/BSC-Dust model developed at the Barcelona Supercomputing Center (BSC-CNS). The present contribution describes the main objectives and capabilities of the BDFC. One of the activities performed by the BDFC is to establish a protocol to routinely exchange products from dust forecast models, such as dust load, dust optical depth (AOD), surface concentration, surface extinction and deposition. An important step in dust forecasting is the evaluation of the results that have been generated. This process consists of the comparison of the model results with multiple kinds of observations (e.g., AERONET and MODIS) and is aimed at facilitating the understanding of the model capabilities, limitations, and appropriateness for the purpose for which it was designed. The aim of this work is to present different evaluation approaches and to test the use of different observational products in the evaluation system.
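
    A generic sketch of the kind of pointwise evaluation described above: compare forecast dust optical depth with AERONET observations at matched times and report mean bias, RMSE and correlation. The paired values are made up, not BDFC data.

      import numpy as np

      def evaluate(forecast, observed):
          forecast, observed = np.asarray(forecast), np.asarray(observed)
          bias = np.mean(forecast - observed)
          rmse = np.sqrt(np.mean((forecast - observed) ** 2))
          corr = np.corrcoef(forecast, observed)[0, 1]
          return bias, rmse, corr

      model_aod = [0.35, 0.60, 0.15, 0.80, 0.42]      # forecast AOD at matched times
      aeronet_aod = [0.30, 0.55, 0.20, 0.95, 0.40]    # observed AOD
      print("bias=%.3f rmse=%.3f r=%.2f" % evaluate(model_aod, aeronet_aod))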

  15. Southern California Particle Center

    Data.gov (United States)

    Federal Laboratory Consortium — At the Southern California Particle Center, center researchers will investigate the underlying mechanisms that produce the health effects associated with exposure to...

  16. Women's Business Center

    Data.gov (United States)

    Small Business Administration — Women's Business Centers (WBCs) represent a national network of nearly 100 educational centers throughout the United States and its territories, which are designed...

  17. Taiyuan Satellite Launch Center

    Institute of Scientific and Technical Information of China (English)

    LiuJie

    2004-01-01

    There are three major space launch bases in China, the Jiuquan Satellite Launch Center,the Taiyuan Satellite Launch Center and the Xichang Satellite Launch Center. All the three launch centers are located in sparsely populated areas where the terrain is even and the field of vision is broad. Security, transport conditions and the influence of the axial rotation

  18. Student Success Center Toolkit

    Science.gov (United States)

    Jobs For the Future, 2014

    2014-01-01

    "Student Success Center Toolkit" is a compilation of materials organized to assist Student Success Center directors as they staff, launch, operate, and sustain Centers. The toolkit features materials created and used by existing Centers, such as staffing and budgeting templates, launch materials, sample meeting agendas, and fundraising…

  19. High Performance Computing in Science and Engineering '15 : Transactions of the High Performance Computing Center

    CERN Document Server

    Kröner, Dietmar; Resch, Michael

    2016-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS) in 2015. The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  20. Northeast Parallel Architectures Center (NPAC)

    Science.gov (United States)

    1992-07-01

  1. Poison Control Centers

    Science.gov (United States)

    ... 1222 immediately. Name State American Association of Poison Control Centers Address AAPCC Central Office NOT A POISON ... not for emergency use. Arkansas ASPCA Animal Poison Control Center Address 1717 S. Philo Road, Suite 36 Urbana, ...

  2. RSW Cell Centered Grids

    Data.gov (United States)

    National Aeronautics and Space Administration — New cell centered grids are generated to complement the node-centered ones uploaded. Six tarballs containing the coarse, medium, and fine mixed-element and pure tet....

  3. MARYLAND ROBOTICS CENTER

    Data.gov (United States)

    Federal Laboratory Consortium — The Maryland Robotics Center is an interdisciplinary research center housed in the Institute for Systems Research within the A. James Clark School...

  4. NIH Clinical Centers

    Data.gov (United States)

    Federal Laboratory Consortium — The NIH Clinical Center consists of two main facilities: The Mark O. Hatfield Clinical Research Center, which opened in 2005, houses inpatient units, day hospitals,...

  5. Automating the Media Center.

    Science.gov (United States)

    Holloway, Mary A.

    1988-01-01

    Discusses the need to develop more efficient information retrieval skills by the use of new technology. Lists four stages used in automating the media center. Describes North Carolina's pilot programs. Proposes benefits and looks at the media center's future. (MVL)

  6. National Rehabilitation Information Center

    Science.gov (United States)

    Welcome to the National Rehabilitation Information Center! The National Rehabilitation Information Center (NARIC) is the library of the ...

  7. Day Care Centers

    Data.gov (United States)

    Department of Homeland Security — This database contains locations of day care centers for 50 states and Washington D.C. and Puerto Rico. The dataset only includes center based day care locations...

  8. Center for Functional Nanomaterials

    Data.gov (United States)

    Federal Laboratory Consortium — The Center for Functional Nanomaterials (CFN) explores the unique properties of materials and processes at the nanoscale. The CFN is a user-oriented research center...

  9. Hydrologic Engineering Center

    Data.gov (United States)

    Federal Laboratory Consortium — The Hydrologic Engineering Center (HEC), an organization within the Institute for Water Resources, is the designated Center of Expertise for the U.S. Army Corps of...

  11. BKG Data Center

    Science.gov (United States)

    Thorandt, Volkmar; Wojdziak, Reiner

    2013-01-01

    This report summarizes the activities and background information of the IVS Data Center for the year 2012. Included is information about functions, structure, technical equipment, and staff members of the BKG Data Center.

  12. Find a Health Center

    Data.gov (United States)

    U.S. Department of Health & Human Services — HRSA Health Centers care for you, even if you have no health insurance – you pay what you can afford based on your income. Health centers provide services that...

  13. Center of Attention.

    Science.gov (United States)

    Coffey, J. Steven; Wood-Steed, Ruth

    2001-01-01

    Illustrates how college and university student centers are becoming the institution's marketing tools. Explores how the Millennium Center at the University of Missouri in St. Louis exemplifies this new trend. (GR)

  14. The Comprehensive Learning Center

    Science.gov (United States)

    Peterson, Gary T.

    1975-01-01

    This paper describes the results of a study of community college learning resource centers as they exist today and examines some emerging functions which point toward the role of the center in the future. (DC)

  15. Center for Women Veterans

    Science.gov (United States)

  18. The Computational Physics Program of the national MFE Computer Center

    Energy Technology Data Exchange (ETDEWEB)

    Mirin, A.A.

    1989-01-01

    Since June 1974, the MFE Computer Center has been engaged in a significant computational physics effort. The principal objective of the Computational Physics Group is to develop advanced numerical models for the investigation of plasma phenomena and the simulation of present and future magnetic confinement devices. Another major objective of the group is to develop efficient algorithms and programming techniques for current and future generations of supercomputers. The Computational Physics Group has been involved in several areas of fusion research. One main area is the application of Fokker-Planck/quasilinear codes to tokamaks. Another major area is the investigation of resistive magnetohydrodynamics in three dimensions, with applications to tokamaks and compact toroids. A third area is the investigation of kinetic instabilities using a 3-D particle code; this work is often coupled with the task of numerically generating equilibria which model experimental devices. Ways to apply statistical closure approximations to study tokamak-edge plasma turbulence have been under examination, with the hope of being able to explain anomalous transport. Also, we are collaborating in an international effort to evaluate fully three-dimensional linear stability of toroidal devices. In addition to these computational physics studies, the group has developed a number of linear systems solvers for general classes of physics problems and has been making a major effort at ascertaining how to efficiently utilize multiprocessor computers. A summary of these programs is included in this paper. 6 tabs.

  19. Implementation and scaling of the fully coupled Terrestrial Systems Modeling Platform (TerrSysMP v1.0) in a massively parallel supercomputing environment - a case study on JUQUEEN (IBM Blue Gene/Q)

    Science.gov (United States)

    Gasper, F.; Goergen, K.; Shrestha, P.; Sulis, M.; Rihani, J.; Geimer, M.; Kollet, S.

    2014-10-01

    Continental-scale hyper-resolution simulations constitute a grand challenge in characterizing nonlinear feedbacks of states and fluxes of the coupled water, energy, and biogeochemical cycles of terrestrial systems. Tackling this challenge requires advanced coupling and supercomputing technologies for earth system models that are discussed in this study, utilizing the example of the implementation of the newly developed Terrestrial Systems Modeling Platform (TerrSysMP v1.0) on JUQUEEN (IBM Blue Gene/Q) of the Jülich Supercomputing Centre, Germany. The applied coupling strategies rely on the Multiple Program Multiple Data (MPMD) paradigm using the OASIS suite of external couplers, and require memory and load balancing considerations in the exchange of the coupling fields between different component models and the allocation of computational resources, respectively. Using the advanced profiling and tracing tool Scalasca to determine an optimum load balancing leads to a 19% speedup. In massively parallel supercomputer environments, the coupler OASIS-MCT is recommended, which resolves memory limitations that may be significant in case of very large computational domains and exchange fields as they occur in these specific test cases and in many applications in terrestrial research. However, model I/O and initialization in the petascale range still require major attention, as they constitute true big data challenges in light of future exascale computing resources. Based on a factor-two speedup due to compiler optimizations, a refactored coupling interface using OASIS-MCT and an optimum load balancing, the problem size in a weak scaling study can be increased by a factor of 64 from 512 to 32 768 processes while maintaining parallel efficiencies above 80% for the component models.
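
    For reference, a weak-scaling parallel efficiency like the "above 80% at 32 768 processes" figure quoted above is computed with the work per process held constant: efficiency is the baseline runtime divided by the runtime at the larger process count. The runtimes below are illustrative, not JUQUEEN measurements.

      baseline_procs, baseline_time = 512, 100.0       # seconds per coupled time step
      runs = {1024: 102.0, 4096: 108.0, 16384: 115.0, 32768: 123.0}
      for procs, time_s in runs.items():
          efficiency = baseline_time / time_s          # ideal weak scaling keeps the time constant
          print(f"{procs:6d} processes: efficiency = {efficiency:.2f}")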

  20. Exposing the Data Center

    OpenAIRE

    Sergejev, Ivan

    2014-01-01

    Given the rapid growth in the importance of the Internet, data centers - the buildings that store information on the web - are quickly becoming the most critical infrastructural objects in the world. However, so far they have received very little, if any, architectural attention. This thesis proclaims data centers to be the 'churches' of the digital society and proposes a new type of a publicly accessible data center. The thesis starts with a brief overview of the history of data centers ...

  1. Data center cooling method

    Energy Technology Data Exchange (ETDEWEB)

    Chainer, Timothy J.; Dang, Hien P.; Parida, Pritish R.; Schultz, Mark D.; Sharma, Arun

    2015-08-11

    A method aspect for removing heat from a data center may use liquid coolant cooled without vapor compression refrigeration on a liquid cooled information technology equipment rack. The method may also include regulating liquid coolant flow to the data center through a range of liquid coolant flow values with a controller-apparatus based upon information technology equipment temperature threshold of the data center.
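
    A toy control loop for the idea above: keep the equipment temperature near a threshold by moving the coolant flow set-point within an allowed range. This is not the patented method; read_rack_temperature() and set_pump_flow() are hypothetical stand-ins for the controller-apparatus, and the gains and limits are illustrative.

      import time

      FLOW_MIN, FLOW_MAX = 20.0, 120.0        # allowed coolant flow range, litres per minute
      TEMP_THRESHOLD = 32.0                   # IT equipment temperature threshold, degrees C
      GAIN = 5.0                              # flow change per degree of temperature error

      def regulate(read_rack_temperature, set_pump_flow, flow=60.0):
          while True:
              error = read_rack_temperature() - TEMP_THRESHOLD
              flow = min(FLOW_MAX, max(FLOW_MIN, flow + GAIN * error))
              set_pump_flow(flow)
              time.sleep(10)                  # controller sampling interval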

  2. Intel 80860 or I860: The million transistor RISC microprocessor chip with supercomputer capability. April 1988-September 1989 (Citations from the Computer data base). Report for April 1988-September 1989

    Energy Technology Data Exchange (ETDEWEB)

    1989-10-01

    This bibliography contains citations concerning Intel's new microprocessor which has more than a million transistors and is capable of performing up to 80 million floating-point operations per second (80 mflops). The I860 (originally code named the N-10 during development) is to be used in workstation type applications. It will be suited for problems such as fluid dynamics, molecular modeling, structural analysis, and economic modeling, which require supercomputer number crunching and advanced graphics. (Contains 64 citations fully indexed and including a title list.)

  3. Collaborating CPU and GPU for large-scale high-order CFD simulations with complex grids on the TianHe-1A supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Chuanfu, E-mail: xuchuanfu@nudt.edu.cn [College of Computer Science, National University of Defense Technology, Changsha 410073 (China); Deng, Xiaogang; Zhang, Lilun [College of Computer Science, National University of Defense Technology, Changsha 410073 (China); Fang, Jianbin [Parallel and Distributed Systems Group, Delft University of Technology, Delft 2628CD (Netherlands); Wang, Guangxue; Jiang, Yi [State Key Laboratory of Aerodynamics, P.O. Box 211, Mianyang 621000 (China); Cao, Wei; Che, Yonggang; Wang, Yongxian; Wang, Zhenghua; Liu, Wei; Cheng, Xinghua [College of Computer Science, National University of Defense Technology, Changsha 410073 (China)

    2014-12-01

    Programming and optimizing complex, real-world CFD codes on current many-core accelerated HPC systems is very challenging, especially when CPUs and accelerators must collaborate to fully tap the potential of heterogeneous systems. In this paper, with a tri-level hybrid and heterogeneous programming model using MPI + OpenMP + CUDA, we port and optimize our high-order multi-block structured CFD software HOSTA on the GPU-accelerated TianHe-1A supercomputer. HOSTA adopts two self-developed high-order compact finite difference schemes, WCNS and HDCS, that can simulate flows with complex geometries. We present a dual-level parallelization scheme for efficient multi-block computation on GPUs and perform particular kernel optimizations for high-order CFD schemes. The GPU-only approach achieves a speedup of about 1.3 when comparing one Tesla M2050 GPU with two Xeon X5670 CPUs. To achieve a greater speedup, we use the CPUs and the GPU collaboratively for HOSTA instead of a naive GPU-only approach. We present a novel scheme to balance the loads between the store-poor GPU and the store-rich CPU. Taking CPU and GPU load balance into account, we improve the maximum simulation problem size per TianHe-1A node for HOSTA by 2.3×; meanwhile, the collaborative approach improves performance by around 45% compared to the GPU-only approach. Further, to scale HOSTA on TianHe-1A, we propose a gather/scatter optimization to minimize PCI-e data transfer times for ghost and singularity data of 3D grid blocks, and overlap the collaborative computation and communication as far as possible using some advanced CUDA and MPI features. Scalability tests show that HOSTA can achieve a parallel efficiency of above 60% on 1024 TianHe-1A nodes. With our method, we have successfully simulated an EET high-lift airfoil configuration containing 800M cells and China's large civil airplane configuration containing 150M cells. To the best of our knowledge, those are the largest-scale CPU–GPU collaborative simulations
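
    The paper's balancing scheme is driven by measured performance and the GPU's limited device memory; the sketch below shows only the elementary starting point of such a scheme, splitting grid cells in proportion to throughput and capping the GPU share by its memory. The numbers are illustrative, not HOSTA's.

      def split_cells(total_cells, gpu_cells_per_s, cpu_cells_per_s,
                      gpu_mem_bytes, bytes_per_cell):
          gpu_share = gpu_cells_per_s / (gpu_cells_per_s + cpu_cells_per_s)
          gpu_cells = int(total_cells * gpu_share)
          gpu_capacity = gpu_mem_bytes // bytes_per_cell      # memory-limited cap
          gpu_cells = min(gpu_cells, gpu_capacity)
          return gpu_cells, total_cells - gpu_cells

      gpu, cpu = split_cells(total_cells=50_000_000,
                             gpu_cells_per_s=4.0e6, cpu_cells_per_s=3.0e6,
                             gpu_mem_bytes=3 * 1024**3, bytes_per_cell=200)
      print(f"GPU: {gpu} cells, CPU: {cpu} cells")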

  4. Timely Result-Data Offloading for Improved HPC Center Scratch Provisioning and Serviceability

    Energy Technology Data Exchange (ETDEWEB)

    Monti, Henri [Virginia Polytechnic Institute and State University (Virginia Tech); Butt, Ali R [Virginia Polytechnic Institute and State University (Virginia Tech); Vazhkudai, Sudharshan S [ORNL

    2011-01-01

    Modern High-Performance Computing (HPC) centers are facing a data deluge from emerging scientific applications. Supporting large data entails a significant commitment of the high-throughput center storage system, scratch space. However, the scratch space is typically managed using simple purge policies, without sophisticated end-user data services to balance resource consumption and user serviceability. End-user data services such as offloading are performed using point-to-point transfers that are unable to reconcile the center's purge deadlines and the users' delivery deadlines, unable to adapt to changing dynamics in the end-to-end data path, and are not fault-tolerant. Such inefficiencies can be prohibitive to sustaining high performance. In this paper, we address the above issues by designing a framework for the timely, decentralized offload of application result data. Our framework uses an overlay of user-specified intermediate and landmark sites to orchestrate a decentralized fault-tolerant delivery. We have implemented our techniques within a production job scheduler (PBS) and data transfer tool (BitTorrent). Our evaluation using both a real implementation and supercomputer job log-driven simulations shows that: the offloading times can be significantly reduced (90.4% for a 5 GB data transfer); the exposure window can be minimized while also meeting center-user Service Level Agreements.

  5. A call center primer.

    Science.gov (United States)

    Durr, W

    1998-01-01

    Call centers are strategically and tactically important to many industries, including the healthcare industry. Call centers play a key role in acquiring and retaining customers. The ability to deliver high-quality and timely customer service without much expense is the basis for the proliferation and expansion of call centers. Call centers are unique blends of people and technology, where performance indicates combining appropriate technology tools with sound management practices built on key operational data. While the technology is fascinating, the people working in call centers and the skill of the management team ultimately make a difference to their companies.

  6. Advanced Architectures for Astrophysical Supercomputing

    CERN Document Server

    Barsdell, Benjamin R; Fluke, Christopher J

    2010-01-01

    Astronomers have come to rely on the increasing performance of computers to reduce, analyze, simulate and visualize their data. In this environment, faster computation can mean more science outcomes or the opening up of new parameter spaces for investigation. If we are to avoid major issues when implementing codes on advanced architectures, it is important that we have a solid understanding of our algorithms. A recent addition to the high-performance computing scene that highlights this point is the graphics processing unit (GPU). The hardware originally designed for speeding-up graphics rendering in video games is now achieving speed-ups of $O(100\times)$ in general-purpose computation -- performance that cannot be ignored. We are using a generalized approach, based on the analysis of astronomy algorithms, to identify the optimal problem-types and techniques for taking advantage of both current GPU hardware and future developments in computing architectures.

  7. Supercomputer debugging workshop '92

    Energy Technology Data Exchange (ETDEWEB)

    Brown, J.S.

    1993-01-01

    This report contains papers or viewgraphs on the following topics: The ABCs of Debugging in the 1990s; Cray Computer Corporation; Thinking Machines Corporation; Cray Research, Incorporated; Sun Microsystems, Inc; Kendall Square Research; The Effects of Register Allocation and Instruction Scheduling on Symbolic Debugging; Debugging Optimized Code: Currency Determination with Data Flow; A Debugging Tool for Parallel and Distributed Programs; Analyzing Traces of Parallel Programs Containing Semaphore Synchronization; Compile-time Support for Efficient Data Race Detection in Shared-Memory Parallel Programs; Direct Manipulation Techniques for Parallel Debuggers; Transparent Observation of XENOOPS Objects; A Parallel Software Monitor for Debugging and Performance Tools on Distributed Memory Multicomputers; Profiling Performance of Inter-Processor Communications in an iWarp Torus; The Application of Code Instrumentation Technology in the Los Alamos Debugger; and CXdb: The Road to Remote Debugging.

  8. Supercomputing "Grid" passes latest test

    CERN Multimedia

    Dumé, Belle

    2005-01-01

    When the Large Hadron Collider (LHC) comes online at CERN in 2007, it will produce more data than any other experiment in the history of physics. Particle physicists have now passed another milestone in their preparations for the LHC by sustaining a continuous flow of 600 megabytes of data per second (MB/s) for 10 days from the Geneva laboratory to seven sites in Europe and the US (1/2 page)

  9. [Teacher enhancement at Supercomputing `96

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-02-13

    The SC'96 Education Program provided a three-day professional development experience for middle and high school science, mathematics, and computer technology teachers. The program theme was Computers at Work in the Classroom, and a majority of the sessions were presented by classroom teachers who have had several years experience in using these technologies with their students. The teachers who attended the program were introduced to classroom applications of computing and networking technologies and were provided to the greatest extent possible with lesson plans, sample problems, and other resources that could immediately be used in their own classrooms. The attached At a Glance Schedule and Session Abstracts describe in detail the three-day SC'96 Education Program. Also included is the SC'96 Education Program evaluation report and the financial report.

  10. Research on Non-intervention Information Acquisition and Public Sentiment Analysis System for Public Wi-Fi Wireless Networks Based on Supercomputer Platform

    Institute of Scientific and Technical Information of China (English)

    杨明; 舒明雷; 顾卫东; 郭强; 周书旺

    2013-01-01

    An information acquisition and public sentiment analysis system for city-wide public Wi-Fi wireless networks is presented, built on the petaflops computing platform of the National Supercomputer Center in Ji'nan. Based on non-intervention wireless packet capture technology, Web page recovery and fault-tolerant reassembly technology, multiple text mining technologies and mass data processing technology, the system can support forensics of various illegal activities in public Wi-Fi wireless networks, accurately analyze and predict public sentiment on the network, and provide comprehensive and accurate references for the public-opinion guidance work of the relevant authorities.

  11. Reliability and validity of the UCSD Performance-based Skills Assessment-Brief

    Institute of Scientific and Technical Information of China (English)

    崔界峰; 邹义壮; 王健; 陈楠; 范宏振; 姚晶; 段京辉

    2012-01-01

    Objective: To explore the reliability and validity of the Chinese version of the UCSD (University of California, San Diego) Performance-based Skills Assessment-Brief (UPSA-B), which is used to assess functional capacity in patients with schizophrenia. Methods: A total of 180 inpatients with schizophrenia meeting the diagnostic criteria of the International Classification of Diseases and Related Health Problems, Tenth Revision (ICD-10) were recruited by convenience sampling. They were assessed with the UPSA-B, the MATRICS Consensus Cognitive Battery (MCCB), the Positive and Negative Syndrome Scale (PANSS), the Nurses' Observation Scale for Inpatient Evaluation (NOSIE) and the Schizophrenia Quality of Life Scale (SQLS). A total of 116 healthy subjects from the community were selected and assessed with the UPSA-B and MCCB, to compute average completion duration, floor and ceiling effects, discrimination and concurrent validity. Among these samples, 5 schizophrenic patients and 5 normal controls were each assessed by 6 testers for the inter-tester reliability. Another 30 schizophrenic inpatients received test and retest of the UPSA-B at a 4-week interval for the test-retest reliability and practice effect. Results: The average completion duration of the UPSA-B was (11.6 ± 2.8) min. The UPSA-B showed floor effects (zero score) (0.6%, 1 person-time) and ceiling effects (full score) (0.6%, 1 person-time), and little practice effect (ES = 0.07, P = 0.626) over the 4-week test-retest. The test-retest and inter-tester reliability coefficients of the UPSA-B were 0.75 and 0.91 (ICC) respectively in schizophrenia. Cronbach's α was 0.83 and 0.72 for schizophrenia and normal controls respectively. In schizophrenic patients, the average score of the UPSA-B was lower than that in healthy controls by 1-2 standard deviations (P < 0.001), with ES values of 0.76-0.93. Discriminant analysis showed that the sensitivity, specificity and diagnostic consistency were 67.2%, 78.4% and 71.6% respectively. The UPSA-B scores were
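
    The internal-consistency figures quoted above (Cronbach's α of 0.83 and 0.72) come from the standard formula α = k/(k-1) · (1 - Σ item variances / variance of the total score), sketched below with illustrative item scores rather than study data.

      import numpy as np

      def cronbach_alpha(items):
          """items: (n_subjects, n_items) array of item scores."""
          items = np.asarray(items, dtype=float)
          k = items.shape[1]
          item_vars = items.var(axis=0, ddof=1)
          total_var = items.sum(axis=1).var(ddof=1)
          return k / (k - 1) * (1 - item_vars.sum() / total_var)

      rng = np.random.default_rng(1)
      ability = rng.normal(size=(100, 1))
      scores = ability + 0.8 * rng.normal(size=(100, 5))   # five correlated items
      print(round(cronbach_alpha(scores), 2))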

  12. Implementation and scaling of the fully coupled Terrestrial Systems Modeling Platform (TerrSysMP) in a massively parallel supercomputing environment – a case study on JUQUEEN (IBM Blue Gene/Q)

    Directory of Open Access Journals (Sweden)

    F. Gasper

    2014-06-01

    Continental-scale hyper-resolution simulations constitute a grand challenge in characterizing non-linear feedbacks of states and fluxes of the coupled water, energy, and biogeochemical cycles of terrestrial systems. Tackling this challenge requires advanced coupling and supercomputing technologies for earth system models that are discussed in this study, utilizing the example of the implementation of the newly developed Terrestrial Systems Modeling Platform (TerrSysMP) on JUQUEEN (IBM Blue Gene/Q) of the Jülich Supercomputing Centre, Germany. The applied coupling strategies rely on the Multiple Program Multiple Data (MPMD) paradigm and require memory and load balancing considerations in the exchange of the coupling fields between different component models and allocation of computational resources, respectively. These considerations can be addressed with advanced profiling and tracing tools leading to the efficient use of massively parallel computing environments, which is then mainly determined by the parallel performance of individual component models. However, the problem of model I/O and initialization in the peta-scale range requires major attention, because this constitutes a true big data challenge in the perspective of future exa-scale capabilities, which is unsolved.

  13. Implementation and scaling of the fully coupled Terrestrial Systems Modeling Platform (TerrSysMP) in a massively parallel supercomputing environment - a case study on JUQUEEN (IBM Blue Gene/Q)

    Science.gov (United States)

    Gasper, F.; Goergen, K.; Kollet, S.; Shrestha, P.; Sulis, M.; Rihani, J.; Geimer, M.

    2014-06-01

    Continental-scale hyper-resolution simulations constitute a grand challenge in characterizing non-linear feedbacks of states and fluxes of the coupled water, energy, and biogeochemical cycles of terrestrial systems. Tackling this challenge requires advanced coupling and supercomputing technologies for earth system models that are discussed in this study, utilizing the example of the implementation of the newly developed Terrestrial Systems Modeling Platform (TerrSysMP) on JUQUEEN (IBM Blue Gene/Q) of the Jülich Supercomputing Centre, Germany. The applied coupling strategies rely on the Multiple Program Multiple Data (MPMD) paradigm and require memory and load balancing considerations in the exchange of the coupling fields between different component models and allocation of computational resources, respectively. These considerations can be reached with advanced profiling and tracing tools leading to the efficient use of massively parallel computing environments, which is then mainly determined by the parallel performance of individual component models. However, the problem of model I/O and initialization in the peta-scale range requires major attention, because this constitutes a true big data challenge in the perspective of future exa-scale capabilities, which is unsolved.

  14. DoE Plasma Center for Momentum Transport and Flow Self-Organization in Plasmas: Non-linear Emergent Structure Formation in magnetized Plasmas and Rotating Magnetofluids

    Energy Technology Data Exchange (ETDEWEB)

    Forest, Cary B. [Univ. of Wisconsin, Madison, WI (United States). Dept. of Physics

    2016-11-10

    This report covers the UW-Madison activities that took place within a larger DoE Center administered and directed by Professor George Tynan at the University of California, San Diego. The work at Wisconsin will also be covered in the final reporting for the entire center, which will be submitted by UCSD. There were two main activities, one experimental and one theoretical in nature, as part of the Center activities at the University of Wisconsin, Madison. First, the Center supported an experimentally focused postdoc (Chris Cooper) to carry out fundamental studies of momentum transport in rotating and weakly magnetized plasma. His experimental work was done on the Plasma Couette Experiment, a cylindrical plasma confinement device in which plasma flow is created by electromagnetically stirring the plasma at its edge, facilitated by arrays of permanent magnets. Cooper's work involved developing optical techniques to measure the ion temperature and plasma flow through Doppler-shifted line radiation from the plasma argon ions. This included passive emission measurements and the development of a novel ring-summing Fabry-Perot spectroscopy system; the active system used a diode laser to induce fluorescence. On the theoretical side, CMTFO supported a postdoc (Johannes Pueschel) to carry out a gyrokinetic extension of residual zonal flow theory to the case with magnetic fluctuations, showing that magnetic stochasticity disrupts zonal flows. The work included a successful comparison with gyrokinetic simulations. This work and its connection to the broader CMTFO will be covered more thoroughly in the final CMTFO report from Professor Tynan.

  15. Relativistic Guiding Center Equations

    Energy Technology Data Exchange (ETDEWEB)

    White, R. B. [PPPL; Gobbin, M. [Euratom-ENEA Association

    2014-10-01

    In toroidal fusion devices electrons can quite easily reach relativistic velocities, so a nonrelativistic guiding center formalism is not sufficient for simulating runaway electrons and other high-energy phenomena. Relativistic guiding center equations, including flute-mode time-dependent field perturbations, are derived. The same variables as used in a previous nonrelativistic guiding center code are adopted, so that straightforward modifications of those equations can produce a relativistic version.
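
    For orientation, the standard relativistic relations that underlie such a guiding-center formalism are listed below; the paper's full perturbed-field equations of motion are not reproduced here.

      % Relativistic factor, gyroradius, and the relativistically invariant magnetic moment,
      % with p = \gamma m v the relativistic momentum and p_\perp its component across B:
      \gamma = \sqrt{1 + \frac{p^2}{m^2 c^2}}, \qquad
      \rho = \frac{p_\perp}{|q|\,B}, \qquad
      \mu = \frac{p_\perp^2}{2 m B}.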

  16. Test Control Center exhibit

    Science.gov (United States)

    2000-01-01

    Have you ever wondered how the engineers at John C. Stennis Space Center in Hancock County, Miss., test fire a Space Shuttle Main Engine? The Test Control Center exhibit at StenniSphere can answer your questions by simulating the test firing of a Space Shuttle Main Engine. A recreation of one of NASA's test control centers, the exhibit explains and portrays the 'shake, rattle and roar' that happens during a real test firing.

  17. Great Lakes Science Center

    Data.gov (United States)

    Federal Laboratory Consortium — Since 1927, Great Lakes Science Center (GLSC) research has provided critical information for the sound management of Great Lakes fish populations and other important...

  18. Electron Microscopy Center (EMC)

    Data.gov (United States)

    Federal Laboratory Consortium — The Electron Microscopy Center (EMC) at Argonne National Laboratory develops and maintains unique capabilities for electron beam characterization and applies those...

  19. Center for Deployment Psychology

    Data.gov (United States)

    Federal Laboratory Consortium — The Center for Deployment Psychology was developed to promote the education of psychologists and other behavioral health specialists about issues pertaining to the...

  20. Test Control Center (TCC)

    Data.gov (United States)

    Federal Laboratory Consortium — The Test Control Center (TCC) provides a consolidated facility for planning, coordinating, controlling, monitoring, and analyzing distributed test events. ,The TCC...

  1. Environmental Modeling Center

    Data.gov (United States)

    Federal Laboratory Consortium — The Environmental Modeling Center provides the computational tools to perform geostatistical analysis, to model ground water and atmospheric releases for comparison...

  2. Audio Visual Center

    Data.gov (United States)

    Federal Laboratory Consortium — The Audiovisual Services Center provides still photographic documentation with laboratory support, video documentation, video editing, video duplication, photo/video...

  3. Small Business Development Center

    Data.gov (United States)

    Small Business Administration — Small Business Development Centers (SBDCs) provide assistance to small businesses and aspiring entrepreneurs throughout the United States and its territories. SBDCs...

  4. Data Center at NICT

    Science.gov (United States)

    Ichikawa, Ryuichi; Sekido, Mamoru

    2013-01-01

    The Data Center at the National Institute of Information and Communications Technology (NICT) archives and releases the databases and analysis results processed at the Correlator and the Analysis Center at NICT. Regular VLBI sessions of the Key Stone Project VLBI Network were the primary objective of the Data Center. These regular sessions continued until the end of November 2001. In addition to the Key Stone Project VLBI sessions, NICT has been conducting geodetic VLBI sessions for various purposes, and these data are also archived and released by the Data Center.

  5. Advanced Simulation Center

    Data.gov (United States)

    Federal Laboratory Consortium — The Advanced Simulation Center consists of 10 individual facilities which provide missile and submunition hardware-in-the-loop simulation capabilities. The following...

  6. Surgery center joint ventures.

    Science.gov (United States)

    Zasa, R J

    1999-01-01

    Surgery centers have been accepted as a cost effective, patient friendly vehicle for delivery of quality ambulatory care. Hospitals and physician groups also have made them the vehicles for coming together. Surgery centers allow hospitals and physicians to align incentives and share benefits. It is one of the few types of health care businesses physicians can own without violating fraud and abuse laws. As a result, many surgery center ventures are now jointly owned by hospitals and physician groups. This article outlines common structures that have been used successfully to allow both to own and govern surgery centers.

  7. Chemical Security Analysis Center

    Data.gov (United States)

    Federal Laboratory Consortium — In 2006, by Presidential Directive, DHS established the Chemical Security Analysis Center (CSAC) to identify and assess chemical threats and vulnerabilities in the...

  8. Airline Operation Center Workstation

    Data.gov (United States)

    Department of Transportation — The Airline Operation Center Workstation (AOC Workstation) represents equipment available to users of the National Airspace system, outside of the FAA, that enables...

  9. Carbon Monoxide Information Center

    Medline Plus

    Full Text Available ... Carbon Monoxide Information Center (also available En Español): The Invisible Killer. Carbon monoxide, also known as ...

  10. Assessing the Assessment Center.

    Science.gov (United States)

    LaRue, James

    1989-01-01

    Describes the historical use of assessment centers as staff development and promotional tools and their current use in personnel selection. The elements that constitute a true assessment center are outlined, and a discussion of the advantages and disadvantages for employers and applicants focuses on positions in library administration. (10…

  11. Dimensioning large call centers

    NARCIS (Netherlands)

    S.C. Borst (Sem); A. Mandelbaum; M.I. Reiman

    2000-01-01

    We develop a framework for asymptotic optimization of a queueing system. The motivation is the staffing problem of call centers with 100's of agents (or more). Such a call center is modeled as an M/M/N queue, where the number of agents $N$ is large. Within our framework, we determine the
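
    As a rough illustration of the kind of computation that underlies such staffing questions (a generic sketch, not code from the paper; the function names and the 20% delay target are assumptions), the snippet below evaluates the Erlang-C delay probability of an M/M/N queue and searches for the smallest N that meets the target:

        from math import factorial

        def erlang_c(n_agents: int, offered_load: float) -> float:
            """Probability that an arriving call must wait in an M/M/N queue (Erlang C)."""
            if offered_load >= n_agents:
                return 1.0  # unstable regime: effectively every call waits
            a = offered_load
            top = (a ** n_agents / factorial(n_agents)) * (n_agents / (n_agents - a))
            bottom = sum(a ** k / factorial(k) for k in range(n_agents)) + top
            return top / bottom

        def smallest_staff(offered_load: float, max_wait_prob: float) -> int:
            """Smallest number of agents N with delay probability below the target."""
            n = int(offered_load) + 1  # stability requires N > offered load
            while erlang_c(n, offered_load) > max_wait_prob:
                n += 1
            return n

        # Example: 100 Erlangs of offered load, at most 20% of calls delayed.
        print(smallest_staff(offered_load=100.0, max_wait_prob=0.2))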

  12. Information Centers at NAL.

    Science.gov (United States)

    Frank, Robyn C.

    1989-01-01

    Descriptions of the 12 specialized information centers of the National Agricultural Library (NAL) include subject coverage, information services provided, information technologies used, and staffing. The development of the Rural Information Center, a joint venture between the Extension Service and NAL to provide information services to local…

  13. Handbook for Learning Centers.

    Science.gov (United States)

    Norwalk Board of Education, CT.

    The handbook for learning centers contains guidelines, forms, and supplementary information to be used with all children identified as having a learning disability, mild retardation, or sensory deprivation in the Norwalk, Connecticut public schools. It is stressed that the learning center should provide supportive services for at least 35 minutes…

  14. Funding Opportunity: Genomic Data Centers

    Science.gov (United States)

    Funding Opportunity CCG, Funding Opportunity Center for Cancer Genomics, CCG, Center for Cancer Genomics, CCG RFA, Center for cancer genomics rfa, genomic data analysis network, genomic data analysis network centers,

  15. PTSD: National Center for PTSD

    Medline Plus

    Full Text Available ...

  16. Relative Lyapunov Center Bifurcations

    DEFF Research Database (Denmark)

    Wulff, Claudia; Schilder, Frank

    2014-01-01

    Relative equilibria (REs) and relative periodic orbits (RPOs) are ubiquitous in symmetric Hamiltonian systems and occur, for example, in celestial mechanics, molecular dynamics, and rigid body motion. REs are equilibria, and RPOs are periodic orbits of the symmetry reduced system. Relative Lyapunov...... center bifurcations are bifurcations of RPOs from REs corresponding to Lyapunov center bifurcations of the symmetry reduced dynamics. In this paper we first prove a relative Lyapunov center theorem by combining recent results on the persistence of RPOs in Hamiltonian systems with a symmetric Lyapunov...... center theorem of Montaldi, Roberts, and Stewart. We then develop numerical methods for the detection of relative Lyapunov center bifurcations along branches of RPOs and for their computation. We apply our methods to Lagrangian REs of the N-body problem....

  17. High Performance Computing in Science and Engineering '16 : Transactions of the High Performance Computing Center, Stuttgart (HLRS) 2016

    CERN Document Server

    Kröner, Dietmar; Resch, Michael

    2016-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS) in 2016. The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  18. Energy efficient data centers

    Energy Technology Data Exchange (ETDEWEB)

    Tschudi, William; Xu, Tengfang; Sartor, Dale; Koomey, Jon; Nordman, Bruce; Sezgen, Osman

    2004-03-30

    Data Center facilities, prevalent in many industries and institutions, are essential to California's economy. Energy intensive data centers are crucial to California's industries, and many other institutions (such as universities) in the state, and they play an important role in the constantly evolving communications industry. To better understand the impact of the energy requirements and energy efficiency improvement potential in these facilities, the California Energy Commission's PIER Industrial Program initiated this project with two primary focus areas: First, to characterize current data center electricity use; and secondly, to develop a research "roadmap" defining and prioritizing possible future public interest research and deployment efforts that would improve energy efficiency. Although there are many opinions concerning the energy intensity of data centers and the aggregate effect on California's electrical power systems, there is very little publicly available information. Through this project, actual energy consumption at its end use was measured in a number of data centers. This benchmark data was documented in case study reports, along with site-specific energy efficiency recommendations. Additionally, other data center energy benchmarks were obtained through synergistic projects, prior PG&E studies, and industry contacts. In total, energy benchmarks for sixteen data centers were obtained. For this project, a broad definition of "data center" was adopted which included internet hosting, corporate, institutional, governmental, educational and other miscellaneous data centers. Typically these facilities require specialized infrastructure to provide high quality power and cooling for IT equipment. All of these data center types were considered in the development of an estimate of the total power consumption in California. Finally, a research "roadmap" was developed

  19. National Farm Medicine Center

    Science.gov (United States)

    The National Farm ...

  20. Mental Health Screening Center

    Science.gov (United States)

    Mental Health Screening Center: These online screening tools are not ... you have any concerns, see your doctor or mental health professional. Depression: This screening form was developed from ...

  1. Carbon Monoxide Information Center

    Medline Plus

    Full Text Available ...

  2. USU Patient Simulation Center

    Data.gov (United States)

    Federal Laboratory Consortium — The National Capital Area (NCA) Medical Simulation Center is a state-of-the-art training facility located near the main USU campus. It uses simulated patients (i.e.,...

  3. Global Hydrology Research Center

    Data.gov (United States)

    National Aeronautics and Space Administration — The GHRC is the data management and user services arm of the Global Hydrology and Climate Center. It encompasses the data and information management, supporting...

  4. National Automotive Center - NAC

    Data.gov (United States)

    Federal Laboratory Consortium — Encouraged by the advantages of collaboration, the U.S. Army Tank Automotive Research, Development and Engineering Center (TARDEC) worked with the Secretary of the...

  5. Reliability Centered Maintenance - Methodologies

    Science.gov (United States)

    Kammerer, Catherine C.

    2009-01-01

    Journal article about Reliability Centered Maintenance (RCM) methodologies used by United Space Alliance, LLC (USA) in support of the Space Shuttle Program at Kennedy Space Center. The USA Reliability Centered Maintenance program differs from traditional RCM programs because various methodologies are utilized to take advantage of their respective strengths for each application. Based on operational experience, USA has customized the traditional RCM methodology into a streamlined lean logic path and has implemented the use of statistical tools to drive the process. USA RCM has integrated many of the L6S tools into both RCM methodologies. The tools utilized in the Measure, Analyze, and Improve phases of a Lean Six Sigma project lend themselves to application in the RCM process. All USA RCM methodologies meet the requirements defined in SAE JA 1011, Evaluation Criteria for Reliability-Centered Maintenance (RCM) Processes. The proposed article explores these methodologies.

  6. HUD Homeownership Centers

    Data.gov (United States)

    Department of Housing and Urban Development — HUD Homeownership Centers (HOCs) insure single family Federal Housing Administration (FHA) mortgages and oversee the selling of HUD homes. FHA has four Homeownership...

  7. Hazardous Waste Research Center

    Data.gov (United States)

    Federal Laboratory Consortium — A full-service research and evaluation center equipped with safety equipment, a high-bay pilot studies area, and a large-scale pilot studies facility The U.S. Army...

  8. World Trade Center

    Index Scriptorium Estoniae

    2006-01-01

    Premiere of the disaster film "World Trade Center": screenwriter Andrea Berloff; director Oliver Stone; production designer Jan Roelfs; starring Nicholas Cage, Michael Pena, Stephen Dorff and others; United States, 2006. Also about the real people on whom the film is based.

  9. Health Center Controlled Network

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Health Center Controlled Network (HCCN) tool is a locator tool designed to make data and information concerning HCCN resources more easily available to our...

  10. The ORFEUS Data Center

    Directory of Open Access Journals (Sweden)

    B. Dost

    1994-06-01

    Full Text Available In 1993 the ORFEUS Data Center (ODC; Dost, 1991) changed hosting organisation. It moved within the Netherlands from the University of Utrecht to the Royal Netherlands Meteorological Institute (KNMI) in de Bilt. This change in hosting organisation was necessary to ensure longer term stability in the operation of the ODC. Key issues for the ODC are rapid on-line data access and quality-controlled, complete and efficient off-line data access. During 1992 the ODC became the European node in the international SPYDER system, which provides near real-time access to digital broadband data from selected high-quality stations. Electronic messages trigger several centers well distributed over the globe. These centers then collect the data by modem from selected stations in their region. Finally, data are distributed between data centers over the internet.

  11. Advanced data center economy

    OpenAIRE

    Sukhov, R.; Amzarakov, M.; E. Isaev

    2013-01-01

    The article addresses the basic Data Center (DC) drivers of price and engineering, which specify the rules and price evaluation for creation and further operation. It also covers the DC energy efficiency concept and its influence on DC initial price, operation costs, and Total Cost of Ownership.

  13. Carbon Monoxide Information Center

    Medline Plus

    Full Text Available ... Featured Resources: CPSC announces winners of carbon monoxide poster contest ...

  14. Cooperative Tagging Center (CTC)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Cooperative Tagging Center (CTC) began as the Cooperative Game Fish Tagging Program (GTP) at Woods Hole Oceanographic Institute (WHOI) in 1954. The GTP was...

  15. NMA Analysis Center

    Science.gov (United States)

    Kierulf, Halfdan Pascal; Andersen, Per Helge

    2013-01-01

    The Norwegian Mapping Authority (NMA) has during the last few years had a close cooperation with the Norwegian Defence Research Establishment (FFI) in the analysis of space geodetic data using the GEOSAT software. In 2012 NMA took over the full responsibility for the GEOSAT software. This implies that FFI stopped being an IVS Associate Analysis Center in 2012. NMA has been an IVS Associate Analysis Center since 28 October 2010. NMA's contributions to the IVS as an Analysis Center focus primarily on routine production of session-by-session unconstrained and consistent normal equations by GEOSAT as input to the IVS combined solution. After the recent improvements, we expect that VLBI results produced with GEOSAT will be consistent with results from the other VLBI Analysis Centers to a satisfactory level.

  16. Center for Women Veterans

    Science.gov (United States)

  17. Centering in Japanese Discourse

    CERN Document Server

    Walker, M; Côté, S; Walker, Marilyn; Iida, Masayo; Cote, Sharon

    1996-01-01

    In this paper we propose a computational treatment of the resolution of zero pronouns in Japanese discourse, using an adaptation of the centering algorithm. We are able to factor language-specific dependencies into one parameter of the centering algorithm. Previous analyses have stipulated that a zero pronoun and its cospecifier must share a grammatical function property such as Subject or NonSubject. We show that this property-sharing stipulation is unneeded. In addition we propose the notion of topic ambiguity within the centering framework, which predicts some ambiguities that occur in Japanese discourse. This analysis has implications for the design of language-independent discourse modules for Natural Language systems. The centering algorithm has been implemented in an HPSG Natural Language system with both English and Japanese grammars.
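
    As a very rough sketch of the general idea behind centering-style resolution (not the authors' adapted algorithm; the salience ordering and data shapes are invented for illustration), the snippet below picks as antecedent the most salient forward-looking center of the previous utterance that is also a plausible candidate:

        from typing import List, Optional

        def resolve_zero_pronoun(prev_cf: List[str], candidates: List[str]) -> Optional[str]:
            """prev_cf: forward-looking centers of the previous utterance, already
            ordered by salience (e.g. topic > subject > object > other)."""
            for entity in prev_cf:
                if entity in candidates:
                    return entity  # the highest-ranked compatible center wins
            return None

        # Toy example: the previous utterance mentioned Taro (topic) and a book (object).
        print(resolve_zero_pronoun(["Taro", "book"], candidates=["book", "Taro"]))  # -> Taro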

  18. Accredited Birth Centers

    Science.gov (United States)

    ... 9743 Accredited since January 2016 98 Bright Eyes Midwifery and Wild Rivers Women's Health Accredited 29135 Ellensburg ... Accredited since November 2015 96 Footprints in Time Midwifery Services and Birth Center Accredited 351 N. Water ...

  19. FEMA Disaster Recovery Centers

    Data.gov (United States)

    Department of Homeland Security — This is a search site for FEMA's Disaster Recovery Centers (DRC). A DRC is a readily accessible facility or mobile office set up by FEMA where applicants may go for...

  20. Center for Contaminated Sediments

    Data.gov (United States)

    Federal Laboratory Consortium — The U.S. Army Corps of Engineers Center for Contaminated Sediments serves as a clearinghouse for technology and expertise concerned with contaminated sediments. The...

  1. Center Innovation Fund Program

    Data.gov (United States)

    National Aeronautics and Space Administration — To stimulate and encourage creativity and innovation within the NASA Centers. The activities are envisioned to fall within the scope of NASA Space Technology or...

  2. Advanced Missile Signature Center

    Data.gov (United States)

    Federal Laboratory Consortium — The Advanced Missile Signature Center (AMSC) is a national facility supporting the Missile Defense Agency (MDA) and other DoD programs and customers with analysis,...

  3. Test Control Center

    Science.gov (United States)

    2000-01-01

    At the test observation periscope in the Test Control Center exhibit in StenniSphere at the John C. Stennis Space Center in Hancock County, Miss., visitors can observe a test of a Space Shuttle Main Engine exactly as test engineers might see it during a real engine test. The Test Control Center exhibit exactly simulates not only the test control environment, but also the procedure of testing a rocket engine. Designed to entertain while educating, StenniSphere includes informative displays and exhibits from NASA's lead center for rocket propulsion and remote sensing applications. StenniSphere is open free of charge from 9 a.m. to 5 p.m. daily.

  4. Oil Reserve Center Established

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    Like other countries, China has started to grow its strategic oil reserve in case oil supplies are cut. On December 18, 2007, the National Development and Reform Commission (NDRC), China's top economic planner, announced that the national oil reserve center has been officially launched. The supervisory system over the oil reserves has three levels: the energy department of the NDRC, the oil reserve center, and the reserve bases.

  5. National Biocontainment Training Center

    Science.gov (United States)

    2015-08-01

    than ever before as scientists push to understand the pathology and develop diagnostics, vaccines and therapeutics for deadly diseases like Ebola...Hardcastle, Vickie Jones, Sheri Leavitt, and Belinda Rivera. Gulf Coast Consortium Postdoctoral Veterinary Training Program - A clinical veterinarian from...the Center for Comparative Medicine at Baylor College of Medicine in Houston and a veterinarian from the University of Texas Health Science Center

  6. Data center cooling system

    Energy Technology Data Exchange (ETDEWEB)

    Chainer, Timothy J; Dang, Hien P; Parida, Pritish R; Schultz, Mark D; Sharma, Arun

    2015-03-17

    A data center cooling system may include heat transfer equipment to cool a liquid coolant without vapor compression refrigeration, and the liquid coolant is used on a liquid cooled information technology equipment rack housed in the data center. The system may also include a controller-apparatus to regulate the liquid coolant flow to the liquid cooled information technology equipment rack through a range of liquid coolant flow values based upon information technology equipment temperature thresholds.
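
    A minimal sketch of the kind of threshold-based flow regulation the abstract describes (the temperature thresholds, flow range, and function interface are assumptions for illustration, not details taken from the patent):

        def coolant_flow_setpoint(it_temp_c: float,
                                  low_thresh_c: float = 25.0,
                                  high_thresh_c: float = 45.0,
                                  min_flow_lpm: float = 5.0,
                                  max_flow_lpm: float = 30.0) -> float:
            """Map rack IT-equipment temperature to a coolant flow setpoint,
            interpolating linearly between a low and a high temperature threshold."""
            if it_temp_c <= low_thresh_c:
                return min_flow_lpm
            if it_temp_c >= high_thresh_c:
                return max_flow_lpm
            frac = (it_temp_c - low_thresh_c) / (high_thresh_c - low_thresh_c)
            return min_flow_lpm + frac * (max_flow_lpm - min_flow_lpm)

        # Example: a rack running at 35 C gets roughly the mid-range flow (17.5 l/min).
        print(coolant_flow_setpoint(35.0))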

  7. Argonne's Laboratory computing center - 2007 annual report.

    Energy Technology Data Exchange (ETDEWEB)

    Bair, R.; Pieper, G. W.

    2008-05-28

    Argonne National Laboratory founded the Laboratory Computing Resource Center (LCRC) in the spring of 2002 to help meet pressing program needs for computational modeling, simulation, and analysis. The guiding mission is to provide critical computing resources that accelerate the development of high-performance computing expertise, applications, and computations to meet the Laboratory's challenging science and engineering missions. In September 2002 the LCRC deployed a 350-node computing cluster from Linux NetworX to address Laboratory needs for mid-range supercomputing. This cluster, named 'Jazz', achieved over a teraflop of computing power (10^12 floating-point calculations per second) on standard tests, making it the Laboratory's first terascale computing system and one of the 50 fastest computers in the world at the time. Jazz was made available to early users in November 2002 while the system was undergoing development and configuration. In April 2003, Jazz was officially made available for production operation. Since then, the Jazz user community has grown steadily. By the end of fiscal year 2007, there were over 60 active projects representing a wide cross-section of Laboratory expertise, including work in biosciences, chemistry, climate, computer science, engineering applications, environmental science, geoscience, information science, materials science, mathematics, nanoscience, nuclear engineering, and physics. Most important, many projects have achieved results that would have been unobtainable without such a computing resource. The LCRC continues to foster growth in the computational science and engineering capability and quality at the Laboratory. Specific goals include expansion of the use of Jazz to new disciplines and Laboratory initiatives, teaming with Laboratory infrastructure providers to offer more scientific data management capabilities, expanding Argonne staff use of national computing facilities, and improving the scientific

  8. "Infotonics Technology Center"

    Energy Technology Data Exchange (ETDEWEB)

    Fritzemeier, L. [Infotonics Technology Center Inc., Canandaigua, NY (United States); Boysel, M. B. [Infotonics Technology Center Inc., Canandaigua, NY (United States); Smith, D. R. [Infotonics Technology Center Inc., Canandaigua, NY (United States)

    2004-09-30

    During this grant period July 15, 2002 thru September 30, 2004, the Infotonics Technology Center developed the critical infrastructure and technical expertise necessary to accelerate the development of sensors, alternative lighting and power sources, and other specific subtopics of interest to Department of Energy. Infotonics fosters collaboration among industry, universities and government and operates as a national center of excellence to drive photonics and microsystems development and commercialization. A main goal of the Center is to establish a unique, world-class research and development facility. A state-of-the-art microsystems prototype and pilot fabrication facility was established to enable rapid commercialization of new products of particular interest to DOE. The Center has three primary areas of photonics and microsystems competency: device research and engineering, packaging and assembly, and prototype and pilot-scale fabrication. Center activities focused on next generation optical communication networks, advanced imaging and information sensors and systems, micro-fluidic systems, assembly and packaging technologies, and biochemical sensors. With targeted research programs guided by the wealth of expertise of Infotonics business and scientific staff, the fabrication and packaging facility supports and accelerates innovative technology development of special interest to DOE in support of its mission and strategic defense, energy, and science goals.

  9. Engineer Research and Development Center's Materials Testing Center (MTC)

    Data.gov (United States)

    Federal Laboratory Consortium — The Engineer Research and Development Center's Materials Testing Center (MTC) is committed to quality testing and inspection services that are delivered on time and...

  10. International Water Center

    Science.gov (United States)

    The urban district of Nancy and the Town of Nancy, France, have taken the initiative of creating an International Center of Water (Centre International de l'Eau à Nancy—NAN.C.I.E.) in association with two universities, six engineering colleges, the Research Centers of Nancy, the Rhine-Meuse Basin Agency, and the Chamber of Commerce and Industry. The aim of this center is to promote research and technology transfer in the areas of water and sanitation. In 1985 it will initiate a research program drawing on the experience of 350 researchers and engineers of various disciplines who have already been assigned to research in these fields. The research themes, the majority of which will be multidisciplinary, concern aspects of hygiene and health, the engineering of industrial processes, water resources, and the environment and agriculture. A specialist training program offering five types of training aimed at university graduates, graduates of engineering colleges, or experts, will start in October 1984.

  11. PTSD: National Center for PTSD

    Medline Plus

    Full Text Available ...

  12. Hospitals report on cancer centers.

    Science.gov (United States)

    Rees, T

    2001-01-01

    Woman's Hospital, Baton Rouge, La., is first-place winner among cancer centers. Holy Cross Hospital's Michael and Dianne Bienes Comprehensive Cancer Center, Ft. Lauderdale, Fla., is named second; and, Cardinal Health System's Ball Cancer Center, Muncie, Ind., third.

  13. Patient-centered Care.

    Science.gov (United States)

    Reynolds, April

    2009-01-01

    Patient-centered care focuses on the patient and the individual's particular health care needs. The goal of patient-centered health care is to empower patients to become active participants in their care. This requires that physicians, radiologic technologists and other health care providers develop good communication skills and address patient needs effectively. Patient-centered care also requires that the health care provider become a patient advocate and strive to provide care that not only is effective but also safe. For radiologic technologists, patient-centered care encompasses principles such as the as low as reasonably achievable (ALARA) concept and contrast media safety. Patient-centered care is associated with a higher rate of patient satisfaction, adherence to suggested lifestyle changes and prescribed treatment, better outcomes and more cost-effective care. This article is a Directed Reading. Your access to Directed Reading quizzes for continuing education credit is determined by your area of interest. For access to other quizzes, go to www.asrt.org/store. According to one theory, most patients judge the quality of their healthcare much like they rate an airplane flight. They assume that the airplane is technically viable and is being piloted by competent people. Criteria for judging a particular airline are personal and include aspects like comfort, friendly service and on-time schedules. Similarly, patients judge the standard of their healthcare on nontechnical aspects, such as a healthcare practitioner's communication and "soft skills." Most are unable to evaluate a practitioner's level of technical skill or training, so the qualities they can assess become of the utmost importance in satisfying patients and providing patient-centered care.(1).

  14. Hydrologic Modeling at the National Water Center: Operational Implementation of the WRF-Hydro Model to support National Weather Service Hydrology

    Science.gov (United States)

    Cosgrove, B.; Gochis, D.; Clark, E. P.; Cui, Z.; Dugger, A. L.; Fall, G. M.; Feng, X.; Fresch, M. A.; Gourley, J. J.; Khan, S.; Kitzmiller, D.; Lee, H. S.; Liu, Y.; McCreight, J. L.; Newman, A. J.; Oubeidillah, A.; Pan, L.; Pham, C.; Salas, F.; Sampson, K. M.; Smith, M.; Sood, G.; Wood, A.; Yates, D. N.; Yu, W.; Zhang, Y.

    2015-12-01

    The National Weather Service (NWS) National Water Center (NWC) is collaborating with the NWS National Centers for Environmental Prediction (NCEP) and the National Center for Atmospheric Research (NCAR) to implement a first-of-its-kind operational instance of the Weather Research and Forecasting (WRF)-Hydro model over the Continental United States (CONUS) and contributing drainage areas on the NWS Weather and Climate Operational Supercomputing System (WCOSS) supercomputer. The system will provide seamless, high-resolution, continuously cycling forecasts of streamflow and other hydrologic outputs of value from both deterministic- and ensemble-type runs. WRF-Hydro will form the core of the NWC national water modeling strategy, supporting NWS hydrologic forecast operations along with emergency response and water management efforts of partner agencies. Input and output from the system will be comprehensively verified via the NWC Water Resource Evaluation Service. Hydrologic events occur on a wide range of temporal scales, from fast-acting flash floods to long-term flow events impacting water supply. In order to capture this range of events, the initial operational WRF-Hydro configuration will feature 1) hourly analysis runs, 2) short- and medium-range deterministic forecasts out to two-day and ten-day horizons, and 3) long-range ensemble forecasts out to 30 days. All three of these configurations are underpinned by a 1 km execution of the NoahMP land surface model, with channel routing taking place on 2.67 million NHDPlusV2 catchments covering the CONUS and contributing areas. Additionally, the short- and medium-range forecast runs will feature surface and sub-surface routing on a 250 m grid, while the hourly analyses will feature this same 250 m routing in addition to nudging-based assimilation of US Geological Survey (USGS) streamflow observations. A limited number of major reservoirs will be configured within the model to begin to represent the first-order impacts of
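
    The three operational configurations described above can be summarized schematically as follows; the dictionary layout and key names are only an illustrative sketch, not an actual WRF-Hydro namelist:

        # Schematic summary of the planned National Water Center WRF-Hydro configurations.
        national_water_model_configs = {
            "analysis_and_assimilation": {
                "cycle": "hourly",
                "land_surface": "NoahMP at 1 km",
                "terrain_routing": "250 m grid",
                "channel_routing": "2.67 million NHDPlusV2 catchments",
                "streamflow_assimilation": "nudging to USGS observations",
            },
            "short_and_medium_range": {
                "type": "deterministic",
                "horizons": ["2 days", "10 days"],
                "land_surface": "NoahMP at 1 km",
                "terrain_routing": "250 m grid",
                "channel_routing": "2.67 million NHDPlusV2 catchments",
            },
            "long_range": {
                "type": "ensemble",
                "horizon": "30 days",
                "land_surface": "NoahMP at 1 km",
                "channel_routing": "2.67 million NHDPlusV2 catchments",
            },
        }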

  15. Call Center Capacity Planning

    DEFF Research Database (Denmark)

    Nielsen, Thomas Bang

    in modern call centers allows for a high level of customization, but also induces complicated operational processes. The size of the industry together with the complex and labor intensive nature of large call centers motivates the research carried out to understand the underlying processes. The customizable...... groups are further analyzed. The design of the overflow policies is optimized using Markov Decision Processes and a gain with regard to service levels is obtained. Also, the fixed threshold policy is investigated and found to be appropriate when one class is given high priority and when it is desired...
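
    A minimal sketch of a fixed-threshold overflow rule of the kind mentioned above (a generic illustration under assumed names and threshold, not the model analyzed in the thesis): an overflowing call is routed to the secondary agent group only while that group keeps a fixed number of agents in reserve for its own high-priority traffic.

        def route_call(primary_idle: int, secondary_idle: int, reserve_threshold: int = 3) -> str:
            """Fixed-threshold overflow policy for two agent groups."""
            if primary_idle > 0:
                return "primary"
            if secondary_idle > reserve_threshold:
                return "secondary"  # overflow allowed: enough agents left in reserve
            return "queue"          # otherwise wait for a primary agent

        print(route_call(primary_idle=0, secondary_idle=5))  # -> secondary
        print(route_call(primary_idle=0, secondary_idle=2))  # -> queue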

  16. User Centered Design

    DEFF Research Database (Denmark)

    Egbert, Maria; Matthews, Ben

    2012-01-01

    The interdisciplinary approach of User Centered Design is presented here with a focus on innovation in the design and use of hearing technologies as well as on the potential of innovation in interaction. This approach is geared towards developing new products, systems, technologies and practices...... based on an understanding of why so few persons with hearing loss use the highly advanced hearing technologies. In integrating Conversation Analysis (“CA”), audiology and User Centered Design, three disciplines which are collaborating together for the first time, we are addressing the following...

  17. QUAD FAMILY CENTERING.

    Energy Technology Data Exchange (ETDEWEB)

    PINAYEV, I.

    2005-11-01

    It is well known that beam position monitors (BPM) utilizing signals from pickup electrodes (PUE) provide good resolution and relative accuracy. The absolute accuracy (i.e., position of the orbit in the vacuum chamber) is not very good due to various reasons. To overcome the limitation it was suggested to use magnetic centers of quadrupoles for the calibration of the BPM [1]. The proposed method provides accuracy better than 200 microns for centering of the beam position monitors using modulation of the whole quadrupole family.
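
    A schematic of the underlying beam-based alignment idea (a generic sketch, not the procedure from the report; the data and fitting approach are assumed for illustration): the orbit response to a quadrupole strength modulation is proportional to the beam offset in that quadrupole, so the BPM reading at which the response vanishes gives the BPM offset with respect to the magnetic center.

        import numpy as np

        def bpm_offset_from_modulation(bpm_readings_mm, response_amplitudes):
            """Fit the orbit-response amplitude versus BPM reading with a line and
            return its zero crossing, i.e. the BPM reading at the quad magnetic center."""
            slope, intercept = np.polyfit(bpm_readings_mm, response_amplitudes, 1)
            return -intercept / slope

        # Toy data: the response crosses zero near a BPM reading of +0.2 mm.
        readings = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
        responses = 0.8 * (readings - 0.2)
        print(round(bpm_offset_from_modulation(readings, responses), 3))  # -> 0.2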

  18. Xichang Satellite Launch Center

    Institute of Scientific and Technical Information of China (English)

    LiuJie

    2004-01-01

    Xichang Satellite Launch Center (XSLC) is mainly for geosynchronous orbit launches. The main purpose of XSLC is to launch spacecraft, such as broadcasting, communications and meteorological satellites, into geo-stationary orbit. Most of the commercial satellite launches of Long March vehicles have been from Xichang Satellite Launch Center. With 20 years' development, XSLC can launch 5 kinds of launch vehicles and send satellites into geostationary orbit and polar orbit. In the future, moon exploration satellites will also be launched from XSLC.

  19. Data Center Energy Retrofits

    OpenAIRE

    Pervilä, Mikko

    2013-01-01

    Within the field of computer science, data centers (DCs) are a major consumer of energy. A large part of that energy is used for cooling down the exhaust heat of the servers contained in the DCs. This thesis describes both the aggregate numbers of DCs and key flagship installations in detail. We then introduce the concept of Data Center Energy Retrofits, a set of low cost, easy to install techniques that may be used by the majority of DCs for reducing their energy consumption. The main c...

  20. Lied Transplant Center

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-02-01

    The Department of Energy has prepared an Environmental Assessment (DOE/EA-1143) evaluating the construction, equipping and operation of the proposed Lied Transplant Center at the University of Nebraska Medical Center in Omaha, Nebraska. Based on the analysis in the EA, the DOE has determined that the proposed action does not constitute a major federal action significantly affecting the quality of the human environment within the meaning of the National Environmental Policy Act of 1969 (NEPA). Therefore, the preparation of an Environmental Statement is not required.

  1. Patient-centered professionalism

    Directory of Open Access Journals (Sweden)

    Rapport F

    2012-03-01

    Full Text Available Hayley A Hutchings, Frances Rapport, College of Medicine, Swansea University, Swansea, Wales, United Kingdom. Introduction: Although the concept of patient-centered professionalism has been defined in the literature and adopted to some extent by key health care regulatory bodies, there has been little research that has identified what the concept means to professionals and patients. Aim: The purpose of this paper is to identify the key concepts of patient-centered professionalism as identified in the literature and to discuss these within the context of existing research across a variety of health care settings. Findings: Key documents have been identified from within nursing, medicine, and pharmacy, which outline what is expected of professionals within these professional groups according to their working practices. Although not defined as patient-centered professionalism, the principles outlined in these documents mirror the definitions of patient-centered professional care defined by Irvine and the Picker Institute and are remarkably similar across the three professions. While patients are identified as being at the heart of health care and professional working practice, research within the fields of community nursing and community pharmacy suggests that patient and professional views diverge as regards what is important, according to different group agendas. In addition, the delivery of patient-centered professional care is often difficult to achieve, due to numerous challenges to the provision of patient-centric care. Conclusion: According to the literature, patient-centered professionalism means putting the patient at the heart of care delivery and working in partnership with the patient to ensure patients are well informed and their care choices are respected. However, limited research has examined what the concept means to patients and health care professionals working with patients and how this fits with literature definitions. Further work is

  2. Center for Botanical Interaction Studies

    Data.gov (United States)

    Federal Laboratory Consortium — Research Area: Dietary Supplements, Herbs, Antioxidants. Program: Centers for Dietary Supplements Research: Botanicals. Description: This center will look at safety and...

  3. Starting a sleep center.

    Science.gov (United States)

    Epstein, Lawrence J; Valentine, Paul S

    2010-05-01

    The demand for sleep medicine services has grown tremendously during the last decade and will likely continue. To date, growth in demand has been met by growth in the number of new sleep centers. The need for more new centers will be dependent on market drivers that include increasing regulatory requirements, personnel shortages, integration of home sleep testing, changes in reimbursement, a shift in emphasis from diagnostics to treatment, and an increased consumer focus on sleep. The decision to open a new center should be based on understanding the market dynamics, completing a market analysis, and developing a business plan. The business plan should include an overview of the facility, a personnel and organizational structure, an evaluation of the business environment, a financial plan, a description of services provided, and a strategy for obtaining, managing, and extending a referral base. Implementation of the business plan and successful operation require ongoing planning and monitoring of operational parameters. The need for new sleep centers will likely continue, but the shifting market dynamics indicate a greater need for understanding the marketplace and careful planning.

  4. Memorial Alexander Center

    Directory of Open Access Journals (Sweden)

    AECK Associates, Arquitectos

    1958-05-01

    Full Text Available In Atlanta, the Georgia Institute of Technology has just expanded its sports facilities by building the Alexander Memorial Center. This new center consists of two buildings: a covered basketball court and an adjoining building with changing rooms, showers, a practice court, technical equipment, and the Georgia Tech radio station W.G.S.T.

  5. School Based Health Centers

    Science.gov (United States)

    Children's Aid Society, 2012

    2012-01-01

    School Based Health Centers (SBHC) are considered by experts as one of the most effective and efficient ways to provide preventive health care to children. Few programs are as successful in delivering health care to children at no cost to the patient, and where they are: in school. For many underserved children, The Children's Aid Society's…

  6. vCenter troubleshooting

    CERN Document Server

    Mills, Chuck

    2015-01-01

    The book is designed for the competent vCenter administrator or anyone who is responsible for the vSphere environment. It can be used as a guide by vSphere architects and VMware consultants for a successful vSphere solution. You should have good knowledge and an understanding of core elements and applications of the vSphere environment.

  7. LCA Center Denmark

    DEFF Research Database (Denmark)

    Hauschild, Michael Zwicky; Frydendal, Jeppe

    2006-01-01

    product-oriented environmental tools in companies, to ensure that the LCA efforts are based on a solid and scientific basis, and to maintain the well-established co-operation between all important actors in the LCA field in Denmark. A status is given on the achievements of LCA Center Denmark

  8. Carolinas Energy Career Center

    Energy Technology Data Exchange (ETDEWEB)

    Classens, Anver; Hooper, Dick; Johnson, Bruce

    2013-03-31

    Central Piedmont Community College (CPCC), located in Charlotte, North Carolina, established the Carolinas Energy Career Center (Center) - a comprehensive training entity to meet the dynamic needs of the Charlotte region's energy workforce. The Center provides training for high-demand careers in both conventional energy (fossil) and renewable energy (nuclear and solar technologies/energy efficiency). CPCC completed four tasks that will position the Center as a leading resource for energy career training in the Southeast: • Development and Pilot of a New Advanced Welding Curriculum, • Program Enhancement of Non-Destructive Examination (NDE) Technology, • Student Support through implementation of a model targeted toward Energy and STEM Careers to support student learning, • Project Management and Reporting. As a result of DOE funding support, CPCC achieved the following outcomes: • Increased capacity to serve and train students in emerging energy industry careers; • Developed new courses and curricula to support emerging energy industry careers; • Established new training/laboratory resources; • Generated a pool of highly qualified, technically skilled workers to support the growing energy industry sector.

  9. Data Center Site Redundancy

    OpenAIRE

    Brotherton, H M; Dietz, J. Eric

    2014-01-01

    Commonly, disaster contingency calls for geographic separation of redundant sites to maintain the needed redundancy. This document addresses issues for data center redundancy, including limits on distribution, distance, and location that may impact efficiency or energy use.

  10. Alternative Fuels Data Center

    Energy Technology Data Exchange (ETDEWEB)

    None

    2013-06-01

    Fact sheet describes the Alternative Fuels Data Center, which provides information, data, and tools to help fleets and other transportation decision makers find ways to reduce petroleum consumption through the use of alternative and renewable fuels, advanced vehicles, and other fuel-saving measures.

  11. Resource Centers; Some Ideas.

    Science.gov (United States)

    Klitzke, Dwight Mark; Starkey, John

    Teachers, Principals, and other public school personnel interested in establishing learning resource centers are provided with guidelines and a framework within which they can structure their efforts. Professional literature, observation, and experimental trials serve as the sources from which observations are drawn. The advantages of the resource…

  12. Mobile PET Center Project

    Science.gov (United States)

    Ryzhikova, O.; Naumov, N.; Sergienko, V.; Kostylev, V.

    2017-01-01

    Positron emission tomography is the most promising technology to monitor cancer and heart disease treatment. Stationary PET center requires substantial financial resources and time for construction and equipping. The developed mobile solution will allow introducing PET technology quickly without major investments.

  13. User-Centered Design through Learner-Centered Instruction

    Science.gov (United States)

    Altay, Burçak

    2014-01-01

    This article initially demonstrates the parallels between the learner-centered approach in education and the user-centered approach in design disciplines. Afterward, a course on human factors that applies learner-centered methods to teach user-centered design is introduced. The focus is on three tasks to identify the application of theoretical and…

  14. PTSD: National Center for PTSD

    Medline Plus

    Full Text Available ...

  15. Entanglement with Centers

    CERN Document Server

    Ma, Chen-Te

    2015-01-01

    Entanglement is a physical phenomenon in which each state cannot be described individually. Entanglement entropy gives a quantitative understanding of the entanglement. We use decomposition of the Hilbert space to discuss properties of the entanglement. Therefore, the partial trace operator becomes important to define the reduced density matrix from different centers, which commute with all elements in the Hilbert space, corresponding to different entanglement choices or different observations on the entangling surface. Entanglement entropy is expected to satisfy the strong subadditivity. We discuss decomposition of the Hilbert space for the strong subadditivity and other related inequalities. The entanglement entropy with centers can be computed from the Hamiltonian formulations systematically, provided that we know the wavefunctional. In the Hamiltonian formulation, it is easier to obtain the symmetry structure. We consider massless $p$-form theory as an example. The massless $p$-form theory in $(2p+2)$ dimensions has global symm...
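
    In standard notation (a generic sketch of the quantities referred to above, not equations quoted from the paper), the reduced density matrix, the entanglement entropy, and strong subadditivity read:

        \rho_A = \operatorname{Tr}_B \rho, \qquad
        S_A = -\operatorname{Tr}\left( \rho_A \ln \rho_A \right), \qquad
        S_{AB} + S_{BC} \ \ge\ S_B + S_{ABC}.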

  16. COMPUTATIONAL SCIENCE CENTER

    Energy Technology Data Exchange (ETDEWEB)

    DAVENPORT,J.

    2004-11-01

    The Brookhaven Computational Science Center brings together researchers in biology, chemistry, physics, and medicine with applied mathematicians and computer scientists to exploit the remarkable opportunities for scientific discovery which have been enabled by modern computers. These opportunities are especially great in computational biology and nanoscience, but extend throughout science and technology and include for example, nuclear and high energy physics, astrophysics, materials and chemical science, sustainable energy, environment, and homeland security.

  17. NRH Neuroscience Research Center

    Science.gov (United States)

    2008-06-01

    PRESENTATIONS: INTERNATIONAL STUDIES Stroke Rehabilitation Practice in New Zealand and the U.S. (H McNaughton) The European CERISE Study (K Putman) A...National Rehabilitation Hospital on September 8-9, 2007 • Our project team hosted Koen Putman, PT, PhD, a Fulbright Scholar from the Free...University of Brussels. The project team worked with Dr. Putman in merging his data from a 5-center (from 4 countries) stroke rehabilitation outcomes study

  18. Home media center

    OpenAIRE

    Valverde Martinez, Juan Miguel

    2014-01-01

    One of the most popular indoor entertainment systems nowadays is related to playing multimedia files, not only at home but also in public events such as watching movies in cinemas or playing music in nightclubs or pubs. Developers are responsible for making this easier and innovating in the development of these systems in different ways. This project's goal was to develop a home media center which allows the user to play multimedia files easily. In addition, the development was intended t...

  19. National Biocontainment Training Center

    Science.gov (United States)

    2016-10-01

    reporting period. Both BSL3 and BSL4 laboratories are housed in this new multi-story facility located on the University of Melbourne campus. The...BRL supports research programs in the National Center for Biodefense and Infectious Diseases, and the BRL is equipped with facilities to house ...Lower Rhine in The Netherlands. Doherty Institute – Melbourne , Australia – Miguel Grimaldo continues his consultations with the Peter Doherty

  20. Design of a charging model for a supercomputing CAE cloud platform based on a user feedback mechanism

    Institute of Scientific and Technical Information of China (English)

    马亿旿; 池鹏; 陈磊; 梁小林; 蔡立军

    2015-01-01

    The traditional charging model of a CAE cloud platform has several shortcomings: user behavior and feedback are not considered, a single charging model cannot support differentiated services, and business flexibility is poor. To address these issues, a plug-in charging model for the supercomputing CAE cloud platform and a charging algorithm based on a user feedback mechanism are proposed. The plug-in charging model treats the service as the basic unit and provides different charging schemes for a user's service in the form of plug-ins, which resolves the above problems and strengthens the business dynamics of the supercomputing CAE cloud platform. The charging algorithm dynamically adjusts a user's charging parameters according to the user's historical behavior and feedback, and reduces service costs according to the user's activity and importance, which improves service quality and user experience.
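
    A minimal sketch of how a plug-in charging scheme with a feedback-driven discount might be organized (the class names, discount formula, rates, and parameter ranges are assumptions for illustration and are not taken from the paper):

        from dataclasses import dataclass

        @dataclass
        class UserProfile:
            activity: float    # normalized activity level in [0, 1]
            importance: float  # normalized importance in [0, 1]

        class ChargingPlugin:
            """One plug-in per service: computes the base charge for a usage record."""
            def __init__(self, rate_per_core_hour: float):
                self.rate = rate_per_core_hour

            def base_charge(self, core_hours: float) -> float:
                return self.rate * core_hours

        def feedback_adjusted_charge(plugin: ChargingPlugin, core_hours: float,
                                     user: UserProfile, max_discount: float = 0.3) -> float:
            """Reduce the base charge according to the user's activity and importance."""
            discount = max_discount * 0.5 * (user.activity + user.importance)
            return plugin.base_charge(core_hours) * (1.0 - discount)

        cae_solver = ChargingPlugin(rate_per_core_hour=0.05)
        print(feedback_adjusted_charge(cae_solver, core_hours=1000,
                                       user=UserProfile(activity=0.8, importance=0.6)))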

  1. UC Merced Center for Computational Biology Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Colvin, Michael; Watanabe, Masakatsu

    2010-11-30

    made possible by the CCB from its inception until August, 2010, at the end of the final extension. Although DOE support for the center ended in August 2010, the CCB will continue to exist and support its original objectives. The research and academic programs fostered by the CCB have led to additional extramural funding from other agencies, and we anticipate that CCB will continue to provide support for quantitative and computational biology program at UC Merced for many years to come. Since its inception in fall 2004, CCB research projects have continuously had a multi-institutional collaboration with Lawrence Livermore National Laboratory (LLNL), and the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign, as well as individual collaborators at other sites. CCB affiliated faculty cover a broad range of computational and mathematical research including molecular modeling, cell biology, applied math, evolutional biology, bioinformatics, etc. The CCB sponsored the first distinguished speaker series at UC Merced, which had an important role is spreading the word about the computational biology emphasis at this new campus. One of CCB's original goals is to help train a new generation of biologists who bridge the gap between the computational and life sciences. To archive this goal, by summer 2006, a new program - summer undergraduate internship program, have been established under CCB to train the highly mathematical and computationally intensive Biological Science researchers. By the end of summer 2010, 44 undergraduate students had gone through this program. Out of those participants, 11 students have been admitted to graduate schools and 10 more students are interested in pursuing graduate studies in the sciences. The center is also continuing to facilitate the development and dissemination of undergraduate and graduate course materials based on the latest research in computational biology.

  2. Solar Technology Center

    Energy Technology Data Exchange (ETDEWEB)

    Boehm, Bob

    2011-04-27

    The Department of Energy, Golden Field Office, awarded a grant to the UNLV Research Foundation (UNLVRF) on August 1, 2005 to develop a solar and renewable energy information center. The Solar Technology Center (STC) is to be developed in two phases, with Phase I consisting of all activities necessary to determine feasibility of the project, including design and engineering, identification of land access issues and permitting necessary to determine project viability without permanently disturbing the project site, and completion of a National Environmental Policy Act (NEPA) Environmental Assessment. Phase II is the installation of infrastructure and related structures, which leads to commencement of operations of the STC. The STC is located in the Boulder City designated 3,000-acre Eldorado Valley Energy Zone, approximately 15 miles southwest of downtown Boulder City and fronting on Eldorado Valley Drive. The 33-acre vacant parcel has been leased to the Nevada Test Site Development Corporation (NTSDC) by Boulder City to accommodate a planned facility that will be synergistic with present and planned energy projects in the Zone. The parcel will be developed by the UNLVRF. The NTSDC is the economic development arm of the UNLVRF. UNLVRF will be the entity responsible for overseeing the lease and the development project to assure compliance with the lease stipulations established by Boulder City. The STC will be operated and maintained by University of Nevada, Las Vegas (UNLV) and its Center for Energy Research (UNLV-CER). Land parcels in the Eldorado Valley Energy Zone near the 33-acre lease are committed to the construction and operation of an electrical grid connected solar energy production facility. Other projects supporting renewable and solar technologies have been developed within the energy zone, with several more developments in the horizon.

  3. Center for Dielectric Studies,

    Science.gov (United States)

    1984-05-01

    Industry representatives at Penn State to plan in more detail the structuring of the Center, and the qualifications and conditions for Industrial ...composition: a slight weight gain was observed with the PMN-2% MN composition. The sintered...

  4. COMPUTATIONAL SCIENCE CENTER

    Energy Technology Data Exchange (ETDEWEB)

    DAVENPORT, J.

    2005-11-01

    The Brookhaven Computational Science Center brings together researchers in biology, chemistry, physics, and medicine with applied mathematicians and computer scientists to exploit the remarkable opportunities for scientific discovery which have been enabled by modern computers. These opportunities are especially great in computational biology and nanoscience, but extend throughout science and technology and include, for example, nuclear and high energy physics, astrophysics, materials and chemical science, sustainable energy, environment, and homeland security. To achieve our goals we have established a close alliance with applied mathematicians and computer scientists at Stony Brook and Columbia Universities.

  5. Memorial Center - Milwaukee

    Directory of Open Access Journals (Sweden)

    Saarinen, Eero

    1959-12-01

    Full Text Available The Milwaukee Memorial Center has been erected on the highest part of the city, crowning an enormous hill that commands a wide view of the urban area and a beautiful lake. Its location, at the end of a large bridge, required that it be treated in harmony with that structure, which is why the building was set on a completely open ground floor that leaves the structure exposed, its slender supports conveying a sense of monumentality.

  6. Phenomenological three center model

    CERN Document Server

    Poenaru, D N; Gherghescu, R A; Nagame, Y; Hamilton, J H; Ramayya, A V

    2001-01-01

    Experimental results on ternary fission of 252Cf suggest the existence of a short-lived quasi-molecular state. We present a three-center phenomenological model able to explain such a state by producing a new minimum in the deformation energy at a separation distance very close to the touching point. The shape parametrization chosen by us allows us to describe the essential geometry of the systems in terms of one independent coordinate, namely, the distance between the heavy fragment centers. The shell correction (also treated phenomenologically) only produces quantitative effects; qualitatively it is not essential for the new minimum. Half-lives of some quasi-molecular states which could be formed in 10B-accompanied fission of 236U, 236Pu, 246Cm, 252Cf, 252,256Fm, 256,260No, and 262Rf are roughly estimated. (authors)

  7. Sustainable Biofuels Development Center

    Energy Technology Data Exchange (ETDEWEB)

    Reardon, Kenneth F. [Colorado State Univ., Fort Collins, CO (United States)

    2015-03-01

    The mission of the Sustainable Bioenergy Development Center (SBDC) is to enhance the capability of America’s bioenergy industry to produce transportation fuels and chemical feedstocks on a large scale, with significant energy yields, at competitive cost, through sustainable production techniques. Research within the SBDC is organized in five areas: (1) Development of Sustainable Crops and Agricultural Strategies, (2) Improvement of Biomass Processing Technologies, (3) Biofuel Characterization and Engine Adaptation, (4) Production of Byproducts for Sustainable Biorefining, and (5) Sustainability Assessment, including evaluation of the ecosystem/climate change implication of center research and evaluation of the policy implications of widespread production and utilization of bioenergy. The overall goal of this project is to develop new sustainable bioenergy-related technologies. To achieve that goal, three specific activities were supported with DOE funds: bioenergy-related research initiation projects, bioenergy research and education via support of undergraduate and graduate students, and Research Support Activities (equipment purchases, travel to attend bioenergy conferences, and seminars). Numerous research findings in diverse fields related to bioenergy were produced from these activities and are summarized in this report.

  8. Muenster Karstadt shopping center

    Energy Technology Data Exchange (ETDEWEB)

    1987-08-01

    Displaying its goods on an area of 16,000 m², the five-story Karstadt shopping center is completed by a restaurant, an underground garage, administrative offices, personnel recreation rooms, and depot repair and treatment shops. Photographs showing the building, interior fittings, and supply systems, as well as plans and diagrams facilitate access to the building structure (ceilings, outer and inner walls) and building services systems. The figures and data presented refer to the structure and performance of heating systems (central units, district heating connections, pumped water heating systems), space heating (supply air-dependent control), decentralized ventilation systems (supply air/extracted air), large-scale refrigerating machinery (piston and turbo compressors), small-scale refrigerating machinery (cold storage rooms, refrigerators), sprinklers (3900 spray nozzles, water supply), sanitary systems (sewerage), power system (10 kV, transformers, low-voltage main distribution, emergency generating units, emergency lighting batteries), elevators (3 goods elevators, 2 passenger elevators), electric stairways (12 staggered escalators), and building services systems (telephone office, control center). (HWJ).

  9. Data center coolant switch

    Energy Technology Data Exchange (ETDEWEB)

    Iyengar, Madhusudan K.; Parida, Pritish R.; Schultz, Mark D.

    2015-10-06

    A data center cooling system is operated in a first mode; it has an indoor portion wherein heat is absorbed from components in the data center, and an outdoor heat exchanger portion wherein outside air is used to cool a first heat transfer fluid (e.g., water) present in at least the outdoor heat exchanger portion of the cooling system during the first mode. The first heat transfer fluid is a relatively high performance heat transfer fluid (as compared to the second fluid), and has a first heat transfer fluid freezing point. A determination is made that an appropriate time has been reached to switch from the first mode to a second mode. Based on this determination, the outdoor heat exchanger portion of the data cooling system is switched to a second heat transfer fluid, which is a relatively low performance heat transfer fluid, as compared to the first heat transfer fluid. It has a second heat transfer fluid freezing point lower than the first heat transfer fluid freezing point, and the second heat transfer fluid freezing point is sufficiently low to operate without freezing when the outdoor air temperature drops below a first predetermined relationship with the first heat transfer fluid freezing point.
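    The mode-switch decision described above can be summarized in a short sketch. The following Python fragment is illustrative only and is not taken from the patent; the fluid names, the glycol freezing point, and the safety margin are assumed values standing in for the patent's "predetermined relationship".

```python
# Illustrative sketch only -- not taken from the patent. Fluid properties,
# names, and the safety margin are assumed values.
from dataclasses import dataclass


@dataclass
class Coolant:
    name: str
    freezing_point_c: float  # freezing point in degrees Celsius


WATER = Coolant("water", 0.0)              # first fluid: high heat-transfer performance
GLYCOL_MIX = Coolant("glycol mix", -30.0)  # second fluid: lower performance, lower freezing point
SAFETY_MARGIN_C = 5.0                      # hypothetical stand-in for the "predetermined relationship"


def select_outdoor_coolant(outdoor_temp_c: float) -> Coolant:
    """Pick the coolant for the outdoor heat-exchanger loop.

    Stay in the first mode (water) while the outdoor air is comfortably above
    the first fluid's freezing point; otherwise switch to the second mode.
    """
    if outdoor_temp_c < WATER.freezing_point_c + SAFETY_MARGIN_C:
        return GLYCOL_MIX
    return WATER


if __name__ == "__main__":
    for t in (20.0, 6.0, 3.0, -10.0):
        print(f"{t:6.1f} C -> {select_outdoor_coolant(t).name}")
```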

  10. PTSD: National Center for PTSD

    Medline Plus

    Full Text Available ...

  11. Center for Veterinary Medicine (CVM)

    Data.gov (United States)

    Federal Laboratory Consortium — As seen on the center's logo, the mission statement for FDA's Center for Veterinary Medicine (CVM) reads: "Protecting Human and Animal Health." To achieve this broad...

  12. PTSD: National Center for PTSD

    Medline Plus

    Full Text Available ...

  13. Daugherty Memorial Assessment Center (DMAC)

    Data.gov (United States)

    Federal Laboratory Consortium — Daugherty Memorial Assessment Center (DMAC) is a 39,000-square-foot facility that doubles the warfare center's high-secured performance assessment capabilities. DMAC...

  14. PTSD: National Center for PTSD

    Medline Plus

    Full Text Available ...

  15. PTSD: National Center for PTSD

    Medline Plus

    Full Text Available ...

  16. Italy INAF Data Center Report

    Science.gov (United States)

    Negusini, M.; Sarti, P.

    2013-01-01

    This report summarizes the activities of the Italian INAF VLBI Data Center. Our Data Center is located in Bologna, Italy and belongs to the Institute of Radioastronomy, which is part of the National Institute of Astrophysics.

  17. Center for Beam Physics, 1992

    Energy Technology Data Exchange (ETDEWEB)

    1993-06-01

    This report contains the following information on the center for beam physics: Facilities; Organizational Chart; Roster; Profiles of Staff; Affiliates; Center Publications (1991--1993); and 1992 Summary of Activities.

  18. National Center for Biotechnology Information

    Science.gov (United States)

    Welcome to NCBI. The National Center for Biotechnology Information advances science and health by providing access ...

  19. Center for Prostate Disease Research

    Data.gov (United States)

    Federal Laboratory Consortium — The Center for Prostate Disease Research is the only free-standing prostate cancer research center in the U.S. This 20,000 square foot state-of-the-art basic science...

  20. Contact Center Manager Administration (CCMA)

    Data.gov (United States)

    Social Security Administration — CCMA is the server that provides a browser-based tool for contact center administrators and supervisors. It is used to manage and configure contact center resources...

  1. National Center on Family Homelessness

    Science.gov (United States)

    A staggering 2.5 million children are ... raise awareness of the current state of child homelessness in the United States, documents the number of ...

  2. VT Designated Village Centers Boundary

    Data.gov (United States)

    Vermont Center for Geographic Information — This community revitalization program helps maintain or evolve small to medium-sized historic centers with existing civic and commercial buildings. The designation...

  3. PTSD: National Center for PTSD

    Medline Plus

    Full Text Available ...

  4. On chains of centered valuations

    Directory of Open Access Journals (Sweden)

    Rachid Chibloun

    2003-01-01

    Full Text Available We study chains of centered valuations of a domain A and chains of centered valuations of A [X1,…,Xn] corresponding to valuations of A. Finally, we make some applications to chains of valuations centered on the same ideal of A [X1,…,Xn] and extending the same valuation of A.

  5. CURRICULUM GUIDE, CHILD CARE CENTERS.

    Science.gov (United States)

    California State Dept. of Education, Sacramento.

    CALIFORNIA CHILD CARE CENTERS WERE ESTABLISHED IN 1943 TO SUPPLY SERVICES TO CHILDREN OF WORKING MOTHERS. THE CHILD CARE PROGRAM PROVIDES, WITHIN NURSERY AND SCHOOLAGE CENTERS, CARE AND EDUCATIONAL SUPERVISION FOR PRESCHOOL AND ELEMENTARY SCHOOL AGE CHILDREN. THE PHILOSOPHY OF THE CHILD CENTER PROGRAM IS BASED UPON THE BELIEF THAT EACH CHILD…

  6. Mi Pueblo Food Center

    Directory of Open Access Journals (Sweden)

    Alexis A. Babb

    2016-04-01

    Full Text Available This case describes a current growth opportunity for Mi Pueblo Food Center, a Hispanic grocery chain with locations throughout the Bay Area, California. The CEO of Mi Pueblo is contemplating opening a new store location in East Palo Alto, CA, which has been without a local, full-service grocery store for over 20 years. Case objectives are for students to develop an understanding of how the grocery industry operates, the risks and opportunities associated with opening a new grocery store location, and the impact on social, environmental, and economic sustainability. The SWOT (Strengths, Weaknesses, Opportunities, Threats) framework is used to analyze whether or not it is feasible for Mi Pueblo to open a new location in East Palo Alto. This case may be used with students in graduate and advanced undergraduate courses.

  7. Concurrent engineering research center

    Science.gov (United States)

    Callahan, John R.

    1995-01-01

    The projects undertaken by the Concurrent Engineering Research Center (CERC) at West Virginia University are reported and summarized. CERC's participation in the Department of Defense's Defense Advanced Research Project relating to technology needed to improve the product development process is described, particularly in the area of advanced weapon systems. The efforts committed to improving collaboration among the diverse and distributed health care providers are reported, along with the research activities for NASA in Independent Software Verification and Validation. CERC also takes part in the electronic respirator certification initiated by The National Institute for Occupational Safety and Health, as well as in the efforts to find a solution to the problem of producing environment-friendly end-products for product developers worldwide. The 3M Fiber Metal Matrix Composite Model Factory Program is discussed. CERC technologies, facilities, and personnel-related issues are described, along with its library and technical services and recent publications.

  8. Citizen centered design

    Directory of Open Access Journals (Sweden)

    Ingrid Mulder

    2015-11-01

    Full Text Available Today architecture has to design for rapidly changing futures, in a citizen-centered way. That is, architecture needs to embrace meaningful design. Societal challenges ask for a new paradigm in city-making, which combines top-down public management with bottom-up social innovation to reach meaningful design. The biggest challenge is indeed to embrace a new collaborative attitude, a participatory approach, and to have the proper infrastructure that supports this social fabric. Participatory design and transition management are future-oriented and address people and institutions. Only by understanding people in context and the corresponding dynamics can one design for liveable and sustainable urban environments, embracing the human scale.

  9. Supernova Science Center

    Energy Technology Data Exchange (ETDEWEB)

    S. E. Woosley

    2008-05-05

    The Supernova Science Center (SNSC) was founded in 2001 to carry out theoretical and computational research leading to a better understanding of supernovae and related transients. The SNSC, a four-institution collaboration, included scientists from LANL, LLNL, the University of Arizona (UA), and the University of California at Santa Cruz (UCSC). Initially, the SNSC was funded for three years of operation, but in 2004 an opportunity was provided to submit a renewal proposal for two years. That proposal was funded and subsequently, at UCSC, a one-year no-cost extension was granted. The total operational time of the SNSC was thus July 15, 2001 - July 15, 2007. This document summarizes the research and findings of the SNSC and provides a cumulative publication list.

  10. Industrial Assessment Center Program

    Energy Technology Data Exchange (ETDEWEB)

    Dr. Dereje Agonafer

    2007-11-30

    The work described in this report was performed under the direction of the Industrial Assessment Center (IAC) at University of Texas at Arlington. The IAC at The University of Texas at Arlington is managed by Rutgers University under agreement with the United States Department of Energy Office of Industrial Technology, which financially supports the program. The objective of the IAC is to identify, evaluate, and recommend, through analysis of an industrial plant’s operations, opportunities to conserve energy and prevent pollution, thereby reducing the associated costs. IAC team members visit and survey the plant. Based upon observations made in the plant, preventive/corrective actions are recommended. At all times we try to offer specific and quantitative recommendations of cost savings, energy conservation, and pollution prevention to the plants we serve.

  11. The Center is Everywhere

    CERN Document Server

    Weinberg, David H

    2012-01-01

    "The Center is Everywhere" is a sculpture by Josiah McElheny, currently (through October 14, 2012) on exhibit at the Institute of Contemporary Art, Boston. The sculpture is based on data from the Sloan Digital Sky Survey (SDSS), using hundreds of glass crystals and lamps suspended from brass rods to represent the three-dimensional structure mapped by the SDSS through one of its 2000+ spectroscopic plugplates. This article describes the scientific ideas behind this sculpture, emphasizing the principle of the statistical homogeneity of cosmic structure in the presence of local complexity. The title of the sculpture is inspired by the work of the French revolutionary Louis Auguste Blanqui, whose 1872 book "Eternity Through The Stars: An Astronomical Hypothesis" was the first to raise the spectre of the infinite replicas expected in an infinite, statistically homogeneous universe. Puzzles of infinities, probabilities, and replicas continue to haunt modern fiction and contemporary discussions of inflationary cosmo...

  12. Regional Warning Center Sweden

    Science.gov (United States)

    Lundstedt, Henrik

    RWC-Sweden is operated by the Lund division of the Swedish Institute of Space Physics located at IDEON, a Science Research Technology Park. The Institute of Technology of Lund and Lund University are just adjacent to IDEON. This creates a lot of synergy effects. Copenhagen, with the Danish National Space Center (DNSC) and the Atmosphere Space Research Division of the Danish Meteorological Institute (DMI), is 45 min away via the bridge. The new LOIS Space Centre is located two hours away by car, north of Lund and just outside Växjö. The IRF Lund division is aiming at becoming a "Solar and Space Weather Center". We focus on solar magnetic activity, its influence on climate, and on space weather effects such as the effect of geomagnetically induced currents (GIC). Basic research: A PostDoc position on "Solar Magnetic Activity: Topology and Predictions" has recently been created. Research is carried on to improve predictions of solar magnetic activity. Preparations for using upcoming SDO vector magnetic fields are ongoing. Predictions: RWC-Sweden offers real-time forecasts of space weather and space weather effects based on neural networks. We participated in the NASA/NOAA Cycle 24 Prediction Panel. We have also participated in several ESA/EU solar-climate projects. New observation facilities: A distributed, wide-area radio facility (LOIS) for solar (and other space physics) observations, about 200 km distant, outside Växjö (Småland), in Ronneby (Blekinge) and Lund (Skåne), is planned to be used for tracking of CMEs and basic solar physics studies of the corona. The LOIS station outside Växjö has been up and running for the past three years. Bo Thidé has joined the Lund division as a guest professor. A new magnetometer at the Risinge LOIS station has been installed and calibrated and is expected to be operational in March 2008.

  13. Space Operations Learning Center

    Science.gov (United States)

    Lui, Ben; Milner, Barbara; Binebrink, Dan; Kuok, Heng

    2012-01-01

    The Space Operations Learning Center (SOLC) is a tool that provides an online learning environment where students can learn science, technology, engineering, and mathematics (STEM) through a series of training modules. SOLC is also an effective media for NASA to showcase its contributions to the general public. SOLC is a Web-based environment with a learning platform for students to understand STEM through interactive modules in various engineering topics. SOLC is unique in its approach to develop learning materials to teach schoolaged students the basic concepts of space operations. SOLC utilizes the latest Web and software technologies to present this educational content in a fun and engaging way for all grade levels. SOLC uses animations, streaming video, cartoon characters, audio narration, interactive games and more to deliver educational concepts. The Web portal organizes all of these training modules in an easily accessible way for visitors worldwide. SOLC provides multiple training modules on various topics. At the time of this reporting, seven modules have been developed: Space Communication, Flight Dynamics, Information Processing, Mission Operations, Kids Zone 1, Kids Zone 2, and Save The Forest. For the first four modules, each contains three components: Flight Training, Flight License, and Fly It! Kids Zone 1 and 2 include a number of educational videos and games designed specifically for grades K-6. Save The Forest is a space operations mission with four simulations and activities to complete, optimized for new touch screen technology. The Kids Zone 1 module has recently been ported to Facebook to attract wider audience.

  14. "0" and "1" of Supercomputer: Design of Yinhe Building in the University of Defense Technology

    Institute of Scientific and Technical Information of China (English)

    宋明星; 魏春雨; 尹佳斌

    2011-01-01

    Through an analysis of the planning, spatial composition, modeling, and appropriate ecological technology in the design of the Yinhe Building, this article explains the relationship between the architectural design approach and the functional demands of the supercomputer research and testing center. The planning considers the connection between the main computer room and the north and south buildings; the spatial composition is achieved with architectural vocabularies such as atriums, gardens and panoramic lifts; the modeling reflects the computer language of 0 and 1 through the contrast between simple columns and glass; and the appropriate ecological technology of green roofs is used on several decks.

  15. Carbon Dioxide Information Analysis Center and World Data Center for Atmospheric Trace Gases Fiscal Year 1999 Annual Report

    Energy Technology Data Exchange (ETDEWEB)

    Cushman, R.M.

    2000-03-31

    The Carbon Dioxide Information Analysis Center (CDIAC), which includes the World Data Center (WDC) for Atmospheric Trace Gases, is the primary global-change data and information analysis center of the Department of Energy (DOE). More than just an archive of data sets and publications, CDIAC has--since its inception in 1982--enhanced the value of its holdings through intensive quality assurance, documentation, and integration. Whereas many traditional data centers are discipline-based (for example, meteorology or oceanography), CDIAC's scope includes potentially anything and everything that would be of value to users concerned with the greenhouse effect and global climate change, including concentrations of carbon dioxide (CO{sub 2}) and other radiatively active gases in the atmosphere; the role of the terrestrial biosphere and the oceans in the biogeochemical cycles of greenhouse gases; emissions of CO{sub 2} and other trace gases to the atmosphere; long-term climate trends; the effects of elevated CO{sub 2} on vegetation; and the vulnerability of coastal areas to rising sea level. CDIAC is located within the Environmental Sciences Division (ESD) at Oak Ridge National Laboratory (ORNL) in Oak Ridge, Tennessee. CDIAC is co-located with ESD researchers investigating global-change topics, such as the global carbon cycle and the effects of carbon dioxide on vegetation. CDIAC staff are also connected with current ORNL research on related topics, such as renewable energy and supercomputing technologies. CDIAC is supported by the Environmental Sciences Division (Jerry Elwood, Acting Director) of DOE's Office of Biological and Environmental Research. CDIAC's FY 1999 budget was 2.2M dollars. CDIAC represents the DOE in the multi-agency Global Change Data and Information System. Bobbi Parra, and Wanda Ferrell on an interim basis, is DOE's Program Manager with responsibility for CDIAC. CDIAC comprises three groups, Global Change Data, Computer Systems, and

  16. Carbon Dioxide Information Analysis Center and World Data Center for Atmospheric Trace Gases Fiscal Year 2001 Annual Report

    Energy Technology Data Exchange (ETDEWEB)

    Cushman, R.M.

    2002-10-15

    The Carbon Dioxide Information Analysis Center (CDIAC), which includes the World Data Center (WDC) for Atmospheric Trace Gases, is the primary global change data and information analysis center of the U.S. Department of Energy (DOE). More than just an archive of data sets and publications, CDIAC has, since its inception in 1982, enhanced the value of its holdings through intensive quality assurance, documentation, and integration. Whereas many traditional data centers are discipline-based (for example, meteorology or oceanography), CDIAC's scope includes potentially anything and everything that would be of value to users concerned with the greenhouse effect and global climate change, including concentrations of carbon dioxide (CO{sub 2}) and other radiatively active gases in the atmosphere; the role of the terrestrial biosphere and the oceans in the biogeochemical cycles of greenhouse gases; emissions of CO{sub 2} and other trace gases to the atmosphere; long-term climate trends; the effects of elevated CO{sub 2} on vegetation; and the vulnerability of coastal areas to rising sea levels. CDIAC is located within the Environmental Sciences Division (ESD) at Oak Ridge National Laboratory (ORNL) in Oak Ridge, Tennessee. CDIAC is co-located with ESD researchers investigating global-change topics, such as the global carbon cycle and the effects of carbon dioxide on climate and vegetation. CDIAC staff are also connected with current ORNL research on related topics, such as renewable energy and supercomputing technologies. CDIAC is supported by the Environmental Sciences Division (Jerry Elwood, Director) of DOE's Office of Biological and Environmental Research. CDIAC represents DOE in the multi-agency Global Change Data and Information System (GCDIS). Wanda Ferrell is DOE's Program Manager with overall responsibility for CDIAC. Roger Dahlman is responsible for CDIAC's AmeriFlux tasks, and Anna Palmisano for CDIAC's Ocean Data tasks. CDIAC is made

  17. Satellite medical centers project

    Science.gov (United States)

    Aggarwal, Arvind

    2002-08-01

    World-class health care for the common man at low, affordable cost: anywhere, anytime. The project envisages setting up a national network of satellite medical centers (SMCs). Each SMC would be staffed by doctors, nurses and technicians; six doctors, six nurses and six technicians would be required to provide 24-hour cover, and each SMC would operate 24 hours a day, 7 days a week. It would be equipped with digital telemedicine devices for capturing clinical patient information and investigations in the form of voice, images and data, creating an audiovisual text file: a virtual digital patient. Through broadband connectivity the virtual patient can be sent to the central hub, staffed by specialists; several specialists sitting together can view the virtual patient and provide a specialized opinion. They can see the virtual patient, follow the examination online through video conference or even PCs, talk to the patient and the doctor at the SMC, and control the capturing of information during examination and investigation of the patient at the SMC, thus creating a virtual digital consultant at the SMC. The central hub shall be connected to doctors and consultants in remote locations or tertiary care hospitals anywhere in the world, thus creating a virtual hub. The hierarchical system shall provide upgrading of knowledge to the doctors in the central hub and the SMCs, and thus continued medical education, and benefit patients through world-class treatment in the SMC located at their doorstep. SMCs shall be set up by franchisees, who shall get a safe business opportunity with high returns; patients shall get low-cost, user-friendly, world-class health care anywhere, anytime; and doctors can get better, meaningful self-employment with better earnings and flexibility of working time and place. The SMC shall provide a wide variety of services, from primary care to world-class global consultation for difficult patients.

  18. PTSD: National Center for PTSD

    Medline Plus

    Full Text Available ...

  19. Center for Rehabilitation Sciences Research

    Data.gov (United States)

    Federal Laboratory Consortium — The Center for Rehabilitation Sciences Research (CRSR) was established as a research organization to promote successful return to duty and community reintegration of...

  20. PTSD: National Center for PTSD

    Medline Plus

    Full Text Available ...

  1. Center for Neuroscience & Regenerative Medicine

    Data.gov (United States)

    Federal Laboratory Consortium — The Center for Neuroscience and Regenerative Medicine (CNRM) was established as a collaborative intramural federal program involving the U.S. Department of Defense...

  2. Energetics Manufacturing Technology Center (EMTC)

    Data.gov (United States)

    Federal Laboratory Consortium — The Energetics Manufacturing Technology Center (EMTC), established in 1994 by the Office of Naval Research (ONR) Manufacturing Technology (ManTech) Program, is Navy...

  3. PTSD: National Center for PTSD

    Medline Plus

    Full Text Available ...

  4. Center for Environmental Health Sciences

    Data.gov (United States)

    Federal Laboratory Consortium — The primary research objective of the Center for Environmental Health Sciences (CEHS) at the University of Montana is to advance knowledge of environmental impacts...

  5. Consolidated Copayment Processing Center (CCPC)

    Data.gov (United States)

    Department of Veterans Affairs — The Consolidated Copayment Processing Center (CCPC) database contains Veteran patient contact and billing information in order to support the printing and mailing of...

  6. PTSD: National Center for PTSD

    Medline Plus

    Full Text Available ...

  7. The Center for Momentum Transport and Flow Organization in Plasmas - Final Scientific Report

    Energy Technology Data Exchange (ETDEWEB)

    Munsat, Tobin [Univ. of Colorado, Boulder, CO (United States)

    2015-12-14

    Overview of University of Colorado Efforts: The University of Colorado group has focused on two primary fronts during the grant period: development of a variety of multi-point diagnostic and/or imaging analysis techniques, and momentum-transport related experiments on a variety of devices (NSTX at PPPL, CSDX at UCSD, LAPD at UCLA, DIII-D at GA). Experimental work has taken advantage of several diagnostic instruments, including fast-framing cameras for imaging of electron density fluctuations (either directly or using injected gas puffs), ECEI for imaging of electron temperature fluctuations, and multi-tipped Langmuir and magnetic probes for corroborating measurements of Reynolds and Maxwell stresses. Mode Characterization in CSDX: We have performed a series of experiments at the CSDX linear device at UCSD, in collaboration with Center PI G. Tynan's group. The experiments included a detailed study of velocity estimation techniques, including direct comparisons between Langmuir probes and image-based velocimetry from fast-framing camera data. We used the camera data in a second set of studies to identify the spatial and spectral structure of coherent modes, which illuminates wave behavior to a level of detail previously unavailable, and enables direct comparison of dispersion curves to theoretical estimates. In another CSDX study, similar techniques were used to demonstrate a controlled transition from nonlinearly coupled discrete eigenmodes to fully developed broadband turbulence. The axial magnetic field was varied from 40-240 mT, which drove the transition. At low magnetic fields, the plasma is dominated by drift waves. As the magnetic field is increased, a strong potential gradient at the edge introduces an ExB shear-driven instability. At the transition, another mode with signatures of a rotation-induced Rayleigh–Taylor instability appears at the central plasma region. Concurrently, large axial velocities were found in the plasma core. For larger magnetic

  9. Clean Energy Application Center

    Energy Technology Data Exchange (ETDEWEB)

    Freihaut, Jim

    2013-09-30

    The Mid Atlantic Clean Energy Application Center (MACEAC), managed by the Penn State College of Engineering, serves the six states in the Mid-Atlantic region (Pennsylvania, New Jersey, Delaware, Maryland, Virginia and West Virginia) plus the District of Columbia. The goals of the Mid-Atlantic CEAC are to promote the adoption of Combined Heat and Power (CHP), Waste Heat Recovery (WHR) and District Energy Systems (DES) in the Mid-Atlantic area through education and technical support to more than 1,200 regional industry and government representatives. The successful promotion of these technologies by the MACEAC was accomplished through the following efforts: (1) The MACEAC developed a series of technology transfer networks with State energy and environmental offices, Association of Energy Engineers local chapters, local community development organizations, utilities, and Penn State Department of Architectural Engineering alumni and their firms to effectively educate local practitioners about the energy utilization, environmental and economic advantages of CHP, WHR and DES; (2) the MACEAC completed assessments of the regional technical and market potential for CHP, WHR and DE technology application in the context of state-specific energy prices and state energy and efficiency portfolio development. The studies were completed for Pennsylvania, New Jersey and Maryland and included a set of incentive adoption probability models used as a guide during implementation discussions with State energy policy makers; (3) using the technical and market assessments and adoption incentive models, the Mid Atlantic CEAC developed regional strategic action plans for the promotion of CHP application technology for Pennsylvania, New Jersey and Maryland; (4) the CHP market assessment and incentive adoption model information was discussed, on a continuing basis, with relevant state agencies, policy makers and Public Utility Commission organizations, resulting in CHP-favorable incentive

  10. Techbelt Energy Innovation Center

    Energy Technology Data Exchange (ETDEWEB)

    Marie, Hazel [Youngstown State Univ., OH (United States); Nestic, Dave [TechBelt Energy Innovation Center, Warren, OH (United States); Hripko, Michael [Youngstown State Univ., OH (United States); Abraham, Martin [Youngstown State Univ., OH (United States)

    2017-06-30

    This project consisted of three main components: 1) The primary goal of the project was to renovate and upgrade an existing commercial building to the highest possible environmentally sustainable level for the purpose of creating an energy incubator. This initiative was part of the Infrastructure Technologies Program, through which a sustainable energy demonstration facility was to be created and used as a research and community outreach base for sustainable energy product and process incubation; 2) In addition, fundamental energy-related research on wind energy was performed, a shrouded wind turbine on the Youngstown State University campus was commissioned, and educational initiatives were implemented; and 3) The project also included an education and outreach component to inform and educate the public in sustainable energy production and career opportunities. Youngstown State University and the Tech Belt Energy Innovation Center (TBEIC) renovated a 37,000 square foot urban building which is now being used as a research and development hub for the region’s energy technology innovation industry. The building houses basic research facilities and business development in an incubator format. In addition, the TBEIC performs community outreach and education initiatives in advanced and sustainable energy. The building is linked to a back warehouse which will eventually be used as a build-out for energy laboratory facilities. The project’s research component investigated shrouded wind turbines, and specifically the “Windcube”, which was renamed the “Wind Sphere” during the course of the project. There was a specific focus on the development of the theory of shrouded wind turbines. The goal of this work was to increase the potential efficiency of wind turbines by improving the lift and drag characteristics. The work included computational modeling, scale models, and full-sized design and construction of a test turbine. The full-sized turbine was built on the YSU

  11. NASA New England Outreach Center

    Science.gov (United States)

    2002-01-01

    The NASA New England Outreach Center in Nashua, New Hampshire was established to serve as a catalyst for heightening regional business awareness of NASA procurement, technology and commercialization opportunities. Emphasis is placed on small business participation, with the highest priority given to small disadvantaged businesses, women-owned businesses, HUBZone businesses, service disabled veteran owned businesses, and historically black colleges and universities and minority institutions. The Center assists firms and organizations to understand NASA requirements and to develop strategies to capture NASA related procurement and technology opportunities. The establishment of the NASA Outreach Center serves to stimulate business in a historically underserved area. NASA direct business awards have traditionally been highly present in the West, Midwest, South, and Southeast areas of the United States. The Center guides and assists businesses and organizations in the northeast to target opportunities within NASA and its prime contractors and capture business and technology opportunities. The Center employs an array of technology access, one-on-one meetings, seminars, site visits, and targeted conferences to acquaint Northeast firms and organizations with representatives from NASA and its prime contractors to learn about and discuss opportunities to do business and access the inventory of NASA technology. This stimulus of interaction also provides firms and organizations the opportunity to propose the use of their developed technology and ideas for current and future requirements at NASA. The Center provides a complement to the NASA Northeast Regional Technology Transfer Center in developing prospects for commercialization of NASA technology. In addition, the Center responds to local requests for assistance and NASA material and documents, and is available to address immediate concerns and needs in assessing opportunities, timely support to interact with NASA Centers on

  12. Acoustic Center or Time Origin?

    DEFF Research Database (Denmark)

    Staffeldt, Henrik

    1999-01-01

    The paper discusses the acoustic center in relation to measurements of loudspeaker polar data. Also, it presents the related concept time origin and discusses the deviation that appears between positions of the acoustic center found by wavefront based and time based measuring methods....

  13. User-Centered Design Gymkhana

    OpenAIRE

    Garreta Domingo, Muriel; Almirall Hill, Magí; Mor Pera, Enric

    2007-01-01

    The User-centered design (UCD) Gymkhana is a tool for human-computer interaction practitioners to demonstrate through a game the key user-centered design methods and how they interrelate in the design process. The target audiences are other organizational departments unfamiliar with UCD but whose work is related to the definition, creation, and update of a product or service.

  14. Learning Centers: Development and Operation.

    Science.gov (United States)

    Bennie, Frances

    There has been in recent years a growing acceptance of individualized learning concepts. Learning Centers have come to be viewed as an economical and viable strategy for accommodating diverse learning styles and needs. This book provides the educator with an understanding of the learning center concept, its origins, present manifestations, and…

  15. Stennis Space Center Virtual Tour

    Science.gov (United States)

    2009-01-01

    Have you ever wanted to visit Stennis Space Center? Or perhaps you have and you're ready to come back. Either way, you can visit Stennis Space Center from anywhere in the world! Click on the video to begin your tour.

  16. PTSD: National Center for PTSD

    Medline Plus

    Full Text Available ... Section Home PTSD Overview PTSD Basics Return from War Specific to Women Types of Trauma War Terrorism Violence and Abuse Disasters Is it PTSD? ... Combat Veterans & their Families Readjustment Counseling (Vet Centers) War Related Illness & Injury Study Center Homeless Veterans Returning ...

  17. The generation of germinal centers

    NARCIS (Netherlands)

    Kroese, Fransciscus Gerardus Maria

    1987-01-01

    Germinal centers are clusters of B lymphoblastoid cells that develop after antigenic stimulation in follicles of peripheral lymphoid organs. These structures are thought to play a major role in the generation of B memory cells. This thesis deals with several aspects of these germinal centers.

  18. PTSD: National Center for PTSD

    Medline Plus

    Full Text Available ... Section Home PTSD Overview PTSD Basics Return from War Specific to Women Types of Trauma War Terrorism Violence and Abuse Disasters Is it PTSD? ... Combat Veterans & their Families Readjustment Counseling (Vet Centers) War Related Illness & Injury Study Center Homeless Veterans Returning ...

  19. Launch Vehicle Control Center Architectures

    Science.gov (United States)

    Watson, Michael D.; Epps, Amy; Woodruff, Van; Vachon, Michael Jacob; Monreal, Julio; Levesque, Marl; Williams, Randall; Mclaughlin, Tom

    2014-01-01

    Launch vehicles within the international community vary greatly in their configuration and processing. Each launch site has a unique processing flow based on the specific launch vehicle configuration. Launch and flight operations are managed through a set of control centers associated with each launch site. Each launch site has a control center for launch operations; however flight operations support varies from being co-located with the launch site to being shared with the space vehicle control center. There is also a nuance of some having an engineering support center which may be co-located with either the launch or flight control center, or in a separate geographical location altogether. A survey of control center architectures is presented for various launch vehicles including the NASA Space Launch System (SLS), United Launch Alliance (ULA) Atlas V and Delta IV, and the European Space Agency (ESA) Ariane 5. Each of these control center architectures shares some similarities in basic structure while differences in functional distribution also exist. The driving functions which lead to these factors are considered and a model of control center architectures is proposed which supports these commonalities and variations.

  20. Allegheny County Kane Regional Center Census

    Data.gov (United States)

    Allegheny County / City of Pittsburgh / Western PA Regional Data Center — Total number of residents in each Kane Regional Center facility by race and gender. The Kane Regional Centers are skilled nursing and rehabilitation centers run by...

  1. Stennis Visitors Center and Administrative Complex

    Science.gov (United States)

    1990-01-01

    This aerial view shows the John C. Stennis Space Center Visitors Center and main Administrative complex. The Stennis Space Center in Hancock County, Mississippi is NASA's lead center for rocket propulsion testing and for commercial remote sensing.

  2. Development of an ICT Center Management Model (Pengembangan Model Manajemen ICT Center)

    Directory of Open Access Journals (Sweden)

    Hakkun Elmunsyah

    2013-01-01

    Full Text Available Abstract: Development of an ICT Center Management Model. The Ministry of National Education has made a substantial investment in building a national education computer network, called the National Education Network (Jardiknas), at vocational high schools (SMK) throughout Indonesia, known as ICT centers. This study aims to find an ICT center management model suited to the characteristics of SMK so that it can contribute to the quality of those schools. The study is a Research and Development effort following the approach developed by Borg and Gall. Overall, effectiveness trials of management performance on both a limited and a wider scale show that the ICT center management model meets the criteria of being highly effective. Keywords: Jardiknas, SMK, ICT center management model, quality contribution

  3. Danish Chinese Center for Nanometals

    DEFF Research Database (Denmark)

    Winther, Grethe

    The Danish-Chinese Center for Nanometals is funded by the Danish National Research Foundation and the National Natural Science Foundation of China. The Chinese partners in the Center are the Institute of Metal Research in Shenyang, Tsinghua University and Chongqing University. The Danish part of the Center is located at Risø DTU. The Center investigates metals, including light metals and steels, with internal length scales ranging from a few nanometers to a few micrometers. The structural morphologies studied are highly diverse, including structures composed of grain boundaries, twins and dislocation boundaries, and spatial arrangements spanning from equiaxed to lamellar structures. The scientific focus is to understand and control the mechanisms and parameters determining the mechanical and physical properties of such metals as well as their thermal stability. An overview of the Center ...

  4. Information Sciences: Information Centers and Special Libraries.

    Science.gov (United States)

    BIBLIOGRAPHIES, *LIBRARIES, *TECHNICAL INFORMATION CENTERS, DATA PROCESSING, AUTOMATION, INFORMATION SYSTEMS, DOCUMENTS, INFORMATION CENTERS, INFORMATION RETRIEVAL, SUBJECT INDEXING, INFORMATION SCIENCES .

  5. Exploration of solar photospheric magnetic field data sets using the UCSD tomography

    Science.gov (United States)

    Jackson, B. V.; Yu, H.-S.; Buffington, A.; Hick, P. P.; Nishimura, N.; Nozaki, N.; Tokumaru, M.; Fujiki, K.; Hayashi, K.

    2016-12-01

    This article investigates the use of two different types of National Solar Observatory magnetograms and two different coronal field modeling techniques over 10 years. Both the "open-field" Current Sheet Source Surface (CSSS) and a "closed-field" technique using CSSS modeling are compared. The University of California, San Diego, tomographic modeling, using interplanetary scintillation data from Japan, provides the global velocities to extrapolate these fields outward, which are then compared with fields measured in situ near Earth. Although the open-field technique generally gives a better result for radial and tangential fields, we find that a portion of the closed extrapolated fields measured in situ near Earth comes from the direct outward mapping of these fields in the low solar corona. All three closed-field components are nonzero at 1 AU and are compared with the appropriate magnetometer values. A significant positive correlation exists between these closed-field components and the in situ measurements over the last 10 years. We determine that a small fraction of the static low-coronal component flux, which includes the Bn (north-south) component, regularly escapes from closed-field regions. The closed-field flux fraction varies by about a factor of 3 from a mean value during this period, relative to the magnitude of the field components measured in situ near Earth, and maximizes in 2014. This implies that a relatively more efficient process for closed-flux escape occurs near solar maximum. We also compare and find that the popular Potential Field Source Surface and CSSS model closed fields are nearly identical in sign and strength.
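    The comparison reported above comes down to correlating a modeled field-component time series with the corresponding in situ measurement. The sketch below is a minimal illustration of that step only, assuming synthetic placeholder arrays rather than the actual tomography output or spacecraft magnetometer data.

```python
# Illustrative sketch only: the time series below are synthetic placeholders,
# not tomography output or spacecraft magnetometer data.
import numpy as np

rng = np.random.default_rng(0)
n_days = 365
bn_in_situ = rng.normal(0.0, 2.0, n_days)                     # "observed" Bn component (nT), synthetic
bn_modeled = 0.3 * bn_in_situ + rng.normal(0.0, 2.0, n_days)  # weak positive relation, synthetic

# Pearson correlation between the modeled and measured component
r = np.corrcoef(bn_modeled, bn_in_situ)[0, 1]
print(f"correlation between modeled and in situ Bn: {r:+.2f}")
```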

  6. TU-CD-BRD-03: UCSD Experience, with Focus On Implementing Change

    Energy Technology Data Exchange (ETDEWEB)

    Brown, D. [University of California, San Diego (United States)

    2015-06-15

    It has long been standard practice in radiation oncology to report internally when a patient’s treatment has not gone as planned and to report events to regulatory agencies when legally required. Most potential errors are caught early and never affect the patient. Quality assurance steps routinely prevent errors from reaching the patient, and these “near misses” are much more frequent than treatment errors. A growing number of radiation oncology facilities have implemented incident learning systems to report and analyze both errors and near misses. Using the term “incident learning” instead of “event reporting” emphasizes the need to use these experiences to change the practice and make future errors less likely and promote an educational, non-punitive environment. There are challenges in making such a system practical and effective. Speakers from institutions of different sizes and practice environments will share their experiences on how to make such a system work and what benefits their clinics have accrued. Questions that will be addressed include: how to create a system that is easy for front-line staff to access; how to motivate staff to report; how to promote the system as positive and educational and not punitive or demeaning; how to organize the team for reviewing and responding to reports; how to prioritize which reports to discuss in depth, and how not to dismiss the rest; how to identify underlying causes; how to design corrective actions and implement change; how to develop useful statistics and analysis tools; how to coordinate a departmental system with a larger risk management system; and how to do this without a dedicated quality manager. Some speakers’ experience is with in-house systems and some will share experience with the AAPM/ASTRO national Radiation Oncology Incident Learning System (RO-ILS). Reports intended to be of value nationally need to be comprehensible to outsiders; examples of useful reports will be shown. There will be ample time set aside for audience members to contribute to the discussion. Learning Objectives: (1) Learn how to promote the use of an incident learning system in a clinic. (2) Learn how to convert “event reporting” into “incident learning”. (3) See examples of practice changes that have come out of learning systems. (4) Learn how the RO-ILS system can be used as a primary internal learning system. (5) Learn how to create succinct, meaningful reports useful to outside readers. Gary Ezzell chairs the AAPM committee overseeing RO-ILS and has received an honorarium from ASTRO for working on the committee reviewing RO-ILS reports. Derek Brown is a director of http://TreatSafely.org. Brett Miller has previously received travel expenses and an honorarium from Varian. Phillip Beron has nothing to report.

  7. Emergency Operations Center ribbon cutting

    Science.gov (United States)

    2009-01-01

    Center Director Gene Goldman and special guests celebrate the opening of the site's new Emergency Operations Center on June 2. Participants included (left to right): Steven Cooper, deputy director of the National Weather Service Southern Region; Tom Luedtke, NASA associate administrator for institutions and management; Charles Scales, NASA associate deputy administrator; Mississippi Gov. Haley Barbour; Gene Goldman, director of Stennis Space Center; Jack Forsythe, NASA assistant administrator for the Office of Security and Program Protection; Dr. Richard Williams, NASA chief health and medical officer; and Weldon Starks, president of Starks Contracting Company Inc. of Biloxi.

  8. Multifunctional centers in rural areas

    DEFF Research Database (Denmark)

    Svendsen, Gunnar Lind Haase

    2009-01-01

    invest in multifunctional centers in which the local public school is the dynamo. This in order to increase local levels of social as well as human capital. Ideally, such centers should contain both public services such as school, library and health care, private enterprises as hairdressers and banks......, and facilities for local associations as theatre scenes and sports halls. The centers should be designed to secure both economies of scale and geographic proximity. Empirical evidence indicates that such large meeting places in fact foster physical and social cohesion, as well as human capital and informal...

  9. Computational Fluid Dynamics: Algorithms and Supercomputers

    Science.gov (United States)

    1988-03-01

    ... became an issue. Hanon Potash, the SCS architect, has often claimed that the key to designing a vector machine is to "super-impose" a scalar design ... of Thompson ([123], Chapter 6.8) is given in the next chapter. 5.4 ITERATIVE ALGORITHMS: In order to illustrate restructuring of iterative methods ... Development of grid generation using Laplace's and Poisson's equations has been done by Thompson (1979) and his co-workers [123].
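    The grid-generation approach mentioned in these excerpts rests on solving elliptic equations iteratively. As a minimal illustration (not code from the report), the sketch below applies Jacobi iteration to Laplace's equation on a unit square; the grid size, boundary values, and tolerance are arbitrary example choices.

```python
# Illustrative sketch only: Jacobi iteration for Laplace's equation on a unit
# square. Grid size, boundary values, and tolerance are arbitrary choices.
import numpy as np


def solve_laplace(n=64, tol=1e-6, max_iter=20000):
    u = np.zeros((n, n))
    u[0, :] = 1.0  # one boundary held at 1, the others at 0 (assumed example)
    for _ in range(max_iter):
        u_new = u.copy()
        # average of the four neighbours at every interior point
        u_new[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                                    u[1:-1, :-2] + u[1:-1, 2:])
        if np.max(np.abs(u_new - u)) < tol:
            return u_new
        u = u_new
    return u


field = solve_laplace()
print("max interior value:", field[1:-1, 1:-1].max())
```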

  10. The QCDOC supercomputer: hardware, software, and performance

    CERN Document Server

    Boyle, P A; Wettig, T

    2003-01-01

    An overview is given of the QCDOC architecture, a massively parallel and highly scalable computer optimized for lattice QCD using system-on-a-chip technology. The heart of a single node is the PowerPC-based QCDOC ASIC, developed in collaboration with IBM Research, with a peak speed of 1 GFlop/s. The nodes communicate via high-speed serial links in a 6-dimensional mesh with nearest-neighbor connections. We find that highly optimized four-dimensional QCD code obtains over 50% efficiency in cycle accurate simulations of QCDOC, even for problems of fixed computational difficulty run on tens of thousands of nodes. We also provide an overview of the QCDOC operating system, which manages and runs QCDOC applications on partitions of variable dimensionality. Finally, the SciDAC activity for QCDOC and the message-passing interface QMP specified as a part of the SciDAC effort are discussed for QCDOC. We explain how to make optimal use of QMP routines on QCDOC in conjunction with existing C and C++ lattice QCD codes, inc...
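    The nearest-neighbor mesh communication described in the abstract can be pictured with a toy rank-mapping sketch. This is not the QMP API or QCDOC system software; it only shows, for assumed example mesh dimensions, how each node on a periodic 6-dimensional mesh has a forward and a backward neighbor in every dimension.

```python
# Illustrative sketch only -- not the QMP API. Mesh dimensions are example values.
import numpy as np

DIMS = (4, 4, 4, 4, 4, 4)  # example 6-D machine partition


def rank_of(coords):
    """Lexicographic rank of a node from its mesh coordinates."""
    return int(np.ravel_multi_index(coords, DIMS))


def neighbours(coords):
    """Forward/backward neighbour rank in each dimension, with periodic wraparound."""
    out = {}
    for d, extent in enumerate(DIMS):
        for step in (+1, -1):
            c = list(coords)
            c[d] = (c[d] + step) % extent
            out[(d, step)] = rank_of(c)
    return out


# the corner node still has 12 distinct neighbours thanks to the wraparound
print(neighbours((0, 0, 0, 0, 0, 0)))
```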

  11. Supercomputer modeling of volcanic eruption dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Kieffer, S.W. [Arizona State Univ., Tempe, AZ (United States); Valentine, G.A. [Los Alamos National Lab., NM (United States); Woo, Mahn-Ling [Arizona State Univ., Tempe, AZ (United States)

    1995-06-01

    Our specific goals are to: (1) provide a set of models based on well-defined assumptions about initial and boundary conditions to constrain interpretations of observations of active volcanic eruptions--including movies of flow front velocities, satellite observations of temperature in plumes vs. time, and still photographs of the dimensions of erupting plumes and flows on Earth and other planets; (2) examine the influence of subsurface conditions on exit plane conditions and plume characteristics, and compare the models of subsurface fluid flow with seismic constraints where possible; (3) relate equations-of-state for magma-gas mixtures to flow dynamics; (4) examine, in some detail, the interaction of the flowing fluid with the conduit walls and ground topography through boundary layer theory so that field observations of erosion and deposition can be related to fluid processes; (5) test the applicability of existing two-phase flow codes for problems related to the generation of volcanic long-period seismic signals; and (6) extend our understanding and simulation capability to problems associated with emplacement of fragmental ejecta from large meteorite impacts.

  12. Using Supercomputers to Probe the Early Universe

    Energy Technology Data Exchange (ETDEWEB)

    Giorgi, Elena Edi [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-05-17

    For decades physicists have been trying to decipher the first moments after the Big Bang. Using very large telescopes, for example, scientists scan the skies and look at how fast galaxies move. Satellites study the relic radiation left from the Big Bang, called the cosmic microwave background radiation. And finally, particle colliders, like the Large Hadron Collider at CERN, allow researchers to smash protons together and analyze the debris left behind by such collisions. Physicists at Los Alamos National Laboratory, however, are taking a different approach: they are using computers. In collaboration with colleagues at University of California San Diego, the Los Alamos researchers developed a computer code, called BURST, that can simulate conditions during the first few minutes of cosmological evolution.

  13. LAPACK: Linear algebra software for supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Bischof, C.H.

    1991-01-01

    This paper presents an overview of the LAPACK library, a portable, public-domain library to solve the most common linear algebra problems. This library provides a uniformly designed set of subroutines for solving systems of simultaneous linear equations, least-squares problems, and eigenvalue problems for dense and banded matrices. We elaborate on the design methodologies incorporated to make the LAPACK codes efficient on today's high-performance architectures. In particular, we discuss the use of block algorithms and the reliance on the Basic Linear Algebra Subprograms. We present performance results that show the suitability of the LAPACK approach for vector uniprocessors and shared-memory multiprocessors. We also address some issues that have to be dealt with in tuning LAPACK for specific architectures. Lastly, we present results that show that the LAPACK software can be adapted with little effort to distributed-memory environments, and we discuss future efforts resulting from this project. 31 refs., 10 figs., 2 tabs.
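
    As a brief illustration of the three problem classes the library covers, the sketch below drives LAPACK-backed routines through SciPy's high-level wrappers rather than the Fortran interfaces themselves (the matrix sizes and random test data are arbitrary):

```python
import numpy as np
from scipy import linalg

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 200))
b = rng.standard_normal(200)

# Linear system A x = b via LU factorization (LAPACK factor/solve drivers underneath).
lu, piv = linalg.lu_factor(A)
x = linalg.lu_solve((lu, piv), b)

# Overdetermined least-squares problem: minimize ||C y - d||_2.
C = rng.standard_normal((300, 50))
d = rng.standard_normal(300)
y, residues, rank, sv = linalg.lstsq(C, d)

# Eigenvalues of a symmetric matrix (LAPACK symmetric eigensolvers).
S = A + A.T
w = linalg.eigvalsh(S)

print(np.allclose(A @ x, b), rank, w[:3])
```

    The block algorithms and BLAS reliance described in the abstract sit underneath these calls; the Python layer simply forwards to the dense drivers.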

  14. Foundry provides the network backbone for supercomputing

    CERN Multimedia

    2003-01-01

    Some of the results from the fourth annual High-Performance Bandwidth Challenge, held in conjunction with SC2003, the international conference on high-performance computing and networking which occurred last week in Phoenix, AZ (1/2 page).

  15. Supercomputers and biological sequence comparison algorithms.

    Science.gov (United States)

    Core, N G; Edmiston, E W; Saltz, J H; Smith, R M

    1989-12-01

    Comparison of biological (DNA or protein) sequences provides insight into molecular structure, function, and homology and is increasingly important as the available databases become larger and more numerous. One method of increasing the speed of the calculations is to perform them in parallel. We present the results of initial investigations using two dynamic programming algorithms on the Intel iPSC hypercube and the Connection Machine as well as an inexpensive, heuristically-based algorithm on the Encore Multimax.
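
    For readers unfamiliar with the dynamic programming referred to above, the following is a minimal serial sketch of the recurrence behind global alignment (Needleman-Wunsch style); the scoring values are arbitrary, and the parallel hypercube and Connection Machine mappings studied in the paper are not reproduced here:

```python
def alignment_score(a, b, match=1, mismatch=-1, gap=-2):
    """Return the optimal global alignment score of sequences a and b."""
    rows, cols = len(a) + 1, len(b) + 1
    # score[i][j] = best score aligning the prefixes a[:i] and b[:j]
    score = [[0] * cols for _ in range(rows)]
    for i in range(1, rows):
        score[i][0] = i * gap          # align a[:i] against an empty prefix
    for j in range(1, cols):
        score[0][j] = j * gap          # align b[:j] against an empty prefix
    for i in range(1, rows):
        for j in range(1, cols):
            diag = score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            score[i][j] = max(diag,
                              score[i - 1][j] + gap,   # gap in b
                              score[i][j - 1] + gap)   # gap in a
    return score[-1][-1]

print(alignment_score("GATTACA", "GCATGCA"))
```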

  16. From CERN, a data flow averaging 600 megabytes per second for ten consecutive days

    CERN Multimedia

    2005-01-01

    The supercomputing Grid successfully met its first technological challenge: eight supercomputing centers sustained a continuous flow of data over the Internet from CERN in Geneva and directed it to seven centers in Europe and the United States.

  17. Grid will help physicists' global hunt for particles. Researchers have begun running experiments with the MidWest Tier 2 Center, one of five regional computing centers in the US.

    CERN Multimedia

    Ames, Ben

    2006-01-01

    "When physicists at Switzerland's CERN laboratory turn on their newsest particle collider in 2007, they will rely on computer scientists in Chicago and Indianapolis to help sift through the results using a worldwide supercomputing grid." (1/2 page)

  18. Records Center Program Billing System

    Data.gov (United States)

    National Archives and Records Administration — RCPBS supports the Records Center Program (RCP) in producing invoices for the storage (NARS-5) and servicing of National Archives and Records Administration's...

  19. Center for Beam Physics, 1993

    Energy Technology Data Exchange (ETDEWEB)

    1994-05-01

    The Center for Beam Physics is a multi-disciplinary research and development unit in the Accelerator and Fusion Research Division at Lawrence Berkeley Laboratory. At the heart of the Center's mission is the fundamental quest for mechanisms of acceleration, radiation and focusing of energy. Dedicated to exploring the frontiers of the physics of (and with) particle and photon beams, its primary mission is to promote the science and technology of the production, manipulation, storage and control systems of charged particles and photons. The Center serves this mission via conceptual studies, theoretical and experimental research, design and development, institutional project involvement, external collaborations, association with industry and technology transfer. This roster provides a glimpse at the scientists, engineers, technical support, students, and administrative staff that make up this team and a flavor of their multifaceted activities during 1993.

  20. Clean Energy Solutions Center Services

    Energy Technology Data Exchange (ETDEWEB)

    2016-03-01

    The Solutions Center offers no-cost expert policy assistance, webinars and training forums, clean energy policy reports, data, and tools provided in partnership with more than 35 leading international and regional clean energy organizations.