WorldWideScience

Sample records for research supercomputer center

  1. Computational fluid dynamics research at the United Technologies Research Center requiring supercomputers

    Science.gov (United States)

    Landgrebe, Anton J.

    1987-01-01

    An overview of research activities at the United Technologies Research Center (UTRC) in the area of Computational Fluid Dynamics (CFD) is presented. The requirement for and use of various levels of computers, including supercomputers, for the CFD activities are described. Examples of CFD directed toward applications to helicopters, turbomachinery, heat exchangers, and the National Aerospace Plane are included. Helicopter rotor codes for the prediction of rotor and fuselage flow fields and airloads were developed with emphasis on rotor wake modeling. Airflow and airload predictions and comparisons with experimental data are presented. Examples are presented of recent parabolized Navier-Stokes and full Navier-Stokes solutions for hypersonic shock-wave/boundary-layer interaction, and hydrogen/air supersonic combustion. In addition, other examples of CFD efforts in turbomachinery Navier-Stokes methodology and separated flow modeling are presented. A brief discussion of the 3-tier scientific computing environment is also presented, in which the researcher has access to workstations, mid-size computers, and supercomputers.

  2. Dust modelling and forecasting in the Barcelona Supercomputing Center: Activities and developments

    Energy Technology Data Exchange (ETDEWEB)

    Perez, C; Baldasano, J M; Jimenez-Guerrero, P; Jorba, O; Haustein, K; Basart, S [Earth Sciences Department. Barcelona Supercomputing Center. Barcelona (Spain); Cuevas, E [Izanaa Atmospheric Research Center. Agencia Estatal de Meteorologia, Tenerife (Spain); Nickovic, S [Atmospheric Research and Environment Branch, World Meteorological Organization, Geneva (Switzerland)], E-mail: carlos.perez@bsc.es

    2009-03-01

    The Barcelona Supercomputing Center (BSC) is the National Supercomputer Facility in Spain, hosting MareNostrum, one of the most powerful supercomputers in Europe. The Earth Sciences Department of BSC operates daily regional dust and air quality forecasts and conducts intensive modelling research for short-term operational prediction. This contribution summarizes the latest developments and current activities in the field of sand and dust storm modelling and forecasting.

  3. Dust modelling and forecasting in the Barcelona Supercomputing Center: Activities and developments

    International Nuclear Information System (INIS)

    Perez, C; Baldasano, J M; Jimenez-Guerrero, P; Jorba, O; Haustein, K; Basart, S; Cuevas, E; Nickovic, S

    2009-01-01

    The Barcelona Supercomputing Center (BSC) is the National Supercomputer Facility in Spain, hosting MareNostrum, one of the most powerful supercomputers in Europe. The Earth Sciences Department of BSC operates daily regional dust and air quality forecasts and conducts intensive modelling research for short-term operational prediction. This contribution summarizes the latest developments and current activities in the field of sand and dust storm modelling and forecasting.

  4. Research center Juelich to install Germany's most powerful supercomputer new IBM System for science and research will achieve 5.8 trillion computations per second

    CERN Multimedia

    2002-01-01

    "The Research Center Juelich, Germany, and IBM today announced that they have signed a contract for the delivery and installation of a new IBM supercomputer at the Central Institute for Applied Mathematics" (1/2 page).

  5. Centralized supercomputer support for magnetic fusion energy research

    International Nuclear Information System (INIS)

    Fuss, D.; Tull, G.G.

    1984-01-01

    High-speed computers with large memories are vital to magnetic fusion energy research. Magnetohydrodynamic (MHD), transport, equilibrium, Vlasov, particle, and Fokker-Planck codes that model plasma behavior play an important role in designing experimental hardware and interpreting the resulting data, as well as in advancing plasma theory itself. The size, architecture, and software of supercomputers to run these codes are often the crucial constraints on the benefits such computational modeling can provide. Hence, vector computers such as the CRAY-1 offer a valuable research resource. To meet the computational needs of the fusion program, the National Magnetic Fusion Energy Computer Center (NMFECC) was established in 1974 at the Lawrence Livermore National Laboratory. Supercomputers at the central computing facility are linked to smaller computer centers at each of the major fusion laboratories by a satellite communication network. In addition to providing large-scale computing, the NMFECC environment stimulates collaboration and the sharing of computer codes and data among the many fusion researchers in a cost-effective manner

  6. KAUST Supercomputing Laboratory

    KAUST Repository

    Bailey, April Renee

    2011-11-15

    KAUST has partnered with IBM to establish a Supercomputing Research Center. KAUST is hosting the Shaheen supercomputer, named after the Arabian falcon famed for its swiftness of flight. This 16-rack IBM Blue Gene/P system is equipped with 4 gigabytes of memory per node and is capable of 222 teraflops, making the KAUST campus the site of one of the world's fastest supercomputers in an academic environment. KAUST is targeting petaflop capability within 3 years.

  7. KAUST Supercomputing Laboratory

    KAUST Repository

    Bailey, April Renee; Kaushik, Dinesh; Winfer, Andrew

    2011-01-01

    KAUST has partnered with IBM to establish a Supercomputing Research Center. KAUST is hosting the Shaheen supercomputer, named after the Arabian falcon famed for its swiftness of flight. This 16-rack IBM Blue Gene/P system is equipped with 4 gigabytes of memory per node and is capable of 222 teraflops, making the KAUST campus the site of one of the world's fastest supercomputers in an academic environment. KAUST is targeting petaflop capability within 3 years.

  8. Role of supercomputers in magnetic fusion and energy research programs

    International Nuclear Information System (INIS)

    Killeen, J.

    1985-06-01

    The importance of computer modeling in magnetic fusion (MFE) and energy research (ER) programs is discussed. The need for the most advanced supercomputers is described, and the role of the National Magnetic Fusion Energy Computer Center in meeting these needs is explained

  9. An assessment of worldwide supercomputer usage

    Energy Technology Data Exchange (ETDEWEB)

    Wasserman, H.J.; Simmons, M.L.; Hayes, A.H.

    1995-01-01

    This report provides a comparative study of advanced supercomputing usage in Japan and the United States as of Spring 1994. It is based on the findings of a group of US scientists whose careers have centered on programming, evaluating, and designing high-performance supercomputers for over ten years. The report is a follow-on to an assessment of supercomputing technology in Europe and Japan that was published in 1993. Whereas the previous study focused on supercomputer manufacturing capabilities, the primary focus of the current work was to compare where and how supercomputers are used. Research for this report was conducted through both literature studies and field research in Japan.

  10. A training program for scientific supercomputing users

    Energy Technology Data Exchange (ETDEWEB)

    Hanson, F.; Moher, T.; Sabelli, N.; Solem, A.

    1988-01-01

    There is a need for a mechanism to transfer supercomputing technology into the hands of scientists and engineers in such a way that they will acquire a foundation of knowledge that will permit integration of supercomputing as a tool in their research. Most computing center training emphasizes computer-specific information about how to use a particular computer system; most academic programs teach concepts to computer scientists. Only a few brief courses and new programs are designed for computational scientists. This paper describes an eleven-week training program aimed principally at graduate and postdoctoral students in computationally-intensive fields. The program is designed to balance the specificity of computing center courses, the abstractness of computer science courses, and the personal contact of traditional apprentice approaches. It is based on the experience of computer scientists and computational scientists, and consists of seminars and clinics given by many visiting and local faculty. It covers a variety of supercomputing concepts, issues, and practices related to architecture, operating systems, software design, numerical considerations, code optimization, graphics, communications, and networks. Its research component encourages understanding of scientific computing and supercomputer hardware issues. Flexibility in thinking about computing needs is emphasized by the use of several different supercomputer architectures, such as the Cray X-MP/48 at the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign, the IBM 3090 600E/VF at the Cornell National Supercomputer Facility, and the Alliant FX/8 at the Advanced Computing Research Facility at Argonne National Laboratory. 11 refs., 6 tabs.

  11. Supercomputer applications in nuclear research

    International Nuclear Information System (INIS)

    Ishiguro, Misako

    1992-01-01

    The utilization of supercomputers at the Japan Atomic Energy Research Institute is mainly reported. The fields of atomic energy research that use supercomputers frequently and the contents of their computations are outlined. Vectorization is briefly explained, and nuclear fusion, nuclear reactor physics, the thermal-hydraulic safety of nuclear reactors, the parallelism inherent in atomic energy computations of fluids and other phenomena, algorithms for vector processing, and the speedup achieved by vectorization are discussed. At present the Japan Atomic Energy Research Institute uses two FACOM VP 2600/10 systems and three M-780 systems. The contents of computation changed from criticality computations around 1970, through the analysis of LOCA after the TMI accident, to nuclear fusion research, the design of new types of reactors, and reactor safety assessment at present. The method of using computers also advanced from batch processing to time-sharing processing, from one-dimensional to three-dimensional computation, from steady, linear to unsteady, nonlinear computation, and from experimental analysis to numerical simulation. (K.I.)

  12. What is supercomputing ?

    International Nuclear Information System (INIS)

    Asai, Kiyoshi

    1992-01-01

    Supercomputing means high-speed computation using a supercomputer. Supercomputers and the technical term "supercomputing" have spread over the past ten years. The performances of the main computers installed so far at the Japan Atomic Energy Research Institute are compared. There are two methods to increase computing speed using existing circuit elements: the parallel processor system and the vector processor system. CRAY-1 was the first successful vector computer. Supercomputing technology was first applied to meteorological organizations in foreign countries, and to aviation and atomic energy research institutes in Japan. Supercomputing for atomic energy depends on the trend of technical development in atomic energy, and its contents are divided into increasing the computing speed of existing simulation calculations and accelerating new technical developments in atomic energy. Examples of supercomputing at the Japan Atomic Energy Research Institute are reported. (K.I.)

  13. Graphics supercomputer for computational fluid dynamics research

    Science.gov (United States)

    Liaw, Goang S.

    1994-11-01

    The objective of this project is to purchase a state-of-the-art graphics supercomputer to improve the Computational Fluid Dynamics (CFD) research capability at Alabama A & M University (AAMU) and to support the Air Force research projects. A cutting-edge graphics supercomputer system, Onyx VTX, from Silicon Graphics Computer Systems (SGI), was purchased and installed. Other equipment, including a desktop personal computer, a PC-486 DX2 with a built-in 10-BaseT Ethernet card, a 10-BaseT hub, an Apple Laser Printer Select 360, and a notebook computer from Zenith, was also purchased. A reading room has been converted to a research computer lab by adding some furniture and an air conditioning unit in order to provide an appropriate working environment for researchers and the purchased equipment. All the purchased equipment was successfully installed and is fully functional. Several research projects, including two existing Air Force projects, are being performed using these facilities.

  14. Supercomputing Centers and Electricity Service Providers

    DEFF Research Database (Denmark)

    Patki, Tapasya; Bates, Natalie; Ghatikar, Girish

    2016-01-01

    Supercomputing Centers (SCs) have high and variable power demands, which increase the challenges of the Electricity Service Providers (ESPs) with regards to efficient electricity distribution and reliable grid operation. High penetration of renewable energy generation further exacerbates this problem. In order to develop a symbiotic relationship between the SCs and their ESPs and to support effective power management at all levels, it is critical to understand and analyze how the existing relationships were formed and how these are expected to evolve. In this paper, we first present results from a detailed, quantitative survey-based analysis and compare the perspectives of the European grid and SCs to the ones of the United States (US). We then show that contrary to the expectation, SCs in the US are more open toward cooperating and developing demand-management strategies with their ESPs.

  15. Supercomputational science

    CERN Document Server

    Wilson, S

    1990-01-01

    In contemporary research, the supercomputer now ranks, along with radio telescopes, particle accelerators and the other apparatus of "big science", as an expensive resource, which is nevertheless essential for state-of-the-art research. Supercomputers are usually provided as shared central facilities. However, unlike telescopes and accelerators, they find a wide range of applications extending across a broad spectrum of research activity. The difference in performance between a "good" and a "bad" computer program on a traditional serial computer may be a factor of two or three, but on a contemporary supercomputer it can easily be a factor of one hundred or even more! Furthermore, this factor is likely to increase with future generations of machines. In keeping with the large capital and recurrent costs of these machines, it is appropriate to devote effort to training and familiarization so that supercomputers are employed to best effect. This volume records the lectures delivered at a Summer School ...

  16. Integration of Panda Workload Management System with supercomputers

    Science.gov (United States)

    De, K.; Jha, S.; Klimentov, A.; Maeno, T.; Mashinistov, R.; Nilsson, P.; Novikov, A.; Oleynik, D.; Panitkin, S.; Poyda, A.; Read, K. F.; Ryabinkin, E.; Teslyuk, A.; Velikhov, V.; Wells, J. C.; Wenaus, T.

    2016-09-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 140 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3+ petaFLOPS, the next LHC data-taking runs will require more resources than Grid computing can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at the integration of the PanDA WMS with supercomputers in the United States, Europe and Russia (in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF), the supercomputer at the National Research Center "Kurchatov Institute", IT4 in Ostrava, and others). The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on Titan's multi-core worker nodes. This implementation was tested with a variety of Monte-Carlo workloads.
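    At its core, the light-weight MPI wrapper described above amounts to round-robin partitioning of a task list by (rank, size), so that many single-threaded payloads run side by side on a node's cores. A minimal sketch of that pattern, with the MPI ranks emulated sequentially so the example stays self-contained; the task names are hypothetical, and this is not the actual PanDA pilot code:

```python
def claim_tasks(rank, size, tasks):
    """Round-robin partitioning: a thin wrapper run on every MPI rank
    uses its (rank, size) pair to decide which single-threaded payloads
    it owns, so the ranks cover the task list with no coordination."""
    return [t for i, t in enumerate(tasks) if i % size == rank]

# Emulate a 4-rank launch sequentially; under a real MPI launcher each
# rank would execute its chunk concurrently on one core of a worker node.
tasks = [f"event-batch-{i}" for i in range(10)]  # hypothetical MC batches
chunks = [claim_tasks(r, 4, tasks) for r in range(4)]
assert sorted(sum(chunks, [])) == sorted(tasks)  # full coverage, no overlap
```

    The appeal of this scheme on leadership-class machines is that it needs no inter-rank communication at all: each wrapper derives its share of the work purely from its own rank.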

  17. Automatic discovery of the communication network topology for building a supercomputer model

    Science.gov (United States)

    Sobolev, Sergey; Stefanov, Konstantin; Voevodin, Vadim

    2016-10-01

    The Research Computing Center of Lomonosov Moscow State University is developing the Octotron software suite for automatic monitoring and mitigation of emergency situations in supercomputers so as to maximize hardware reliability. The suite is based on a software model of the supercomputer. The model uses a graph to describe the computing system components and their interconnections. One of the most complex components of a supercomputer that needs to be included in the model is its communication network. This work describes the proposed approach for automatically discovering the Ethernet communication network topology in a supercomputer and its description in terms of the Octotron model. This suite automatically detects computing nodes and switches, collects information about them and identifies their interconnections. The application of this approach is demonstrated on the "Lomonosov" and "Lomonosov-2" supercomputers.
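    The graph description of components and their interconnections can be sketched as follows. The neighbor tables stand in for what switch discovery (e.g. LLDP or MAC-table queries) would report; all names are illustrative and do not reflect the actual Octotron API:

```python
def build_topology(neighbor_tables):
    """Turn per-device neighbor reports into an undirected graph:
    vertices are compute nodes and switches, edges are discovered links."""
    graph = {}
    for device, neighbors in neighbor_tables.items():
        graph.setdefault(device, set())
        for nbr in neighbors:
            graph[device].add(nbr)
            graph.setdefault(nbr, set()).add(device)  # links are symmetric
    return graph

# Hypothetical discovery output: two switches, three compute nodes.
tables = {
    "switch-1": ["node-1", "node-2", "switch-2"],
    "switch-2": ["node-3"],
}
topo = build_topology(tables)
assert topo["node-1"] == {"switch-1"}
assert topo["switch-2"] == {"switch-1", "node-3"}
```

    Storing links symmetrically means a device that never reports (a silent node) still appears in the graph as soon as any neighbor sees it, which is useful when monitoring for failed components.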

  18. Comprehensive efficiency analysis of supercomputer resource usage based on system monitoring data

    Science.gov (United States)

    Mamaeva, A. A.; Shaykhislamov, D. I.; Voevodin, Vad V.; Zhumatiy, S. A.

    2018-03-01

    One of the main problems of modern supercomputers is the low efficiency of their usage, which leads to significant idle time of computational resources and, in turn, to a decrease in the speed of scientific research. This paper presents three approaches to studying the efficiency of supercomputer resource usage based on monitoring data analysis. The first approach analyzes computing resource utilization statistics, which makes it possible to identify typical classes of programs, explore the structure of the supercomputer job flow, and track overall trends in supercomputer behavior. The second approach is aimed specifically at analyzing off-the-shelf software packages and libraries installed on the supercomputer, since the efficiency of their usage is becoming an increasingly important factor for the efficient functioning of the entire supercomputer. Within the third approach, abnormal jobs – jobs with abnormally inefficient behavior that differs significantly from the standard behavior of the overall supercomputer job flow – are detected. For each approach, the results obtained in practice at the Supercomputer Center of Moscow State University are demonstrated.

  19. Computational Dimensionalities of Global Supercomputing

    Directory of Open Access Journals (Sweden)

    Richard S. Segall

    2013-12-01

    This Invited Paper pertains to the subject of my Plenary Keynote Speech at the 17th World Multi-Conference on Systemics, Cybernetics and Informatics (WMSCI 2013), held in Orlando, Florida on July 9-12, 2013. The title of my Plenary Keynote Speech was: "Dimensionalities of Computation: from Global Supercomputing to Data, Text and Web Mining", but this Invited Paper will focus only on the "Computational Dimensionalities of Global Supercomputing" and is based upon a summary of the contents of several individual articles that have been previously written with myself as lead author and published in [75], [76], [77], [78], [79], [80] and [11]. The topics of the Plenary Speech included Overview of Current Research in Global Supercomputing [75], Open-Source Software Tools for Data Mining Analysis of Genomic and Spatial Images using High Performance Computing [76], Data Mining Supercomputing with SAS™ JMP® Genomics ([77], [79], [80]), and Visualization by Supercomputing Data Mining [81]. ______________________ [11.] Committee on the Future of Supercomputing, National Research Council (2003), The Future of Supercomputing: An Interim Report, ISBN-13: 978-0-309-09016-2, http://www.nap.edu/catalog/10784.html [75.] Segall, Richard S.; Zhang, Qingyu and Cook, Jeffrey S. (2013), "Overview of Current Research in Global Supercomputing", Proceedings of Forty-Fourth Meeting of Southwest Decision Sciences Institute (SWDSI), Albuquerque, NM, March 12-16, 2013. [76.] Segall, Richard S. and Zhang, Qingyu (2010), "Open-Source Software Tools for Data Mining Analysis of Genomic and Spatial Images using High Performance Computing", Proceedings of 5th INFORMS Workshop on Data Mining and Health Informatics, Austin, TX, November 6, 2010. [77.] Segall, Richard S., Zhang, Qingyu and Pierce, Ryan M. (2010), "Data Mining Supercomputing with SAS™ JMP® Genomics: Research-in-Progress", Proceedings of 2010 Conference on Applied Research in Information Technology, sponsored by

  20. Enabling department-scale supercomputing

    Energy Technology Data Exchange (ETDEWEB)

    Greenberg, D.S.; Hart, W.E.; Phillips, C.A.

    1997-11-01

    The Department of Energy (DOE) national laboratories have one of the longest and most consistent histories of supercomputer use. The authors summarize the architecture of DOE's new supercomputers that are being built for the Accelerated Strategic Computing Initiative (ASCI). The authors then argue that in the near future scaled-down versions of these supercomputers with petaflop-per-weekend capabilities could become widely available to hundreds of research and engineering departments. The availability of such computational resources will allow simulation of physical phenomena to become a full-fledged third branch of scientific exploration, along with theory and experimentation. They describe the ASCI and other supercomputer applications at Sandia National Laboratories, and discuss which lessons learned from Sandia's long history of supercomputing can be applied in this new setting.

  1. TOP500 Supercomputers for November 2003

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2003-11-16

    22nd Edition of TOP500 List of World's Fastest Supercomputers Released MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a much-anticipated event in the world of high-performance computing, the 22nd edition of the TOP500 list of the world's fastest supercomputers was released today (November 16, 2003). The Earth Simulator supercomputer retains the number one position with its Linpack benchmark performance of 35.86 Tflop/s ("teraflops", or trillions of calculations per second). It was built by NEC and installed last year at the Earth Simulator Center in Yokohama, Japan.

  2. Proceedings of the first energy research power supercomputer users symposium

    International Nuclear Information System (INIS)

    1991-01-01

    The Energy Research Power Supercomputer Users Symposium was arranged to showcase the richness of science that has been pursued and accomplished in this program through the use of supercomputers, and now high performance parallel computers, over the last year; this report is the collection of the presentations given at the Symposium. "Power users" were invited by the ER Supercomputer Access Committee to show that the use of these computational tools and the associated data communications network, ESNet, goes beyond merely speeding up computations. Today the work often directly contributes to the advancement of conceptual developments in their fields, and the computational and network resources form the very infrastructure of today's science. The Symposium also provided an opportunity, which is rare in this day of network access to computing resources, for the invited users to compare and discuss their techniques and approaches with those used in other ER disciplines. The significance of new parallel architectures was highlighted by the interesting evening talk given by Dr. Stephen Orszag of Princeton University.

  3. AHPCRC (Army High Performance Computing Research Center) Bulletin. Volume 1, Issue 2

    Science.gov (United States)

    2011-01-01

    area and the researchers working on these projects. Also inside: news from the AHPCRC consortium partners at Morgan State University and the NASA ... Computing Research Center is provided by the supercomputing and research facilities at Stanford University and at the NASA Ames Research Center at ... atomic and molecular level, he said. He noted that "every general would like to have" a Star Trek-like holodeck, where holographic avatars could

  4. Supercomputing and related national projects in Japan

    International Nuclear Information System (INIS)

    Miura, Kenichi

    1985-01-01

    Japanese supercomputer development activities in the industry and research projects are outlined. Architecture, technology, software, and applications of Fujitsu's Vector Processor Systems are described as an example of Japanese supercomputers. Applications of supercomputers to high energy physics are also discussed. (orig.)

  5. Assessment techniques for a learning-centered curriculum: evaluation design for adventures in supercomputing

    Energy Technology Data Exchange (ETDEWEB)

    Helland, B. [Ames Lab., IA (United States); Summers, B.G. [Oak Ridge National Lab., TN (United States)

    1996-09-01

    As the classroom paradigm shifts from being teacher-centered to being learner-centered, student assessments are evolving from typical paper and pencil testing to other methods of evaluation. Students should be probed for understanding, reasoning, and critical thinking abilities rather than their ability to return memorized facts. The assessment of the Department of Energy's pilot program, Adventures in Supercomputing (AiS), offers one example of assessment techniques developed for learner-centered curricula. This assessment has employed a variety of methods to collect student data. Methods of assessment used were traditional testing, performance testing, interviews, short questionnaires via email, and student presentations of projects. The data obtained from these sources have been analyzed by a professional assessment team at the Center for Children and Technology. The results have been used to improve the AiS curriculum and establish the quality of the overall AiS program. This paper will discuss the various methods of assessment used and the results.

  6. Integration Of PanDA Workload Management System With Supercomputers for ATLAS and Data Intensive Science

    Energy Technology Data Exchange (ETDEWEB)

    De, K [University of Texas at Arlington; Jha, S [Rutgers University; Klimentov, A [Brookhaven National Laboratory (BNL); Maeno, T [Brookhaven National Laboratory (BNL); Nilsson, P [Brookhaven National Laboratory (BNL); Oleynik, D [University of Texas at Arlington; Panitkin, S [Brookhaven National Laboratory (BNL); Wells, Jack C [ORNL; Wenaus, T [Brookhaven National Laboratory (BNL)

    2016-01-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3 petaFLOPS, LHC data taking runs require more resources than Grid computing can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at the integration of the PanDA WMS with supercomputers in the United States, Europe and Russia (in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF), the MIRA supercomputer at the Argonne Leadership Computing Facility (ALCF), the supercomputer at the National Research Center "Kurchatov Institute", IT4 in Ostrava, and others). The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on the LCFs' multi-core worker nodes. This implementation was tested with a variety of Monte-Carlo workloads.

  7. TOP500 Supercomputers for June 2003

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2003-06-23

    21st Edition of TOP500 List of World's Fastest Supercomputers Released MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a much-anticipated event in the world of high-performance computing, the 21st edition of the TOP500 list of the world's fastest supercomputers was released today (June 23, 2003). The Earth Simulator supercomputer built by NEC and installed last year at the Earth Simulator Center in Yokohama, Japan, with its Linpack benchmark performance of 35.86 Tflop/s (teraflops, or trillions of calculations per second), retains the number one position. The number 2 position is held by the re-measured ASCI Q system at Los Alamos National Laboratory. With 13.88 Tflop/s, it is the second system ever to exceed the 10 Tflop/s mark. ASCI Q was built by Hewlett-Packard and is based on the AlphaServer SC computer system.

  8. TOP500 Supercomputers for June 2002

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2002-06-20

    19th Edition of TOP500 List of World's Fastest Supercomputers Released MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a much-anticipated event in the world of high-performance computing, the 19th edition of the TOP500 list of the world's fastest supercomputers was released today (June 20, 2002). The recently installed Earth Simulator supercomputer at the Earth Simulator Center in Yokohama, Japan, is as expected the clear new number 1. Its performance of 35.86 Tflop/s (trillions of calculations per second) running the Linpack benchmark is almost five times higher than the performance of the now No. 2 IBM ASCI White system at Lawrence Livermore National Laboratory (7.2 Tflop/s). This powerful leapfrogging to the top by a system so much faster than the previous top system is unparalleled in the history of the TOP500.
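    The "almost five times" claim follows directly from the two Linpack figures quoted in the record, as a quick check shows:

```python
# Linpack Rmax figures quoted in the TOP500 June 2002 announcement.
earth_simulator = 35.86  # Tflop/s
asci_white = 7.2         # Tflop/s

ratio = earth_simulator / asci_white
assert 4.9 < ratio < 5.0  # "almost five times higher"
```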

  9. Desktop supercomputer: what can it do?

    Science.gov (United States)

    Bogdanov, A.; Degtyarev, A.; Korkhov, V.

    2017-12-01

The paper addresses the issues of solving complex problems that require supercomputers or multiprocessor clusters, which are now available to most researchers. Efficient distribution of high-performance computing resources according to actual application needs has been a major research topic since high-performance computing (HPC) technologies became widely available. At the same time, comfortable and transparent access to these resources has been a key user requirement. In this paper we discuss approaches to building a virtual private supercomputer available at the user's desktop: a virtual computing environment tailored specifically for a target user with a particular target application. We describe and evaluate possibilities to create the virtual supercomputer based on light-weight virtualization technologies, and analyze the efficiency of our approach compared to traditional methods of HPC resource management.

  10. Activity report of Computing Research Center

    Energy Technology Data Exchange (ETDEWEB)

    1997-07-01

In April 1997, the National Laboratory for High Energy Physics (KEK), the Institute for Nuclear Study, University of Tokyo (INS), and the Meson Science Laboratory, Faculty of Science, University of Tokyo, were reorganized and merged into the High Energy Accelerator Research Organization, with the aim of further developing the broad field of accelerator science based on high-energy accelerators. Within the new organization, the Applied Research Laboratory comprises four centers that support research activities common to the whole organization and carry out related research and development (R and D), integrating the four previous centers and their related sections in Tanashi. This support covers not only general assistance but also the preparation and R and D of the systems required to promote the research and its future plans. Computer technology is essential to this research and can be shared across the organization's many research programs. In response, the new Computing Research Center is expected to work in cooperation with researchers on everything from R and D on data analysis of various experiments to computational physics driven by powerful computing capacity such as supercomputers. This report describes the work and present state of the Data Processing Center of KEK in the first chapter and of the computer room of INS in the second chapter, as well as future problems for the Computing Research Center. (G.K.)

  11. Desktop supercomputer: what can it do?

    International Nuclear Information System (INIS)

    Bogdanov, A.; Degtyarev, A.; Korkhov, V.

    2017-01-01

The paper addresses the issues of solving complex problems that require supercomputers or multiprocessor clusters, which are now available to most researchers. Efficient distribution of high-performance computing resources according to actual application needs has been a major research topic since high-performance computing (HPC) technologies became widely available. At the same time, comfortable and transparent access to these resources has been a key user requirement. In this paper we discuss approaches to building a virtual private supercomputer available at the user's desktop: a virtual computing environment tailored specifically for a target user with a particular target application. We describe and evaluate possibilities to create the virtual supercomputer based on light-weight virtualization technologies, and analyze the efficiency of our approach compared to traditional methods of HPC resource management.

  12. Applications of supercomputing and the utility industry: Calculation of power transfer capabilities

    International Nuclear Information System (INIS)

    Jensen, D.D.; Behling, S.R.; Betancourt, R.

    1990-01-01

Numerical models and iterative simulation using supercomputers can furnish cost-effective answers to utility industry problems that are all but intractable using conventional computing equipment. An example of the use of supercomputers by the utility industry is the determination of power transfer capability limits for power transmission systems. This work has the goal of markedly reducing the run time of the transient stability codes used to determine power distributions following major system disturbances. To date, run times of several hours on a conventional computer have been reduced to several minutes on state-of-the-art supercomputers, with further improvements anticipated to reduce run times to less than a minute. In spite of the potential advantages of supercomputers, few utilities have sufficient need for a dedicated in-house supercomputing capability. This problem is resolved by a supercomputer center serving a geographically distributed user base coupled via high-speed communication networks.

  13. Supercomputing - Use Cases, Advances, The Future (1/2)

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    Supercomputing has become a staple of science and the poster child for aggressive developments in silicon technology, energy efficiency and programming. In this series we examine the key components of supercomputing setups and the various advances – recent and past – that made headlines and delivered bigger and bigger machines. We also take a closer look at the future prospects of supercomputing, and the extent of its overlap with high throughput computing, in the context of main use cases ranging from oil exploration to market simulation. On the first day, we will focus on the history and theory of supercomputing, the top500 list and the hardware that makes supercomputers tick. Lecturer's short bio: Andrzej Nowak has 10 years of experience in computing technologies, primarily from CERN openlab and Intel. At CERN, he managed a research lab collaborating with Intel and was part of the openlab Chief Technology Office. Andrzej also worked closely and initiated projects with the private sector (e.g. HP an...

  14. Supercomputing - Use Cases, Advances, The Future (2/2)

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    Supercomputing has become a staple of science and the poster child for aggressive developments in silicon technology, energy efficiency and programming. In this series we examine the key components of supercomputing setups and the various advances – recent and past – that made headlines and delivered bigger and bigger machines. We also take a closer look at the future prospects of supercomputing, and the extent of its overlap with high throughput computing, in the context of main use cases ranging from oil exploration to market simulation. On the second day, we will focus on software and software paradigms driving supercomputers, workloads that need supercomputing treatment, advances in technology and possible future developments. Lecturer's short bio: Andrzej Nowak has 10 years of experience in computing technologies, primarily from CERN openlab and Intel. At CERN, he managed a research lab collaborating with Intel and was part of the openlab Chief Technology Office. Andrzej also worked closely and i...

  15. Supercomputers to transform Science

    CERN Multimedia

    2006-01-01

"New insights into the structure of space and time, climate modeling, and the design of novel drugs, are but a few of the many research areas that will be transformed by the installation of three supercomputers at the University of Bristol." (1/2 page)

  16. Integration Of PanDA Workload Management System With Supercomputers for ATLAS and Data Intensive Science

    Science.gov (United States)

    Klimentov, A.; De, K.; Jha, S.; Maeno, T.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Wells, J.; Wenaus, T.

    2016-10-01

The LHC, operating at CERN, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment relies on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System to manage the workflow for all data processing on over 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3 petaFLOPS, LHC data-taking runs require more resources than the grid can possibly provide. To alleviate these challenges, the LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of the PanDA WMS with supercomputers in the United States, in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility. The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and for local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on the LCFs' multi-core worker nodes. This implementation was tested with a variety of Monte-Carlo workloads on several supercomputing platforms for the ALICE and ATLAS experiments and has been in full production for ATLAS since September 2015. We will present our current accomplishments with running PanDA at supercomputers and demonstrate our ability to use PanDA as a portal independent of the
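The "light-weight MPI wrapper" idea above, filling a multi-core batch allocation with many single-threaded payloads, can be sketched as a rank-based work assignment. The round-robin policy and all names below are illustrative, not the actual PanDA pilot code:

```python
# Sketch of a light-weight MPI-style wrapper: each rank claims a disjoint
# subset of single-threaded payload jobs by its rank index. In a real run,
# rank/size would come from the MPI runtime and each job would be launched
# as a subprocess pinned to one core; here we only compute the assignment.

def payloads_for_rank(rank: int, size: int, jobs: list) -> list:
    """Round-robin assignment: rank r gets jobs r, r+size, r+2*size, ..."""
    return jobs[rank::size]

# Simulate 4 "ranks" sharing 10 Monte-Carlo jobs on one worker node:
jobs = [f"mc_{i:03d}" for i in range(10)]
for rank in range(4):
    print(rank, payloads_for_rank(rank, 4, jobs))
```

Every job lands on exactly one rank, so N independent event-generation tasks run concurrently without any communication between them.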

  17. Integration Of PanDA Workload Management System With Supercomputers for ATLAS and Data Intensive Science

    International Nuclear Information System (INIS)

    Klimentov, A; Maeno, T; Nilsson, P; Panitkin, S; Wenaus, T; De, K; Oleynik, D; Jha, S; Wells, J

    2016-01-01

The LHC, operating at CERN, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment relies on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System to manage the workflow for all data processing on over 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3 petaFLOPS, LHC data-taking runs require more resources than the grid can possibly provide. To alleviate these challenges, the LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of the PanDA WMS with supercomputers in the United States, in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility. The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and for local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on the LCFs' multi-core worker nodes. This implementation was tested with a variety of Monte-Carlo workloads on several supercomputing platforms for the ALICE and ATLAS experiments and has been in full production for ATLAS since September 2015. We will present our current accomplishments with running PanDA at supercomputers and demonstrate our ability to use PanDA as a portal independent of the

  18. Adaptability of supercomputers to nuclear computations

    International Nuclear Information System (INIS)

    Asai, Kiyoshi; Ishiguro, Misako; Matsuura, Toshihiko.

    1983-01-01

Recently, in the field of scientific and technical calculation, the usefulness of supercomputers, represented by the CRAY-1, has been recognized, and they are utilized in various countries. The rapid computation of supercomputers rests on their vector-computation capability. The authors investigated the adaptability to vector computation of about 40 typical atomic energy codes over the past six years. Based on the results of this investigation, this paper explains how well atomic energy codes can exploit the vector-computation capability of supercomputers, the problems in utilizing it, and the future prospects. The adaptability of an individual calculation code to vector computation depends largely on the algorithm and program structure used in the code. The speedup achieved by pipelined vector processing, the investigations at the Japan Atomic Energy Research Institute and their results, and examples of vectorizing codes for atomic energy, environmental safety, and nuclear fusion are reported. The speedup factors for the 40 examples ranged from 1.5 to 9.0, so the adaptability of supercomputers to atomic energy codes can be said to be fairly good. (Kako, I.)
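As a modern analogy to the pipelined vector processing discussed above (NumPy standing in for 1980s vector hardware), the same axpy-style update can be written as an element-by-element loop or as one whole-array operation; only the latter form exposes the work to the vector units, which is exactly the algorithm-structure dependence the abstract describes:

```python
import numpy as np

# The same a*x + y update in two forms: a scalar loop (one element per step,
# nothing to pipeline) vs. a single whole-array vector operation.
n = 100_000
a = 2.5
x = np.arange(n, dtype=float)
y = np.ones(n)

def axpy_loop(a, x, y):
    out = np.empty_like(x)
    for i in range(len(x)):      # element-at-a-time: defeats vectorization
        out[i] = a * x[i] + y[i]
    return out

def axpy_vector(a, x, y):
    return a * x + y             # whole-array operation: vectorizable

assert np.allclose(axpy_loop(a, x, y), axpy_vector(a, x, y))
```

Both forms compute identical results; the point is that the vector form lets the hardware (or library) process many elements per cycle.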

  19. Tryton Supercomputer Capabilities for Analysis of Massive Data Streams

    Directory of Open Access Journals (Sweden)

    Krawczyk Henryk

    2015-09-01

The recently deployed supercomputer Tryton, located in the Academic Computer Center of Gdansk University of Technology, provides great means for massively parallel processing. Moreover, the status of the Center as one of the main network nodes in the PIONIER network enables the fast and reliable transfer of data produced by miscellaneous devices scattered across the whole country. Typical examples of such data are streams containing radio-telescope and satellite observations. Their analysis, especially under real-time constraints, can be challenging and requires the use of dedicated software components. We propose a solution for such parallel analysis using the supercomputer, supervised by the KASKADA platform, which in conjunction with immersive 3D visualization techniques can be used to solve problems such as pulsar detection and chronometry, or oil-spill simulation on the sea surface.

  20. Japanese supercomputer technology

    International Nuclear Information System (INIS)

    Buzbee, B.L.; Ewald, R.H.; Worlton, W.J.

    1982-01-01

In February 1982, computer scientists from the Los Alamos National Laboratory and Lawrence Livermore National Laboratory visited several Japanese computer manufacturers. The purpose of these visits was to assess the state of the art of Japanese supercomputer technology and to advise Japanese computer vendors of the needs of the US Department of Energy (DOE) for more powerful supercomputers. The Japanese foresee a domestic need for large-scale computing capabilities for nuclear fusion, image analysis for the Earth Resources Satellite, meteorological forecasting, electrical power system analysis (power flow, stability, optimization), structural and thermal analysis of satellites, and very-large-scale integrated circuit design and simulation. To meet this need, Japan has launched an ambitious program to advance supercomputer technology. This program is described.

  1. National Energy Research Scientific Computing Center (NERSC): Advancing the frontiers of computational science and technology

    Energy Technology Data Exchange (ETDEWEB)

    Hules, J. [ed.

    1996-11-01

The National Energy Research Scientific Computing Center (NERSC) provides researchers with high-performance computing tools to tackle science's biggest and most challenging problems. Founded in 1974 by DOE/ER, the Controlled Thermonuclear Research Computer Center was the first unclassified supercomputer center and was the model for those that followed. Over the years the center's name was changed to the National Magnetic Fusion Energy Computer Center and then to NERSC; it was relocated to LBNL. NERSC, one of the largest unclassified scientific computing resources in the world, is the principal provider of general-purpose computing services to DOE/ER programs: Magnetic Fusion Energy, High Energy and Nuclear Physics, Basic Energy Sciences, Health and Environmental Research, and the Office of Computational and Technology Research. NERSC users are a diverse community located throughout the US and in several foreign countries. This brochure describes: the NERSC advantage, its computational resources and services, future technologies, scientific resources, and computational science of scale (interdisciplinary research over a decade or longer; examples: combustion in engines, waste management chemistry, global climate change modeling).

  2. Status of supercomputers in the US

    International Nuclear Information System (INIS)

    Fernbach, S.

    1985-01-01

Current supercomputers, that is, the Class VI machines which first became available in 1976, are being delivered in greater quantity than ever before. In addition, manufacturers are busily working on Class VII machines to be ready for delivery in CY 1987. Mainframes are being modified or designed to take on some features of the supercomputers, and new companies are springing up everywhere, intending either to compete directly in the supercomputer arena or to provide entry-level systems from which to graduate to supercomputers. Even well-founded organizations like IBM and CDC are adding machines with vector instructions to their repertoires. Japanese-manufactured supercomputers are also being introduced into the US. Will these begin to compete with those of US manufacture? Are they truly competitive? It turns out that, from both the hardware and software points of view, they may be superior. We may be facing the same problems in supercomputers that we faced in video systems.

  3. Flux-Level Transit Injection Experiments with NASA Pleiades Supercomputer

    Science.gov (United States)

    Li, Jie; Burke, Christopher J.; Catanzarite, Joseph; Seader, Shawn; Haas, Michael R.; Batalha, Natalie; Henze, Christopher; Christiansen, Jessie; Kepler Project, NASA Advanced Supercomputing Division

    2016-06-01

    Flux-Level Transit Injection (FLTI) experiments are executed with NASA's Pleiades supercomputer for the Kepler Mission. The latest release (9.3, January 2016) of the Kepler Science Operations Center Pipeline is used in the FLTI experiments. Their purpose is to validate the Analytic Completeness Model (ACM), which can be computed for all Kepler target stars, thereby enabling exoplanet occurrence rate studies. Pleiades, a facility of NASA's Advanced Supercomputing Division, is one of the world's most powerful supercomputers and represents NASA's state-of-the-art technology. We discuss the details of implementing the FLTI experiments on the Pleiades supercomputer. For example, taking into account that ~16 injections are generated by one core of the Pleiades processors in an hour, the “shallow” FLTI experiment, in which ~2000 injections are required per target star, can be done for 16% of all Kepler target stars in about 200 hours. Stripping down the transit search to bare bones, i.e. only searching adjacent high/low periods at high/low pulse durations, makes the computationally intensive FLTI experiments affordable. The design of the FLTI experiments and the analysis of the resulting data are presented in “Validating an Analytic Completeness Model for Kepler Target Stars Based on Flux-level Transit Injection Experiments” by Catanzarite et al. (#2494058).Kepler was selected as the 10th mission of the Discovery Program. Funding for the Kepler Mission has been provided by the NASA Science Mission Directorate.
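The throughput figures quoted above can be checked with simple core-hour arithmetic; the total Kepler target count (~200,000) used below is an assumption added for illustration, not a number from the abstract:

```python
# Back-of-the-envelope budget for the "shallow" FLTI experiment, using the
# quoted rates: ~16 injections per core per hour, ~2000 injections per star.
inj_per_core_hour = 16
inj_per_star = 2000
core_hours_per_star = inj_per_star / inj_per_core_hour
print(core_hours_per_star)        # 125 core-hours per target star

# Covering 16% of roughly 200,000 Kepler targets (assumed count) in ~200
# wall-clock hours then implies on the order of 20,000 concurrent cores:
stars = 0.16 * 200_000
cores = stars * core_hours_per_star / 200
print(int(cores))                 # 20000
```

This is the scale at which stripping the transit search down to "bare bones" becomes the difference between affordable and infeasible.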

  4. Convex unwraps its first grown-up supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Manuel, T.

    1988-03-03

Convex Computer Corp.'s new supercomputer family is even more of an industry blockbuster than its first system. At a tenfold jump in performance, it is far from just an incremental upgrade over its first minisupercomputer, the C-1. The heart of the new family, the new C-2 processor, churning at 50 million floating-point operations/s, spawns a group of systems whose performance could pass for that of some fancy supercomputers, namely those of the Cray Research Inc. family. When added to the C-1, Convex's five new supercomputers create the C series, a six-member product group offering a performance range from 20 to 200 Mflops. They mark an important transition for Convex from a one-product high-tech startup to a multinational company with a wide-ranging product line. It is a tough transition, but the Richardson, Texas, company seems to be making it. The extended product line propels Convex into the upper end of the minisupercomputer class and nudges it into the low end of the big supercomputers. It positions Convex in an uncrowded segment of the market, in the $500,000 to $1 million range, offering 50 to 200 Mflops of performance. The company is making this move because the minisuper area, which it pioneered, quickly became crowded with new vendors, causing prices and gross margins to drop drastically.

  5. TOP500 Supercomputers for June 2004

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2004-06-23

23rd Edition of TOP500 List of World's Fastest Supercomputers Released: Japan's Earth Simulator Enters Third Year in Top Position MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a closely watched event in the world of high-performance computing, the 23rd edition of the TOP500 list of the world's fastest supercomputers was released today (June 23, 2004) at the International Supercomputer Conference in Heidelberg, Germany.

  6. Building the Teraflops/Petabytes Production Computing Center

    International Nuclear Information System (INIS)

    Kramer, William T.C.; Lucas, Don; Simon, Horst D.

    1999-01-01

In just one decade, the 1990s, supercomputer centers have undergone two fundamental transitions which require rethinking their operation and their role in high performance computing. The first transition, in the early to mid-1990s, resulted from a technology change in high performance computing architecture. Highly parallel distributed-memory machines built from commodity parts increased the operational complexity of the supercomputer center and required the introduction of intellectual services as equally important components of the center. The second transition is happening in the late 1990s as centers are introducing loosely coupled clusters of SMPs as their premier high performance computing platforms, while dealing with an ever-increasing volume of data. In addition, increasing network bandwidth enables new modes of use of a supercomputer center, in particular computational grid applications. In this paper we describe what steps NERSC is taking to address these issues and stay at the leading edge of supercomputing centers.

  7. PROCEEDINGS OF RIKEN BNL RESEARCH CENTER WORKSHOP: HIGH PERFORMANCE COMPUTING WITH QCDOC AND BLUEGENE.

    Energy Technology Data Exchange (ETDEWEB)

    Christ, N.; Davenport, J.; Deng, Y.; Gara, A.; Glimm, J.; Mawhinney, R.; McFadden, E.; Peskin, A.; Pulleyblank, W.

    2003-03-11

    Staff of Brookhaven National Laboratory, Columbia University, IBM and the RIKEN BNL Research Center organized a one-day workshop held on February 28, 2003 at Brookhaven to promote the following goals: (1) To explore areas other than QCD applications where the QCDOC and BlueGene/L machines can be applied to good advantage, (2) To identify areas where collaboration among the sponsoring institutions can be fruitful, and (3) To expose scientists to the emerging software architecture. This workshop grew out of an informal visit last fall by BNL staff to the IBM Thomas J. Watson Research Center that resulted in a continuing dialog among participants on issues common to these two related supercomputers. The workshop was divided into three sessions, addressing the hardware and software status of each system, prospective applications, and future directions.

  8. A workbench for tera-flop supercomputing

    International Nuclear Information System (INIS)

    Resch, M.M.; Kuester, U.; Mueller, M.S.; Lang, U.

    2003-01-01

Supercomputers currently reach a peak performance in the range of TFlop/s. With but one exception, the Japanese Earth Simulator, none of these systems has so far been able to show a level of sustained performance, for a variety of applications, that comes close to the peak performance. Sustained TFlop/s are therefore rarely seen. The reasons are manifold and well known: bandwidth and latency, both for main memory and for the internal network, are the key internal technical problems. Cache hierarchies with large caches can bring relief but are no remedy for the problem. However, it is not only technical problems that inhibit the full exploitation by scientists of the potential of modern supercomputers; more and more, organizational issues come to the forefront. This paper shows the approach of the High Performance Computing Center Stuttgart (HLRS) to delivering a sustained performance of TFlop/s for a wide range of applications from a large group of users spread over Germany. The core of the concept is the role of the data. Around this we design a simulation workbench that hides the complexity of interacting computers, networks and file systems from the user. (authors)
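The bandwidth argument above can be made concrete with a roofline-style estimate. The numbers below are illustrative assumptions, not HLRS measurements:

```python
# Roofline-style bound: sustained performance is capped by
# min(peak, arithmetic_intensity * memory_bandwidth).
peak_flops = 1.0e12      # assumed 1 TFlop/s peak
mem_bw = 50.0e9          # assumed 50 GB/s sustained memory bandwidth
ai = 0.125               # flops per byte for a streaming, triad-like kernel

sustained = min(peak_flops, ai * mem_bw)
print(sustained)                  # 6.25 GFlop/s
print(sustained / peak_flops)     # 0.00625: under 1% of peak
```

A memory-bound kernel like this is limited entirely by bandwidth, which is why sustained TFlop/s are so much rarer than peak TFlop/s.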

  9. TOP500 Supercomputers for June 2005

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2005-06-22

25th Edition of TOP500 List of World's Fastest Supercomputers Released: DOE/LLNL BlueGene/L and IBM Gain Top Positions MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a closely watched event in the world of high-performance computing, the 25th edition of the TOP500 list of the world's fastest supercomputers was released today (June 22, 2005) at the 20th International Supercomputing Conference (ISC2005) in Heidelberg, Germany.

  10. QCD on the BlueGene/L Supercomputer

    International Nuclear Information System (INIS)

    Bhanot, G.; Chen, D.; Gara, A.; Sexton, J.; Vranas, P.

    2005-01-01

In June 2004 QCD was simulated for the first time at a sustained speed exceeding 1 TeraFlops, on the BlueGene/L supercomputer at the IBM T.J. Watson Research Lab. The implementation and performance of QCD on the BlueGene/L are presented.

  11. QCD on the BlueGene/L Supercomputer

    Science.gov (United States)

    Bhanot, G.; Chen, D.; Gara, A.; Sexton, J.; Vranas, P.

    2005-03-01

In June 2004 QCD was simulated for the first time at a sustained speed exceeding 1 TeraFlops, on the BlueGene/L supercomputer at the IBM T.J. Watson Research Lab. The implementation and performance of QCD on the BlueGene/L are presented.

  12. Visualization environment of the large-scale data of JAEA's supercomputer system

    Energy Technology Data Exchange (ETDEWEB)

    Sakamoto, Kensaku [Japan Atomic Energy Agency, Center for Computational Science and e-Systems, Tokai, Ibaraki (Japan); Hoshi, Yoshiyuki [Research Organization for Information Science and Technology (RIST), Tokai, Ibaraki (Japan)

    2013-11-15

In research and development across the various fields of nuclear energy, visualization of calculated data is especially useful for understanding simulation results in an intuitive way. Many researchers who run simulations on the supercomputer at the Japan Atomic Energy Agency (JAEA) routinely transfer calculated data files from the supercomputer to their local PCs for visualization. In recent years, as calculated data have grown with improvements in supercomputer performance, both shorter visualization processing times and efficient use of the JAEA network are required. As a solution, we introduced a remote visualization system that can utilize the parallel processors of the supercomputer and reduce network usage by transferring data from an intermediate stage of the visualization process. This paper reports a study on the performance of image processing with the remote visualization system. The visualization processing time is measured and the influence of network speed is evaluated by varying the drawing mode, the size of the visualization data, and the number of processors. Based on this study, a guideline for using the remote visualization system is provided to show how the system can be used effectively. An upgrade policy for the next system is also shown. (author)
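The motivation for shipping intermediate visualization data instead of raw results can be illustrated with transfer-time arithmetic; all sizes and the link speed below are assumptions for illustration, not JAEA figures:

```python
# Compare moving a raw result file vs. moving rendered frames over the same link.
raw_bytes = 500e9            # assumed 500 GB simulation output
frame_bytes = 2e6 * 1000     # assumed 1000 rendered frames at ~2 MB each
link_bps = 100e6             # assumed 100 Mbit/s wide-area link

link_bytes_per_s = link_bps / 8
raw_hours = raw_bytes / link_bytes_per_s / 3600
frame_minutes = frame_bytes / link_bytes_per_s / 60
print(round(raw_hours, 1), "hours for raw data")        # ~11.1 hours
print(round(frame_minutes, 1), "minutes for frames")    # ~2.7 minutes
```

Rendering near the data and transferring only the (much smaller) visualization products turns an hours-long transfer into minutes, which is the system's core design choice.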

  13. The TESS Science Processing Operations Center

    Science.gov (United States)

    Jenkins, Jon M.; Twicken, Joseph D.; McCauliff, Sean; Campbell, Jennifer; Sanderfer, Dwight; Lung, David; Mansouri-Samani, Masoud; Girouard, Forrest; Tenenbaum, Peter; Klaus, Todd

    2016-01-01

The Transiting Exoplanet Survey Satellite (TESS) will conduct a search for Earth's closest cousins starting in early 2018 and is expected to discover approximately 1,000 small planets with Rp less than 4 Earth radii, and to measure the masses of at least 50 of these small worlds. The Science Processing Operations Center (SPOC) is being developed at NASA Ames Research Center based on the Kepler science pipeline and will generate calibrated pixels and light curves on the NASA Advanced Supercomputing Division's Pleiades supercomputer. The SPOC will also search for periodic transit events and generate validation products for the transit-like features in the light curves. All TESS SPOC data products will be archived to the Mikulski Archive for Space Telescopes (MAST).
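A toy sketch of the kind of periodic transit search mentioned above: phase-fold a synthetic light curve at trial periods and pick the period whose deepest phase bin is deepest. This is a crude stand-in for the pipeline's actual detector, and every number here is invented:

```python
import numpy as np

# Synthetic light curve: unit flux with 1% box-shaped transits,
# period 5.0 days, duration 0.25 days, sampled every 0.02 days.
t = np.arange(0.0, 100.0, 0.02)
flux = np.where((t % 5.0) < 0.25, 0.99, 1.0)

def box_search(t, flux, trial_periods, nbins=50):
    """Return the trial period whose deepest phase bin is deepest."""
    best_period, best_depth = None, -np.inf
    for p in trial_periods:
        phase = t % p
        bins = (phase / p * nbins).astype(int)
        # Mean flux per phase bin; at the true period all transit points
        # pile up in the same bins, maximizing the apparent depth.
        sums = np.bincount(bins, weights=flux, minlength=nbins)
        counts = np.bincount(bins, minlength=nbins)
        means = sums[counts > 0] / counts[counts > 0]
        depth = 1.0 - means.min()
        if depth > best_depth:
            best_period, best_depth = p, depth
    return best_period

trials = np.arange(2.0, 8.01, 0.25)
print(box_search(t, flux, trials))  # recovers 5.0
```

Note that folding at an alias such as half the true period dilutes the in-transit bin with out-of-transit points, so the depth score naturally prefers the true period.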

  14. The Pawsey Supercomputer geothermal cooling project

    Science.gov (United States)

    Regenauer-Lieb, K.; Horowitz, F.; Western Australian Geothermal Centre Of Excellence, T.

    2010-12-01

    The Australian Government has funded the Pawsey supercomputer in Perth, Western Australia, providing computational infrastructure intended to support the future operations of the Australian Square Kilometre Array radiotelescope and to boost next-generation computational geosciences in Australia. Supplementary funds have been directed to the development of a geothermal exploration well to research the potential for direct heat use applications at the Pawsey Centre site. Cooling the Pawsey supercomputer may be achieved by geothermal heat exchange rather than by conventional electrical power cooling, thus reducing the carbon footprint of the Pawsey Centre and demonstrating an innovative green technology that is widely applicable in industry and urban centres across the world. The exploration well is scheduled to be completed in 2013, with drilling due to commence in the third quarter of 2011. One year is allocated to finalizing the design of the exploration, monitoring and research well. Success in the geothermal exploration and research program will result in an industrial-scale geothermal cooling facility at the Pawsey Centre, and will provide a world-class student training environment in geothermal energy systems. A similar system is partially funded and in advanced planning to provide base-load air-conditioning for the main campus of the University of Western Australia. Both systems are expected to draw ~80-95 degrees C water from aquifers lying between 2000 and 3000 meters depth from naturally permeable rocks of the Perth sedimentary basin. The geothermal water will be run through absorption chilling devices, which only require heat (as opposed to mechanical work) to power a chilled water stream adequate to meet the cooling requirements. Once the heat has been removed from the geothermal water, licensing issues require the water to be re-injected back into the aquifer system. These systems are intended to demonstrate the feasibility of powering large-scale air

  15. Reactive flow simulations in complex geometries with high-performance supercomputing

    International Nuclear Information System (INIS)

    Rehm, W.; Gerndt, M.; Jahn, W.; Vogelsang, R.; Binninger, B.; Herrmann, M.; Olivier, H.; Weber, M.

    2000-01-01

    In this paper, we report on a modern field code cluster consisting of state-of-the-art reactive Navier-Stokes and reactive Euler solvers that has been developed on vector and parallel supercomputers at Research Center Juelich. This field code cluster is used for hydrogen safety analyses of technical systems, for example, in the field of nuclear reactor safety and conventional hydrogen demonstration plants with fuel cells. Emphasis is put on the assessment of combustion loads, which could result from slow, fast or rapid flames, including transition from deflagration to detonation. As proof tests, the special tools have been validated on specific tasks by comparison of experimental and numerical results, which are in reasonable agreement. (author)

  16. INTEL: Intel based systems move up in supercomputing ranks

    CERN Multimedia

    2002-01-01

    "The TOP500 supercomputer rankings released today at the Supercomputing 2002 conference show a dramatic increase in the number of Intel-based systems being deployed in high-performance computing (HPC) or supercomputing areas" (1/2 page).

  17. World's fastest supercomputer opens up to users

    Science.gov (United States)

    Xin, Ling

    2016-08-01

    China's latest supercomputer - Sunway TaihuLight - has claimed the crown as the world's fastest computer according to the latest TOP500 list, released at the International Supercomputer Conference in Frankfurt in late June.

  18. OpenMP Performance on the Columbia Supercomputer

    Science.gov (United States)

    Haoqiang, Jin; Hood, Robert

    2005-01-01

    This presentation discusses Columbia, one of the world's fastest supercomputers, providing 61 TFLOPs as of October 20, 2004. Columbia was conceived, designed, built, and deployed in just 120 days: a 20-node supercomputer built on proven 512-processor nodes. It is the largest SGI system in the world, with over 10,000 Intel Itanium 2 processors; it provides the largest node size incorporating commodity parts (512 processors) and the largest shared-memory environment (2048 processors), and with 88% efficiency it tops the scalar systems on the Top500 list.

  19. Application of Supercomputer Technologies for Simulation Of Socio-Economic Systems

    Directory of Open Access Journals (Sweden)

    Vladimir Valentinovich Okrepilov

    2015-06-01

    Full Text Available To date, extensive experience has been accumulated in the investigation of problems related to quality, the assessment of management systems, and the modeling of economic system sustainability. These studies have created the basis for a new research area — Economics of Quality. Its tools make it possible to use model simulation to construct mathematical models that adequately reflect the role of quality in the natural, technical, and social regularities governing the functioning of complex socio-economic systems. We firmly believe that the extensive application and development of such models, together with system modeling using supercomputer technologies, will bring research on socio-economic systems to an essentially new level. Moreover, the current research makes a significant contribution to the simulation of multi-agent social systems and, no less important, belongs to the priority areas of science and technology development in our country. This article is devoted to the application of supercomputer technologies in the social sciences, first of all to the technical realization of large-scale agent-focused models (AFM). The essence of this tool is that growing computer power has made it possible to describe the behavior of many separate fragments of a complex system, such as a socio-economic system. The article also reviews the experience of foreign scientists and practitioners in running AFMs on supercomputers, as well as an example of an AFM developed at CEMI RAS; the stages and methods of efficiently mapping the computational kernel of a multi-agent system onto the architecture of a modern supercomputer are analyzed. Experiments based on model simulation, forecasting the population of St. Petersburg under three scenarios as one of the major factors influencing the development of the socio-economic system and the quality of life of the population, are presented in the article.

  20. Use of high performance networks and supercomputers for real-time flight simulation

    Science.gov (United States)

    Cleveland, Jeff I., II

    1993-01-01

    In order to meet the stringent time-critical requirements for real-time man-in-the-loop flight simulation, computer processing operations must be consistent in processing time and be completed in as short a time as possible. These operations include simulation mathematical model computation and data input/output to the simulators. In 1986, in response to increased demands for flight simulation performance, NASA's Langley Research Center (LaRC), working with the contractor, developed extensions to the Computer Automated Measurement and Control (CAMAC) technology which resulted in a factor of ten increase in the effective bandwidth and reduced latency of modules necessary for simulator communication. This technology extension is being used by more than 80 leading technological developers in the United States, Canada, and Europe. Included among the commercial applications are nuclear process control, power grid analysis, process monitoring, real-time simulation, and radar data acquisition. Personnel at LaRC are completing the development of the use of supercomputers for mathematical model computation to support real-time flight simulation. This includes the development of a real-time operating system and development of specialized software and hardware for the simulator network. This paper describes the data acquisition technology and the development of supercomputing for flight simulation.

  1. Advanced parallel processing with supercomputer architectures

    International Nuclear Information System (INIS)

    Hwang, K.

    1987-01-01

    This paper investigates advanced parallel processing techniques and innovative hardware/software architectures that can be applied to boost the performance of supercomputers. Critical issues on architectural choices, parallel languages, compiling techniques, resource management, concurrency control, programming environment, parallel algorithms, and performance enhancement methods are examined and the best answers are presented. The authors cover advanced processing techniques suitable for supercomputers, high-end mainframes, minisupers, and array processors. The coverage emphasizes vectorization, multitasking, multiprocessing, and distributed computing. In order to achieve these operation modes, parallel languages, smart compilers, synchronization mechanisms, load balancing methods, mapping parallel algorithms, operating system functions, application library, and multidiscipline interactions are investigated to ensure high performance. At the end, they assess the potentials of optical and neural technologies for developing future supercomputers

  2. High Performance Computing in Science and Engineering '02 : Transactions of the High Performance Computing Center

    CERN Document Server

    Jäger, Willi

    2003-01-01

    This book presents the state-of-the-art in modeling and simulation on supercomputers. Leading German research groups present their results achieved on high-end systems of the High Performance Computing Center Stuttgart (HLRS) for the year 2002. Reports cover all fields of supercomputing simulation ranging from computational fluid dynamics to computer science. Special emphasis is given to industrially relevant applications. Moreover, by presenting results for both vector systems and microprocessor-based systems, the book makes it possible to compare the performance levels and usability of a variety of supercomputer architectures. It therefore becomes an indispensable guidebook for assessing the impact of the Japanese Earth Simulator project on supercomputing in the years to come.

  3. Visualization at supercomputing centers: the tale of little big iron and the three skinny guys.

    Science.gov (United States)

    Bethel, E W; van Rosendale, J; Southard, D; Gaither, K; Childs, H; Brugger, E; Ahern, S

    2011-01-01

    Supercomputing centers are unique resources that aim to enable scientific knowledge discovery by employing large computational resources-the "Big Iron." Design, acquisition, installation, and management of the Big Iron are carefully planned and monitored. Because these Big Iron systems produce a tsunami of data, it's natural to colocate the visualization and analysis infrastructure. This infrastructure consists of hardware (Little Iron) and staff (Skinny Guys). Our collective experience suggests that design, acquisition, installation, and management of the Little Iron and Skinny Guys doesn't receive the same level of treatment as that of the Big Iron. This article explores the following questions about the Little Iron: How should we size the Little Iron to adequately support visualization and analysis of data coming off the Big Iron? What sort of capabilities must it have? Related questions concern the size of visualization support staff: How big should a visualization program be-that is, how many Skinny Guys should it have? What should the staff do? How much of the visualization should be provided as a support service, and how much should applications scientists be expected to do on their own?

  4. Guide to dataflow supercomputing basic concepts, case studies, and a detailed example

    CERN Document Server

    Milutinovic, Veljko; Trifunovic, Nemanja; Giorgi, Roberto

    2015-01-01

    This unique text/reference describes an exciting and novel approach to supercomputing in the DataFlow paradigm. The major advantages and applications of this approach are clearly described, and a detailed explanation of the programming model is provided using simple yet effective examples. The work is developed from a series of lecture courses taught by the authors in more than 40 universities across more than 20 countries, and from research carried out by Maxeler Technologies, Inc. Topics and features: presents a thorough introduction to DataFlow supercomputing for big data problems; revie

  5. TOP500 Supercomputers for November 2004

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2004-11-08

    24th Edition of TOP500 List of World's Fastest Supercomputers Released: DOE/IBM BlueGene/L and NASA/SGI's Columbia gain Top Positions MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a closely watched event in the world of high-performance computing, the 24th edition of the TOP500 list of the world's fastest supercomputers was released today (November 8, 2004) at the SC2004 Conference in Pittsburgh, Pa.

  6. Status reports of supercomputing astrophysics in Japan

    International Nuclear Information System (INIS)

    Nakamura, Takashi; Nagasawa, Mikio

    1990-01-01

    The Workshop on Supercomputing Astrophysics was held at the National Laboratory for High Energy Physics (KEK, Tsukuba) from August 31 to September 2, 1989. More than 40 physicists and astronomers attended and discussed many topics in an informal atmosphere. The main purpose of this workshop was to focus on theoretical activities in computational astrophysics in Japan. It also aimed to promote effective collaboration among numerical experimentalists working on supercomputing techniques. The various subjects of the presented papers, covering hydrodynamics, plasma physics, gravitating systems, radiative transfer and general relativity, are all stimulating. In fact, these numerical calculations have now become possible in Japan owing to the power of Japanese supercomputers such as the HITAC S820, Fujitsu VP400E and NEC SX-2. (J.P.N.)

  7. Frequently updated noise threat maps created with use of supercomputing grid

    Directory of Open Access Journals (Sweden)

    Szczodrak Maciej

    2014-09-01

    Full Text Available Innovative supercomputing grid services devoted to noise threat evaluation are presented. The services described in this paper concern two issues: the first is related to noise mapping, while the second focuses on assessment of the noise dose and its influence on the human hearing system. The discussed services were developed within the PL-Grid Plus Infrastructure, which aggregates Polish academic supercomputer centers. Selected experimental results achieved through the use of the proposed services are presented. The assessment of environmental noise threats includes creation of noise maps using either offline or online data, acquired through a grid of monitoring stations. A concept of estimating the source model parameters based on the measured sound level, for the purpose of creating frequently updated noise maps, is presented. Connecting the noise mapping grid service with a distributed sensor network makes it possible to automatically update noise maps for a specified time period. Moreover, a unique attribute of the developed software is the estimation of the auditory effects evoked by exposure to noise. The estimation method uses a modified psychoacoustic model of hearing and is based on the calculated noise level values and the given exposure period. Potential use scenarios of the grid services for research or educational purposes are introduced. Presentation of the results of predicted hearing threshold shift caused by exposure to excessive noise can raise public awareness of noise threats.

  8. Comments on the parallelization efficiency of the Sunway TaihuLight supercomputer

    OpenAIRE

    Végh, János

    2016-01-01

    In the world of supercomputers, the large number of processors requires minimizing the inefficiencies of parallelization, which appear as a sequential part of the program from the point of view of Amdahl's law. The recently suggested new figure of merit is applied to the recently presented supercomputer, and the timeline of "Top 500" supercomputers is scrutinized using this metric. It is demonstrated that, in addition to the computing performance and power consumption, the new supercomputer i...
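    The law invoked in this abstract is easy to make concrete. The sketch below is illustrative only (it is not the paper's proposed figure of merit, and the function name is invented): it computes the Amdahl speedup bound 1 / (s + (1 - s)/n) for a serial fraction s on n processors, showing why even a tiny sequential part dominates at supercomputer scale.

    ```python
    def amdahl_speedup(serial_fraction, n_processors):
        # Amdahl's law: overall speedup on n processors when a fraction s
        # of the work is inherently sequential and cannot be parallelized.
        s = serial_fraction
        return 1.0 / (s + (1.0 - s) / n_processors)

    # With ten million cores, the speedup saturates near 1/s:
    for s in (0.01, 0.001, 0.0001):
        print(s, round(amdahl_speedup(s, 10_000_000), 1))
    ```

    For example, a program that is 99.9% parallel still cannot run much more than about 1000x faster, no matter how many processors are added.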

  9. The ETA10 supercomputer system

    International Nuclear Information System (INIS)

    Swanson, C.D.

    1987-01-01

    The ETA Systems, Inc. ETA 10 is a next-generation supercomputer featuring multiprocessing, a large hierarchical memory system, high performance input/output, and network support for both batch and interactive processing. Advanced technology used in the ETA 10 includes liquid nitrogen cooled CMOS logic with 20,000 gates per chip, a single printed circuit board for each CPU, and high density static and dynamic MOS memory chips. Software for the ETA 10 includes an underlying kernel that supports multiple user environments, a new ETA FORTRAN compiler with an advanced automatic vectorizer, a multitasking library and debugging tools. Possible developments for future supercomputers from ETA Systems are discussed. (orig.)

  10. Development of seismic tomography software for hybrid supercomputers

    Science.gov (United States)

    Nikitin, Alexandr; Serdyukov, Alexandr; Duchkov, Anton

    2015-04-01

    Seismic tomography is a technique used for computing velocity model of geologic structure from first arrival travel times of seismic waves. The technique is used in processing of regional and global seismic data, in seismic exploration for prospecting and exploration of mineral and hydrocarbon deposits, and in seismic engineering for monitoring the condition of engineering structures and the surrounding host medium. As a consequence of development of seismic monitoring systems and increasing volume of seismic data, there is a growing need for new, more effective computational algorithms for use in seismic tomography applications with improved performance, accuracy and resolution. To achieve this goal, it is necessary to use modern high performance computing systems, such as supercomputers with hybrid architecture that use not only CPUs, but also accelerators and co-processors for computation. The goal of this research is the development of parallel seismic tomography algorithms and software package for such systems, to be used in processing of large volumes of seismic data (hundreds of gigabytes and more). These algorithms and software package will be optimized for the most common computing devices used in modern hybrid supercomputers, such as Intel Xeon CPUs, NVIDIA Tesla accelerators and Intel Xeon Phi co-processors. In this work, the following general scheme of seismic tomography is utilized. Using the eikonal equation solver, arrival times of seismic waves are computed based on assumed velocity model of geologic structure being analyzed. In order to solve the linearized inverse problem, tomographic matrix is computed that connects model adjustments with travel time residuals, and the resulting system of linear equations is regularized and solved to adjust the model. The effectiveness of parallel implementations of existing algorithms on target architectures is considered. During the first stage of this work, algorithms were developed for execution on
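    The general inversion scheme this abstract outlines (forward-modeled travel times, a tomographic matrix connecting model adjustments with travel-time residuals, and a regularized linear solve) can be sketched in a few lines. This is a toy damped least-squares update, not the authors' software package; the name `tomo_update` and the tiny two-cell example are invented for illustration.

    ```python
    import numpy as np

    def tomo_update(G, residuals, damping=1.0):
        # One damped least-squares update for linearized travel-time tomography:
        # solve (G^T G + lambda^2 I) dm = G^T dt for the model adjustment dm,
        # where row i of G gives the path length of ray i in each model cell
        # and dt holds the observed-minus-predicted travel times.
        n = G.shape[1]
        A = G.T @ G + damping**2 * np.eye(n)
        return np.linalg.solve(A, G.T @ residuals)

    # Toy example: three rays crossing two cells.
    G = np.array([[1.0, 0.0],
                  [0.0, 1.0],
                  [1.0, 1.0]])
    true_dm = np.array([0.10, -0.20])   # true slowness perturbations
    dt = G @ true_dm                    # synthetic travel-time residuals
    print(tomo_update(G, dt, damping=1e-6))
    ```

    With weak damping and a well-conditioned matrix the update recovers the synthetic perturbations; in real applications the damping parameter trades off data fit against model smoothness.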

  11. FPS scientific and supercomputers computers in chemistry

    International Nuclear Information System (INIS)

    Curington, I.J.

    1987-01-01

    FPS Array Processors, scientific computers, and highly parallel supercomputers are used in nearly all aspects of compute-intensive computational chemistry. A survey is made of work utilizing this equipment, both published and current research. The relationship of the computer architecture to computational chemistry is discussed, with specific reference to Molecular Dynamics, Quantum Monte Carlo simulations, and Molecular Graphics applications. Recent installations of the FPS T-Series are highlighted, and examples of Molecular Graphics programs running on the FPS-5000 are shown

  12. Computational Science with the Titan Supercomputer: Early Outcomes and Lessons Learned

    Science.gov (United States)

    Wells, Jack

    2014-03-01

    Modeling and simulation with petascale computing has supercharged the process of innovation and understanding, dramatically accelerating time-to-insight and time-to-discovery. This presentation will focus on early outcomes from the Titan supercomputer at the Oak Ridge National Laboratory. Titan has over 18,000 hybrid compute nodes consisting of both CPUs and GPUs. In this presentation, I will discuss the lessons we have learned in deploying Titan and preparing applications to move from conventional CPU architectures to a hybrid machine. I will present early results of materials applications running on Titan and the implications for the research community as we prepare for exascale supercomputers in the next decade. Lastly, I will provide an overview of user programs at the Oak Ridge Leadership Computing Facility with specific information on how researchers may apply for allocations of computing resources. This research used resources of the Oak Ridge Leadership Computing Facility at the Oak Ridge National Laboratory, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC05-00OR22725.

  13. Building a High Performance Computing Infrastructure for Novosibirsk Scientific Center

    International Nuclear Information System (INIS)

    Adakin, A; Chubarov, D; Nikultsev, V; Belov, S; Kaplin, V; Sukharev, A; Zaytsev, A; Kalyuzhny, V; Kuchin, N; Lomakin, S

    2011-01-01

    Novosibirsk Scientific Center (NSC), also known worldwide as Akademgorodok, is one of the largest Russian scientific centers, hosting Novosibirsk State University (NSU) and more than 35 research organizations of the Siberian Branch of the Russian Academy of Sciences, including Budker Institute of Nuclear Physics (BINP), Institute of Computational Technologies (ICT), and Institute of Computational Mathematics and Mathematical Geophysics (ICM and MG). Since each institute has specific requirements on the architecture of the computing farms involved in its research field, there are currently several computing facilities hosted by NSC institutes, each optimized for a particular set of tasks, of which the largest are the NSU Supercomputer Center, the Siberian Supercomputer Center (ICM and MG), and a Grid Computing Facility of BINP. Recently a dedicated optical network with an initial bandwidth of 10 Gbps connecting these three facilities was built in order to make it possible to share the computing resources among the research communities of the participating institutes, thus providing a common platform for building the computing infrastructure for various scientific projects. Unification of the computing infrastructure is achieved by extensive use of virtualization technologies based on the XEN and KVM platforms. The solution implemented was tested thoroughly within the computing environment of the KEDR detector experiment, which is being carried out at BINP, and is foreseen to be applied to the use cases of other HEP experiments in the near future.

  14. Supercomputer debugging workshop 1991 proceedings

    Energy Technology Data Exchange (ETDEWEB)

    Brown, J.

    1991-01-01

    This report discusses the following topics on supercomputer debugging: distributed debugging; user interface to debugging tools and standards; debugging optimized codes; debugging parallel codes; and debugger performance and interface as analysis tools. (LSP)

  16. The ETA systems plans for supercomputers

    International Nuclear Information System (INIS)

    Swanson, C.D.

    1987-01-01

    The ETA10, from ETA Systems, is a class VII supercomputer featuring multiprocessing, a large hierarchical memory system, high performance input/output, and network support for both batch and interactive processing. Advanced technology used in the ETA 10 includes liquid nitrogen cooled CMOS logic with 20,000 gates per chip, a single printed circuit board for each CPU, and high density static and dynamic MOS memory chips. Software for the ETA 10 includes an underlying kernel that supports multiple user environments, a new ETA FORTRAN compiler with an advanced automatic vectorizer, a multitasking library and debugging tools. Possible developments for future supercomputers from ETA Systems are discussed

  17. Intelligent Personal Supercomputer for Solving Scientific and Technical Problems

    Directory of Open Access Journals (Sweden)

    Khimich, O.M.

    2016-09-01

    Full Text Available A new domestic intelligent personal supercomputer of hybrid architecture, Inparkom_pg, was developed for the mathematical modeling of processes in the defense industry, engineering, construction, etc. Intelligent software for the automatic investigation and solution of computational mathematics problems with approximate data of different structures was designed. Applied software providing mathematical modeling of problems in construction, welding and filtration processes was implemented.

  18. Direct exploitation of a top 500 Supercomputer for Analysis of CMS Data

    International Nuclear Information System (INIS)

    Cabrillo, I; Cabellos, L; Marco, J; Fernandez, J; Gonzalez, I

    2014-01-01

    The Altamira Supercomputer, hosted at the Instituto de Fisica de Cantabria (IFCA), entered operation in summer 2012. Its last-generation FDR InfiniBand network, used for message passing in parallel jobs, also supports the connection to General Parallel File System (GPFS) servers, enabling efficient simultaneous processing of multiple data-demanding jobs. Sharing a common GPFS system and a single LDAP-based identification with the existing Grid clusters at IFCA allows CMS researchers to exploit the large instantaneous capacity of this supercomputer to execute analysis jobs. The detailed experience of this opportunistic use for skimming and final analysis of CMS 2012 data for a specific physics channel, resulting in an order-of-magnitude reduction of the waiting time, is presented.

  19. Wavelet transform-vector quantization compression of supercomputer ocean model simulation output

    Energy Technology Data Exchange (ETDEWEB)

    Bradley, J N; Brislawn, C M

    1992-11-12

    We describe a new procedure for efficient compression of digital information for storage and transmission purposes. The algorithm involves a discrete wavelet transform subband decomposition of the data set, followed by vector quantization of the wavelet transform coefficients using application-specific vector quantizers. The new vector quantizer design procedure optimizes the assignment of both memory resources and vector dimensions to the transform subbands by minimizing an exponential rate-distortion functional subject to constraints on both overall bit-rate and encoder complexity. The wavelet-vector quantization method, which originates in digital image compression, is applicable to the compression of other multidimensional data sets possessing some degree of smoothness. In this paper we discuss the use of this technique for compressing the output of supercomputer simulations of global climate models. The data presented here come from Semtner-Chervin global ocean models run at the National Center for Atmospheric Research and at the Los Alamos Advanced Computing Laboratory.
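    The two stages of the pipeline described above, a wavelet subband decomposition followed by vector quantization of the coefficients, can be illustrated in miniature. This sketch is not the authors' codec: it uses a single-level Haar transform and a fixed toy codebook (the real method uses optimized, subband-specific quantizers), and the function names are invented.

    ```python
    import numpy as np

    def haar_1level(x):
        # One level of the orthonormal Haar wavelet transform of an
        # even-length 1-D signal: approximation (low-pass) and
        # detail (high-pass) subbands.
        a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
        d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
        return a, d

    def vq_encode(vectors, codebook):
        # Vector quantization: map each input vector to the index of its
        # nearest codeword under the Euclidean metric.
        dists = np.linalg.norm(vectors[:, None, :] - codebook[None, :, :], axis=2)
        return dists.argmin(axis=1)

    x = np.array([4.0, 4.0, 8.0, 0.0, 2.0, 2.0, 5.0, 3.0])
    a, d = haar_1level(x)                 # subband decomposition
    detail_vecs = d.reshape(-1, 2)        # group detail coefficients into vectors
    codebook = np.array([[0.0, 0.0],      # toy 3-codeword codebook
                         [4.0, 0.0],
                         [0.0, 4.0]])
    print(vq_encode(detail_vecs, codebook))
    ```

    Only the codeword indices (plus the codebook and the coarse approximation subband) need to be stored or transmitted, which is where the compression comes from.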

  20. PNNL supercomputer to become largest computing resource on the Grid

    CERN Multimedia

    2002-01-01

    Hewlett Packard announced that the US DOE Pacific Northwest National Laboratory will connect a 9.3-teraflop HP supercomputer to the DOE Science Grid. This will be the largest supercomputer attached to a computer grid anywhere in the world (1 page).

  1. Supercomputer and cluster performance modeling and analysis efforts:2004-2006.

    Energy Technology Data Exchange (ETDEWEB)

    Sturtevant, Judith E.; Ganti, Anand; Meyer, Harold (Hal) Edward; Stevenson, Joel O.; Benner, Robert E., Jr. (.,; .); Goudy, Susan Phelps; Doerfler, Douglas W.; Domino, Stefan Paul; Taylor, Mark A.; Malins, Robert Joseph; Scott, Ryan T.; Barnette, Daniel Wayne; Rajan, Mahesh; Ang, James Alfred; Black, Amalia Rebecca; Laub, Thomas William; Vaughan, Courtenay Thomas; Franke, Brian Claude

    2007-02-01

    This report describes efforts by the Performance Modeling and Analysis Team to investigate performance characteristics of Sandia's engineering and scientific applications on the ASC capability and advanced architecture supercomputers, and Sandia's capacity Linux clusters. Efforts to model various aspects of these computers are also discussed. The goals of these efforts are to quantify and compare Sandia's supercomputer and cluster performance characteristics; to reveal strengths and weaknesses in such systems; and to predict performance characteristics of, and provide guidelines for, future acquisitions and follow-on systems. Described herein are the results obtained from running benchmarks and applications to extract performance characteristics and comparisons, as well as modeling efforts, obtained during the time period 2004-2006. The format of the report, with hypertext links to numerous additional documents, purposefully minimizes the document size needed to disseminate the extensive results from our research.

  2. Cooperative visualization and simulation in a supercomputer environment

    International Nuclear Information System (INIS)

    Ruehle, R.; Lang, U.; Wierse, A.

    1993-01-01

    The article takes a closer look at the requirements imposed by the idea of integrating all the components into a homogeneous software environment. To this end, several methods for the distribution of applications depending on problem type are discussed. The methods currently available at the University of Stuttgart Computer Center for the distribution of applications are further explained. Finally, the aims and characteristics of a European-sponsored project called PAGEIN, which fits perfectly into the line of developments at RUS, are explained. The aim of the project is to experiment with future cooperative working modes of aerospace scientists in a high speed distributed supercomputing environment. Project results will have an impact on the development of real future scientific application environments. (orig./DG)

  3. The TeraGyroid Experiment – Supercomputing 2003

    Directory of Open Access Journals (Sweden)

    R.J. Blake

    2005-01-01

    Full Text Available Amphiphiles are molecules with hydrophobic tails and hydrophilic heads. When dispersed in solvents, they self assemble into complex mesophases including the beautiful cubic gyroid phase. The goal of the TeraGyroid experiment was to study defect pathways and dynamics in these gyroids. The UK's supercomputing and the USA's TeraGrid facilities were coupled together, through a dedicated high-speed network, into a single computational Grid for research work that peaked around the Supercomputing 2003 conference. The gyroids were modeled using lattice Boltzmann methods, with parameter spaces explored using many 128³-grid-point simulations, this data being used to inform the world's largest three-dimensional time-dependent simulation, with 1024³ grid points. The experiment generated some 2 TBytes of useful data. In terms of Grid technology the project demonstrated the migration of simulations (using Globus middleware) to and fro across the Atlantic, exploiting the availability of resources. Integration of the systems accelerated the time to insight. Distributed visualisation of the output datasets enabled the parameter space of the interactions within the complex fluid to be explored from a number of sites, informed by discourse over the Access Grid. The project was sponsored by EPSRC (UK) and NSF (USA), with trans-Atlantic optical bandwidth provided by British Telecommunications.

  4. Supercomputers Of The Future

    Science.gov (United States)

    Peterson, Victor L.; Kim, John; Holst, Terry L.; Deiwert, George S.; Cooper, David M.; Watson, Andrew B.; Bailey, F. Ron

    1992-01-01

    Report evaluates supercomputer needs of five key disciplines: turbulence physics, aerodynamics, aerothermodynamics, chemistry, and mathematical modeling of human vision. Predicts these fields will require computer speeds greater than 10¹⁸ floating-point operations per second (FLOPS) and memory capacities greater than 10¹⁵ words. Also, new parallel computer architectures and new structured numerical methods will make the necessary speed and capacity available.

  5. NASA Advanced Supercomputing Facility Expansion

    Science.gov (United States)

    Thigpen, William W.

    2017-01-01

    The NASA Advanced Supercomputing (NAS) Division enables advances in high-end computing technologies and in modeling and simulation methods to tackle some of the toughest science and engineering challenges facing NASA today. The name "NAS" has long been associated with leadership and innovation throughout the high-end computing (HEC) community. We play a significant role in shaping HEC standards and paradigms, and provide leadership in the areas of large-scale InfiniBand fabrics, Lustre open-source filesystems, and hyperwall technologies. We provide an integrated high-end computing environment to accelerate NASA missions and make revolutionary advances in science. Pleiades, a petaflop-scale supercomputer, is used by scientists throughout the U.S. to support NASA missions, and is ranked among the most powerful systems in the world. One of our key focus areas is in modeling and simulation to support NASA's real-world engineering applications and make fundamental advances in modeling and simulation methods.

  6. ATLAS Software Installation on Supercomputers

    CERN Document Server

    Undrus, Alexander; The ATLAS collaboration

    2018-01-01

    PowerPC and high performance computers (HPC) are important resources for computing in the ATLAS experiment. Future LHC data processing will require more resources than Grid computing, currently using approximately 100,000 cores at well over 100 sites, can provide. Supercomputers are extremely powerful, as they join together the resources of hundreds of thousands of CPUs. However, their architectures have different instruction sets. ATLAS binary software distributions for x86 chipsets do not fit these architectures, as emulation of these chipsets results in huge performance loss. This presentation describes the methodology of ATLAS software installation from source code on supercomputers. The installation procedure includes downloading the ATLAS code base as well as the source of about 50 external packages, such as ROOT and Geant4, followed by compilation, and rigorous unit and integration testing. The presentation reports the application of this procedure at Titan HPC and Summit PowerPC at Oak Ridge Computin...

  7. JINR supercomputer of the module type for event parallel analysis

    International Nuclear Information System (INIS)

    Kolpakov, I.F.; Senner, A.E.; Smirnov, V.A.

    1987-01-01

    A model of a supercomputer performing 50 million operations per second is suggested. Its realization would allow one to solve JINR data analysis problems for large spectrometers (in particular, for the DELPHI collaboration). The suggested modular supercomputer is based on commercially available 32-bit microprocessors with a processing rate of about 1 MFLOPS. The processors are combined by means of VME standard buses. A MicroVAX II host computer organizes the operation of the system. Data input and output are realized via the MicroVAX II peripherals. Users' software is based on FORTRAN-77. The supercomputer is connected to a JINR network port, giving all JINR users access to the suggested system.

  8. Supercomputers and quantum field theory

    International Nuclear Information System (INIS)

    Creutz, M.

    1985-01-01

    A review is given of why recent simulations of lattice gauge theories have resulted in substantial demands from particle theorists for supercomputer time. These calculations have yielded first principle results on non-perturbative aspects of the strong interactions. An algorithm for simulating dynamical quark fields is discussed. 14 refs

  9. Evolution of the Virtualized HPC Infrastructure of Novosibirsk Scientific Center

    International Nuclear Information System (INIS)

    Adakin, A; Chubarov, D; Nikultsev, V; Anisenkov, A; Belov, S; Kaplin, V; Korol, A; Skovpen, K; Sukharev, A; Zaytsev, A; Kalyuzhny, V; Kuchin, N; Lomakin, S

    2012-01-01

    Novosibirsk Scientific Center (NSC), also known worldwide as Akademgorodok, is one of the largest Russian scientific centers, hosting Novosibirsk State University (NSU) and more than 35 research organizations of the Siberian Branch of the Russian Academy of Sciences, including Budker Institute of Nuclear Physics (BINP), Institute of Computational Technologies, and Institute of Computational Mathematics and Mathematical Geophysics (ICM and MG). Since each institute has specific requirements on the architecture of the computing farms involved in its research field, several computing facilities are currently hosted by NSC institutes, each optimized for a particular set of tasks; the largest of these are the NSU Supercomputer Center, Siberian Supercomputer Center (ICM and MG), and a Grid Computing Facility of BINP. A dedicated optical network with an initial bandwidth of 10 Gb/s connecting these three facilities was built to make it possible to share computing resources among the research communities, thus increasing the efficiency of operating the existing computing facilities and offering a common platform for building the computing infrastructure for future scientific projects. Unification of the computing infrastructure is achieved by extensive use of virtualization technology based on the XEN and KVM platforms. This contribution gives a thorough review of the present status and future development prospects for the NSC virtualized computing infrastructure and the experience gained while using it for running production data analysis jobs related to HEP experiments carried out at BINP, especially the KEDR detector experiment at the VEPP-4M electron-positron collider.

  10. NREL Receives Editors' Choice Awards for Supercomputer Research

    Science.gov (United States)

    NREL received Editors' Choice Awards for the Peregrine high-performance computer and the groundbreaking research it made possible.

  11. Adventures in supercomputing: An innovative program for high school teachers

    Energy Technology Data Exchange (ETDEWEB)

    Oliver, C.E.; Hicks, H.R.; Summers, B.G. [Oak Ridge National Lab., TN (United States); Staten, D.G. [Wartburg Central High School, TN (United States)

    1994-12-31

    Within the realm of education, seldom does an innovative program become available with the potential to change an educator's teaching methodology. Adventures in Supercomputing (AiS), sponsored by the U.S. Department of Energy (DOE), is such a program. It is a program for high school teachers that changes the teacher paradigm from a teacher-directed approach of teaching to a student-centered approach. "A student-centered classroom offers better opportunities for development of internal motivation, planning skills, goal setting and perseverance than does the traditional teacher-directed mode". Not only is the process of teaching changed, but the cross-curricula integration within the AiS materials is remarkable. Written from a teacher's perspective, this paper will describe the AiS program and its effects on teachers and students, primarily at Wartburg Central High School, in Wartburg, Tennessee. The AiS program in Tennessee is sponsored by Oak Ridge National Laboratory (ORNL).

  12. Design of multiple sequence alignment algorithms on parallel, distributed memory supercomputers.

    Science.gov (United States)

    Church, Philip C; Goscinski, Andrzej; Holt, Kathryn; Inouye, Michael; Ghoting, Amol; Makarychev, Konstantin; Reumann, Matthias

    2011-01-01

    The challenge of comparing two or more genomes that have undergone recombination and substantial amounts of segmental loss and gain has recently been addressed for small numbers of genomes. However, datasets of hundreds of genomes are now common and their sizes will only increase in the future. Multiple sequence alignment of hundreds of genomes remains an intractable problem due to quadratic increases in compute time and memory footprint. To date, most alignment algorithms are designed for commodity clusters without parallelism. Hence, we propose the design of a multiple sequence alignment algorithm on massively parallel, distributed memory supercomputers to enable research into comparative genomics on large data sets. Following the methodology of the sequential progressiveMauve algorithm, we design data structures including sequences and sorted k-mer lists on the IBM Blue Gene/P supercomputer (BG/P). Preliminary results show that we can reduce the memory footprint so that we can potentially align over 250 bacterial genomes on a single BG/P compute node. We verify our results on a dataset of E. coli, Shigella, and S. pneumoniae genomes. Our implementation returns results matching those of the original algorithm but in 1/2 the time and with 1/4 the memory footprint for scaffold building. In this study, we have laid the basis for multiple sequence alignment of large-scale datasets on a massively parallel, distributed memory supercomputer, thus enabling comparison of hundreds instead of a few genome sequences within reasonable time.
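
    The sorted k-mer lists mentioned above can be illustrated with a minimal sketch. This is not the progressiveMauve implementation; the function names and the choice of k are illustrative assumptions, showing only how sorted k-mer lists support lookup of shared seeds between sequences:

```python
def kmer_list(sequence, k=8):
    """Return a sorted list of (k-mer, position) pairs for one sequence.

    Sorting the list is what enables fast (binary-search or merge-based)
    lookup of k-mers shared between genomes.
    """
    pairs = [(sequence[i:i + k], i) for i in range(len(sequence) - k + 1)]
    pairs.sort()
    return pairs


def shared_kmers(seq_a, seq_b, k=8):
    """Find k-mers of seq_a also present in seq_b, a starting point for
    anchoring an alignment between the two sequences."""
    kmers_b = {kmer for kmer, _ in kmer_list(seq_b, k)}
    return [(kmer, pos) for kmer, pos in kmer_list(seq_a, k) if kmer in kmers_b]
```

    In a distributed-memory setting, each compute node would hold the k-mer lists for its assigned sequences and exchange only the compact sorted lists, which is one way the memory footprint can be kept small.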

  13. Computational plasma physics and supercomputers

    International Nuclear Information System (INIS)

    Killeen, J.; McNamara, B.

    1984-09-01

    The supercomputers of the 1980s are introduced. They are 10 to 100 times more powerful than today's machines. The range of physics modeling in the fusion program is outlined. New machine architectures will influence particular codes, but parallel processing poses new coding difficulties. Increasing realism in simulations will require better numerics and more elaborate mathematics

  14. Mistral Supercomputer Job History Analysis

    OpenAIRE

    Zasadziński, Michał; Muntés-Mulero, Victor; Solé, Marc; Ludwig, Thomas

    2018-01-01

    In this technical report, we show insights and results of operational data analysis from petascale supercomputer Mistral, which is ranked as 42nd most powerful in the world as of January 2018. Data sources include hardware monitoring data, job scheduler history, topology, and hardware information. We explore job state sequences, spatial distribution, and electric power patterns.

  15. Interactive real-time nuclear plant simulations on a UNIX based supercomputer

    International Nuclear Information System (INIS)

    Behling, S.R.

    1990-01-01

    Interactive real-time nuclear plant simulations are critically important to train nuclear power plant engineers and operators. In addition, real-time simulations can be used to test the validity and timing of plant technical specifications and operational procedures. To accurately and confidently simulate a nuclear power plant transient in real-time, sufficient computer resources must be available. Since some important transients cannot be simulated using preprogrammed responses or non-physical models, commonly used simulation techniques may not be adequate. However, the power of a supercomputer allows one to accurately calculate the behavior of nuclear power plants even during very complex transients. Many of these transients can be calculated in real-time or quicker on the fastest supercomputers. The concept of running interactive real-time nuclear power plant transients on a supercomputer has been tested. This paper describes the architecture of the simulation program, the techniques used to establish real-time synchronization, and other issues related to the use of supercomputers in a new and potentially very important area. (author)

  16. Research and development of grid computing technology in center for computational science and e-systems of Japan Atomic Energy Agency

    International Nuclear Information System (INIS)

    Suzuki, Yoshio

    2007-01-01

    Center for Computational Science and E-systems of the Japan Atomic Energy Agency (CCSE/JAEA) has carried out R and D of grid computing technology. Since 1995, R and D efforts, first to realize computational assistance for researchers, called Seamless Thinking Aid (STA), and then to share intellectual resources, called Information Technology Based Laboratory (ITBL), have been conducted, leading to the construction of an intelligent infrastructure for atomic energy research called Atomic Energy Grid InfraStructure (AEGIS) under the Japanese national project 'Development and Applications of Advanced High-Performance Supercomputer'. It aims to enable synchronization of three themes: 1) Computer-Aided Research and Development (CARD) to realize an environment for STA, 2) Computer-Aided Engineering (CAEN) to establish Multi Experimental Tools (MEXT), and 3) Computer-Aided Science (CASC) to promote Atomic Energy Research and Investigation (AERI). This article reviews achievements in R and D of grid computing technology obtained so far. (T. Tanaka)

  17. Porting Ordinary Applications to Blue Gene/Q Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Maheshwari, Ketan C.; Wozniak, Justin M.; Armstrong, Timothy; Katz, Daniel S.; Binkowski, T. Andrew; Zhong, Xiaoliang; Heinonen, Olle; Karpeyev, Dmitry; Wilde, Michael

    2015-08-31

    Efficiently porting ordinary applications to Blue Gene/Q supercomputers is a significant challenge. Codes are often originally developed without considering advanced architectures and related tool chains. Science needs frequently lead users to want to run large numbers of relatively small jobs (often called many-task computing, an ensemble, or a workflow), which can conflict with supercomputer configurations. In this paper, we discuss techniques developed to execute ordinary applications on leadership-class supercomputers. We use the high-performance Swift parallel scripting framework and build two workflow execution techniques: sub-jobs and main-wrap. The sub-jobs technique, built on top of the IBM Blue Gene/Q resource manager Cobalt's sub-block jobs, lets users submit multiple, independent, repeated smaller jobs within a single larger resource block. The main-wrap technique is a scheme that enables C/C++ programs to be defined as functions that are wrapped by a high-performance Swift wrapper and invoked from a Swift script. We discuss the needs, benefits, technicalities, and current limitations of these techniques. We further discuss the real-world science enabled by these techniques and the results obtained.

  18. Atmosphere of Freedom: Sixty Years at the NASA Ames Research Center

    Science.gov (United States)

    Bugos, Glenn E.; Launius, Roger (Technical Monitor)

    2000-01-01

    Throughout Ames History, four themes prevail: a commitment to hiring the best people; cutting-edge research tools; project management that gets things done faster, better and cheaper; and outstanding research efforts that serve the scientific professions and the nation. More than any other NASA Center, Ames remains shaped by its origins in the NACA (National Advisory Committee for Aeronautics). Not that its missions remain the same. Sure, Ames still houses the world's greatest collection of wind tunnels and simulation facilities, its aerodynamicists remain among the best in the world, and pilots and engineers still come for advice on how to build better aircraft. But that is increasingly part of Ames' past. Ames people have embraced two other missions for its future. First, intelligent systems and information science will help NASA use new tools in supercomputing, networking, telepresence and robotics. Second, astrobiology will explore lore the prospects for life on Earth and beyond. Both new missions leverage Ames long-standing expertise in computation and in the life sciences, as well as its relations with the computing and biotechnology firms working in the Silicon Valley community that has sprung up around the Center. Rather than the NACA missions, it is the NACA culture that still permeates Ames. The Ames way of research management privileges the scientists and engineers working in the laboratories. They work in an atmosphere of freedom, laced with the expectation of integrity and responsibility. Ames researchers are free to define their research goals and define how they contribute to the national good. They are expected to keep their fingers on the pulse of their disciplines, to be ambitious yet frugal in organizing their efforts, and to always test their theories in the laboratory or in the field. Ames' leadership ranks, traditionally, are cultivated within this scientific community. 
Rather than manage and supervise these researchers, Ames leadership merely

  19. Extracting the Textual and Temporal Structure of Supercomputing Logs

    Energy Technology Data Exchange (ETDEWEB)

    Jain, S; Singh, I; Chandra, A; Zhang, Z; Bronevetsky, G

    2009-05-26

    Supercomputers are prone to frequent faults that adversely affect their performance, reliability and functionality. System logs collected on these systems are a valuable resource of information about their operational status and health. However, their massive size, complexity, and lack of standard format make it difficult to automatically extract information that can be used to improve system management. In this work we propose a novel method to succinctly represent the contents of supercomputing logs, by using textual clustering to automatically find the syntactic structures of log messages. This information is used to automatically classify messages into semantic groups via an online clustering algorithm. Further, we describe a methodology for using the temporal proximity between groups of log messages to identify correlated events in the system. We apply our proposed methods to two large, publicly available supercomputing logs and show that our technique features nearly perfect accuracy for online log-classification and extracts meaningful structural and temporal message patterns that can be used to improve the accuracy of other log analysis techniques.
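
    The core idea of grouping log messages by syntactic structure can be approximated with a much simpler sketch than the paper's online clustering algorithm: mask the variable fields of each message (numbers, hex addresses) and group messages whose masked templates are identical. The masking rules below are illustrative assumptions, not the authors' method:

```python
import re
from collections import defaultdict


def template(message):
    """Replace variable tokens with placeholders, exposing the fixed
    syntactic structure of a log message."""
    msg = re.sub(r"0x[0-9a-fA-F]+", "<HEX>", message)  # hex addresses first
    msg = re.sub(r"\d+", "<NUM>", msg)                 # then decimal numbers
    return msg


def group_by_template(messages):
    """Cluster raw log lines whose masked forms are identical."""
    groups = defaultdict(list)
    for m in messages:
        groups[template(m)].append(m)
    return dict(groups)
```

    Temporal correlation, as described in the abstract, would then operate on the timestamps of these groups rather than on raw lines, which is far cheaper than comparing every message pair.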

  20. Introduction to Reconfigurable Supercomputing

    CERN Document Server

    Lanzagorta, Marco; Rosenberg, Robert

    2010-01-01

    This book covers technologies, applications, tools, languages, procedures, advantages, and disadvantages of reconfigurable supercomputing using Field Programmable Gate Arrays (FPGAs). The target audience is the community of users of High Performance Computers (HPC) who may benefit from porting their applications into a reconfigurable environment. As such, this book is intended to guide the HPC user through the many algorithmic considerations, hardware alternatives, usability issues, programming languages, and design tools that need to be understood before embarking on the creation of reconfigur

  1. Personal Supercomputing for Monte Carlo Simulation Using a GPU

    Energy Technology Data Exchange (ETDEWEB)

    Oh, Jae-Yong; Koo, Yang-Hyun; Lee, Byung-Ho [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2008-05-15

    Since the usability, accessibility, and maintenance of a personal computer (PC) are very good, a PC is a useful computer simulation tool for researchers. With the improved performance of a PC's CPU, it has enough calculation power to simulate a small-scale system. However, if a system is large or involves long time scales, a cluster computer or supercomputer is needed. Recently, great changes have occurred in the PC calculation environment. A graphics processing unit (GPU) on a graphics card, once used only to calculate display data, has a calculation capability superior to a PC's CPU. This GPU calculation performance matches that of a supercomputer from 2000. Although the GPU has such great calculation potential, it is not easy to program a simulation code for it, due to the difficult programming techniques required to convert a calculation matrix into a 3D rendering image using graphics APIs. In 2006, NVIDIA provided a Software Development Kit (SDK) for the programming environment of NVIDIA's graphics cards, called the Compute Unified Device Architecture (CUDA). It makes programming on the GPU easy without knowledge of the graphics APIs. This paper describes the basic architectures of NVIDIA's GPU and CUDA, and carries out a performance benchmark for the Monte Carlo simulation.
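
    Monte Carlo simulation maps well onto a GPU because every sample is independent, so CUDA can hand each thread its own batch of samples. A plain-Python serial analogue of such a kernel, estimating pi by random sampling, is sketched below; this is illustrative only and is not the benchmark code from the paper:

```python
import random


def estimate_pi(num_samples, seed=0):
    """Monte Carlo estimate of pi: the fraction of uniformly random points
    that land inside the unit quarter-circle, times 4.

    Each sample is independent, which is exactly the property that lets a
    CUDA kernel assign samples to thousands of threads in parallel.
    """
    rng = random.Random(seed)  # fixed seed keeps the sketch deterministic
    inside = 0
    for _ in range(num_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / num_samples
```

    On a GPU, the loop body would run per thread, with per-thread random streams and a final parallel reduction over the `inside` counters.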

  2. Personal Supercomputing for Monte Carlo Simulation Using a GPU

    International Nuclear Information System (INIS)

    Oh, Jae-Yong; Koo, Yang-Hyun; Lee, Byung-Ho

    2008-01-01

    Since the usability, accessibility, and maintenance of a personal computer (PC) are very good, a PC is a useful computer simulation tool for researchers. It has enough calculation power to simulate a small scale system with the improved performance of a PC's CPU. However, if a system is large or long time scale, we need a cluster computer or supercomputer. Recently great changes have occurred in the PC calculation environment. A graphic process unit (GPU) on a graphic card, only used to calculate display data, has a superior calculation capability to a PC's CPU. This GPU calculation performance is a match for the supercomputer in 2000. Although it has such a great calculation potential, it is not easy to program a simulation code for GPU due to difficult programming techniques for converting a calculation matrix to a 3D rendering image using graphic APIs. In 2006, NVIDIA provided the Software Development Kit (SDK) for the programming environment for NVIDIA's graphic cards, which is called the Compute Unified Device Architecture (CUDA). It makes the programming on the GPU easy without knowledge of the graphic APIs. This paper describes the basic architectures of NVIDIA's GPU and CUDA, and carries out a performance benchmark for the Monte Carlo simulation

  3. SUPERCOMPUTERS FOR AIDING ECONOMIC PROCESSES WITH REFERENCE TO THE FINANCIAL SECTOR

    Directory of Open Access Journals (Sweden)

    Jerzy Balicki

    2014-12-01

    Full Text Available The article discusses the use of supercomputers to support business processes, with particular emphasis on the financial sector. Reference is made to selected projects that support economic development. In particular, we propose the use of supercomputers to perform artificial intelligence methods in banking. The proposed methods, combined with modern technology, enable a significant increase in the competitiveness of enterprises and banks by adding new functionality.

  4. Holistic Approach to Data Center Energy Efficiency

    Energy Technology Data Exchange (ETDEWEB)

    Hammond, Steven W [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-09-18

    This presentation discusses NREL's Energy System Integrations Facility and NREL's holistic design approach to sustainable data centers that led to the world's most energy-efficient data center. It describes Peregrine, a warm water liquid cooled supercomputer, waste heat reuse in the data center, demonstrated PUE and ERE, and lessons learned during four years of operation.

  5. Exploiting Thread Parallelism for Ocean Modeling on Cray XC Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Sarje, Abhinav [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Jacobsen, Douglas W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Williams, Samuel W. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Ringler, Todd [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Oliker, Leonid [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2016-05-01

    The incorporation of increasing core counts in modern processors used to build state-of-the-art supercomputers is driving application development towards exploitation of thread parallelism, in addition to distributed memory parallelism, with the goal of delivering efficient high-performance codes. In this work we describe the exploitation of threading and our experiences applying it to a real-world ocean modeling application code, MPAS-Ocean. We present detailed performance analysis and comparisons of various approaches and configurations for threading on the Cray XC series supercomputers.
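
    Combining thread parallelism with distributed memory parallelism typically means that each MPI rank partitions its local cells among threads. A toy sketch of that per-rank partitioning is shown below; this is not MPAS-Ocean code, and pure-Python threads illustrate only the decomposition (CPython's GIL prevents real numeric speedup here):

```python
from concurrent.futures import ThreadPoolExecutor


def partition(cells, num_threads):
    """Split a rank's local cells into contiguous blocks, one per thread."""
    block = (len(cells) + num_threads - 1) // num_threads  # ceiling division
    return [cells[i:i + block] for i in range(0, len(cells), block)]


def threaded_sum(values, num_threads=4):
    """Each thread reduces its own block; the rank then combines the
    partial results, mirroring an OpenMP reduction inside an MPI rank."""
    blocks = partition(values, num_threads)
    with ThreadPoolExecutor(max_workers=num_threads) as pool:
        partials = pool.map(sum, blocks)
    return sum(partials)
```

    The interesting tuning question, as in the paper, is how many threads per rank to use: too few wastes cores, too many increases synchronization and load-imbalance costs at block boundaries.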

  6. Multi-petascale highly efficient parallel supercomputer

    Science.gov (United States)

    Asaad, Sameh; Bellofatto, Ralph E.; Blocksome, Michael A.; Blumrich, Matthias A.; Boyle, Peter; Brunheroto, Jose R.; Chen, Dong; Cher, Chen -Yong; Chiu, George L.; Christ, Norman; Coteus, Paul W.; Davis, Kristan D.; Dozsa, Gabor J.; Eichenberger, Alexandre E.; Eisley, Noel A.; Ellavsky, Matthew R.; Evans, Kahn C.; Fleischer, Bruce M.; Fox, Thomas W.; Gara, Alan; Giampapa, Mark E.; Gooding, Thomas M.; Gschwind, Michael K.; Gunnels, John A.; Hall, Shawn A.; Haring, Rudolf A.; Heidelberger, Philip; Inglett, Todd A.; Knudson, Brant L.; Kopcsay, Gerard V.; Kumar, Sameer; Mamidala, Amith R.; Marcella, James A.; Megerian, Mark G.; Miller, Douglas R.; Miller, Samuel J.; Muff, Adam J.; Mundy, Michael B.; O'Brien, John K.; O'Brien, Kathryn M.; Ohmacht, Martin; Parker, Jeffrey J.; Poole, Ruth J.; Ratterman, Joseph D.; Salapura, Valentina; Satterfield, David L.; Senger, Robert M.; Smith, Brian; Steinmacher-Burow, Burkhard; Stockdell, William M.; Stunkel, Craig B.; Sugavanam, Krishnan; Sugawara, Yutaka; Takken, Todd E.; Trager, Barry M.; Van Oosten, James L.; Wait, Charles D.; Walkup, Robert E.; Watson, Alfred T.; Wisniewski, Robert W.; Wu, Peng

    2015-07-14

    A Multi-Petascale Highly Efficient Parallel Supercomputer of 100 petaOPS-scale computing, at decreased cost, power and footprint, that allows for a maximum packaging density of processing nodes from an interconnect point of view. The supercomputer exploits technological advances in VLSI that enable a computing model where many processors can be integrated into a single Application Specific Integrated Circuit (ASIC). Each computing node comprises a system-on-chip ASIC utilizing four or more processors integrated into one die, each having full access to all system resources. This enables adaptive partitioning of the processors to functions such as compute or messaging I/O on an application-by-application basis and, preferably, adaptive partitioning of functions in accordance with various algorithmic phases within an application; if I/O or other processors are underutilized, they can participate in computation or communication. Nodes are interconnected by a five-dimensional torus network with DMA that optimally maximizes the throughput of packet communications between nodes and minimizes latency.
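
    In a five-dimensional torus, each node has two neighbors per dimension (ten in total), with coordinates wrapping around at the edges so that every node sees an identical topology. A minimal sketch of that neighbor computation follows; the dimension sizes in the example are illustrative, not the machine's actual geometry:

```python
def torus_neighbors(coord, dims):
    """Return the 2 * len(dims) neighbors of a node in a torus network.

    coord: the node's coordinate tuple, one entry per dimension.
    dims:  the size of the torus in each dimension.
    """
    neighbors = []
    for axis, size in enumerate(dims):
        for step in (-1, 1):
            n = list(coord)
            n[axis] = (n[axis] + step) % size  # wraparound link at the edge
            neighbors.append(tuple(n))
    return neighbors
```

    The wraparound links are what keep worst-case hop counts low: the longest path along any dimension is half that dimension's size rather than its full length.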

  7. Federal Market Information Technology in the Post Flash Crash Era: Roles for Supercomputing

    Energy Technology Data Exchange (ETDEWEB)

    Bethel, E. Wes; Leinweber, David; Ruebel, Oliver; Wu, Kesheng

    2011-09-16

    This paper describes collaborative work between active traders, regulators, economists, and supercomputing researchers to replicate and extend investigations of the Flash Crash and other market anomalies in a National Laboratory HPC environment. Our work suggests that supercomputing tools and methods will be valuable to market regulators in achieving the goal of market safety, stability, and security. Research results using high-frequency data and analytics are described, and directions for future development are discussed. Currently, the key mechanism for preventing catastrophic market action is the “circuit breaker.” We believe a more graduated approach, similar to the “yellow light” approach in motorsports to slow down traffic, might be a better way to achieve the same goal. To enable this objective, we study a number of indicators that could foresee hazards in market conditions and explore options to confirm such predictions. Our tests confirm that Volume Synchronized Probability of Informed Trading (VPIN) and a version of the volume Herfindahl-Hirschman Index (HHI) for measuring market fragmentation can indeed give strong signals ahead of the Flash Crash event on May 6, 2010. This is a preliminary step toward a full-fledged early-warning system for unusual market conditions.
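
    Of the two indicators, the Herfindahl-Hirschman Index is simple to state: the sum of squared market shares across venues, which is 1 when all volume trades at one venue and approaches 1/N when volume is spread evenly over N venues. A minimal sketch, with illustrative volumes rather than the paper's data:

```python
def hhi(volumes):
    """Herfindahl-Hirschman Index over per-venue traded volumes.

    Returns a value in (0, 1]: near 1 means trading is concentrated at
    one venue, near 1/len(volumes) means it is evenly fragmented.
    """
    total = sum(volumes)
    return sum((v / total) ** 2 for v in volumes)
```

    A fragmentation early-warning signal of the kind described above would track this index over sliding time windows and flag sharp moves, rather than evaluate it once.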

  8. Current state and future direction of computer systems at NASA Langley Research Center

    Science.gov (United States)

    Rogers, James L. (Editor); Tucker, Jerry H. (Editor)

    1992-01-01

    Computer systems have advanced at a rate unmatched by any other area of technology. As performance has dramatically increased, there has been an equally dramatic reduction in cost. This constant cost-performance improvement has precipitated the pervasiveness of computer systems into virtually all areas of technology. This improvement is due primarily to advances in microelectronics. Most people are now convinced that the new generation of supercomputers will be built using a large number (possibly thousands) of high performance microprocessors. Although the spectacular improvements in computer systems have come about because of these hardware advances, there has also been a steady improvement in software techniques. In an effort to understand how these hardware and software advances will affect research at NASA LaRC, the Computer Systems Technical Committee drafted this white paper to examine the current state and possible future directions of computer systems at the Center. This paper discusses selected important areas of computer systems including real-time systems, embedded systems, high performance computing, distributed computing networks, data acquisition systems, artificial intelligence, and visualization.

  9. Evaluating the networking characteristics of the Cray XC-40 Intel Knights Landing-based Cori supercomputer at NERSC

    Energy Technology Data Exchange (ETDEWEB)

    Doerfler, Douglas [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Austin, Brian [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Cook, Brandon [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Deslippe, Jack [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Kandalla, Krishna [Cray Inc, Bloomington, MN (United States); Mendygral, Peter [Cray Inc, Bloomington, MN (United States)

    2017-09-12

    There are many potential issues associated with deploying the Intel Xeon Phi™ (code named Knights Landing [KNL]) manycore processor in a large-scale supercomputer. One in particular is the ability to fully utilize the high-speed communications network, given that the serial performance of a Xeon Phi™ core is a fraction of a Xeon® core. In this paper, we take a look at the trade-offs associated with allocating enough cores to fully utilize the Aries high-speed network versus cores dedicated to computation, e.g., the trade-off between MPI and OpenMP. In addition, we evaluate new features of Cray MPI in support of KNL, such as internode optimizations. We also evaluate one-sided programming models such as Unified Parallel C. We quantify the impact of the above trade-offs and features using a suite of National Energy Research Scientific Computing Center applications.

  10. A visual analytics system for optimizing the performance of large-scale networks in supercomputing systems

    Directory of Open Access Journals (Sweden)

    Takanori Fujiwara

    2018-03-01

    Full Text Available The overall efficiency of an extreme-scale supercomputer largely relies on the performance of its network interconnects. Several state-of-the-art supercomputers use networks based on the increasingly popular Dragonfly topology. It is crucial to study the behavior and performance of different parallel applications running on Dragonfly networks in order to make optimal system configurations and design choices, such as job scheduling and routing strategies. However, to study this temporal network behavior, we need a tool to analyze and correlate the numerous sets of multivariate time-series data collected from the Dragonfly's multi-level hierarchies. This paper presents such a tool: a visual analytics system that uses data from the Dragonfly network to investigate the temporal behavior and optimize the communication performance of a supercomputer. We coupled interactive visualization with time-series analysis methods to help reveal hidden patterns in the network behavior with respect to different parallel applications and system configurations. Our system also provides multiple coordinated views for connecting behaviors observed at different levels of the network hierarchies, which effectively supports visual analysis tasks. We demonstrate the effectiveness of the system with a set of case studies. Our system and findings can help improve not only the communication performance of supercomputing applications, but also the network performance of next-generation supercomputers. Keywords: Supercomputing, Parallel communication network, Dragonfly networks, Time-series data, Performance analysis, Visual analytics

  11. KfK seminar series on supercomputing and visualization from May to September 1992

    International Nuclear Information System (INIS)

    Hohenhinnebusch, W.

    1993-05-01

    During the period of May 1992 to September 1992, a series of seminars was held at KfK on several topics of supercomputing in different fields of application. The aim was to demonstrate the importance of supercomputing and visualization in numerical simulations of complex physical and technical phenomena. This report contains the collection of all submitted seminar papers. (orig./HP) [de

  12. Accelerating Science Impact through Big Data Workflow Management and Supercomputing

    Directory of Open Access Journals (Sweden)

    De K.

    2016-01-01

    Full Text Available The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. ATLAS, one of the largest collaborations ever assembled in the history of science, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. To manage the workflow for all data processing on hundreds of data centers, the PanDA (Production and Distributed Analysis) Workload Management System is used. An ambitious program to expand PanDA to all available computing resources, including opportunistic use of commercial and academic clouds and Leadership Computing Facilities (LCF), is being realized within the BigPanDA and megaPanDA projects. These projects are now exploring how PanDA might be used for managing computing jobs that run on supercomputers including OLCF's Titan and NRC-KI HPC2. The main idea is to reuse, as much as possible, existing components of the PanDA system that are already deployed on the LHC Grid for analysis of physics data. The next generation of PanDA will allow many data-intensive sciences employing a variety of computing platforms to benefit from ATLAS experience and proven tools in highly scalable processing.

  13. Tehran Nuclear Research Center

    International Nuclear Information System (INIS)

    Taherzadeh, M.

    1977-01-01

    The Tehran Nuclear Research Center was formerly managed by the University of Tehran. Since its transfer to the AEOI, the Center has become a focal point for basic research in the area of Nuclear Energy in Iran

  14. Research to application: Supercomputing trends for the 90's - Opportunities for interdisciplinary computations

    International Nuclear Information System (INIS)

    Shankar, V.

    1991-01-01

    The progression of supercomputing is reviewed from the point of view of computational fluid dynamics (CFD), and multidisciplinary problems impacting the design of advanced aerospace configurations are addressed. The application of full potential and Euler equations to transonic and supersonic problems in the 70s and early 80s is outlined, along with the Navier-Stokes computations that became widespread during the late 80s and early 90s. Multidisciplinary computations currently in progress are discussed, including CFD and aeroelastic coupling for both static and dynamic flexible computations; CFD, aeroelastic, and controls coupling for flutter suppression and active control; and the development of a computational electromagnetics technology based on CFD methods. Attention is given to the computational challenges standing in the way of establishing a computational environment that encompasses many technologies. 40 refs

  15. Research and technology, 1991. Langley Research Center

    Science.gov (United States)

    1992-01-01

    The mission of the NASA Langley Research Center is to increase the knowledge and capability of the United States in a full range of aeronautics disciplines and in selected space disciplines. This mission will be accomplished by performing innovative research relevant to national needs and Agency goals, transferring technology to users in a timely manner, and providing development support to other United States Government agencies, industry, and other NASA centers. Highlights are given of the major accomplishments and applications that have been made during the past year. The highlights illustrate both the broad range of the research and technology (R&T) activities at NASA Langley Research Center and the contributions of this work toward maintaining United States leadership in aeronautics and space research.

  16. Computational plasma physics and supercomputers. Revision 1

    International Nuclear Information System (INIS)

    Killeen, J.; McNamara, B.

    1985-01-01

    The supercomputers of the 80's are introduced. They are 10 to 100 times more powerful than today's machines. The range of physics modeling in the fusion program is outlined. New machine architectures will influence particular models, but parallel processing poses new programming difficulties. Increasing realism in simulations will require better numerics and more elaborate mathematical models

  17. Extending ATLAS Computing to Commercial Clouds and Supercomputers

    CERN Document Server

    Nilsson, P; The ATLAS collaboration; Filipcic, A; Klimentov, A; Maeno, T; Oleynik, D; Panitkin, S; Wenaus, T; Wu, W

    2014-01-01

    The Large Hadron Collider will resume data collection in 2015 with substantially increased computing requirements relative to its first 2009-2013 run. A near doubling of the energy and the data rate, a high level of event pile-up, and detector upgrades mean that the number and complexity of events to be analyzed will increase dramatically. A naive extrapolation of the Run 1 experience suggests that a 5-6 fold increase in computing resources would be needed - impossible within the anticipated flat computing budgets of the near future. Consequently, ATLAS is engaged in an ambitious program to expand its computing to all available resources, notably including opportunistic use of commercial clouds and supercomputers. Such resources present new challenges in managing heterogeneity, supporting data flows, parallelizing workflows, provisioning software, and other aspects of distributed computing, all while minimizing operational load. We will present the ATLAS experience to date with clouds and supercomputers, and des...

  18. Quantum Hamiltonian Physics with Supercomputers

    International Nuclear Information System (INIS)

    Vary, James P.

    2014-01-01

    The vision of solving the nuclear many-body problem in a Hamiltonian framework with fundamental interactions tied to QCD via Chiral Perturbation Theory is gaining support. The goals are to preserve the predictive power of the underlying theory, to test fundamental symmetries with the nucleus as laboratory and to develop new understandings of the full range of complex quantum phenomena. Advances in theoretical frameworks (renormalization and many-body methods) as well as in computational resources (new algorithms and leadership-class parallel computers) signal a new generation of theory and simulations that will yield profound insights into the origins of nuclear shell structure, collective phenomena and complex reaction dynamics. Fundamental discovery opportunities also exist in such areas as physics beyond the Standard Model of Elementary Particles, the transition between hadronic and quark–gluon dominated dynamics in nuclei and signals that characterize dark matter. I will review some recent achievements and present ambitious consensus plans along with their challenges for a coming decade of research that will build new links between theory, simulations and experiment. Opportunities for graduate students to embark upon careers in the fast-developing field of supercomputer simulations are also discussed

  19. Quantum Hamiltonian Physics with Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Vary, James P.

    2014-06-15

    The vision of solving the nuclear many-body problem in a Hamiltonian framework with fundamental interactions tied to QCD via Chiral Perturbation Theory is gaining support. The goals are to preserve the predictive power of the underlying theory, to test fundamental symmetries with the nucleus as laboratory and to develop new understandings of the full range of complex quantum phenomena. Advances in theoretical frameworks (renormalization and many-body methods) as well as in computational resources (new algorithms and leadership-class parallel computers) signal a new generation of theory and simulations that will yield profound insights into the origins of nuclear shell structure, collective phenomena and complex reaction dynamics. Fundamental discovery opportunities also exist in such areas as physics beyond the Standard Model of Elementary Particles, the transition between hadronic and quark–gluon dominated dynamics in nuclei and signals that characterize dark matter. I will review some recent achievements and present ambitious consensus plans along with their challenges for a coming decade of research that will build new links between theory, simulations and experiment. Opportunities for graduate students to embark upon careers in the fast-developing field of supercomputer simulations are also discussed.

  20. Lectures in Supercomputational Neurosciences Dynamics in Complex Brain Networks

    CERN Document Server

    Graben, Peter beim; Thiel, Marco; Kurths, Jürgen

    2008-01-01

    Computational Neuroscience is a burgeoning field of research where only the combined effort of neuroscientists, biologists, psychologists, physicists, mathematicians, computer scientists, engineers and other specialists, e.g. from linguistics and medicine, seems able to expand the limits of our knowledge. The present volume is an introduction, largely from the physicists' perspective, to the subject matter with in-depth contributions by system neuroscientists. A conceptual model for complex networks of neurons is introduced that incorporates many important features of the real brain, such as various types of neurons, various brain areas, inhibitory and excitatory coupling and the plasticity of the network. The computational implementation on supercomputers, which is introduced and discussed in detail in this book, will enable the readers to modify and adapt the algorithm for their own research. Worked-out examples of applications are presented for networks of Morris-Lecar neurons to model the cortical co...

  1. PROCEEDINGS OF RIKEN BNL RESEARCH CENTER WORKSHOP, VOLUME 77, RBRC SCIENTIFIC REVIEW COMMITTEE MEETING, OCTOBER 10-12, 2005

    International Nuclear Information System (INIS)

    SAMIOS, N.P.

    2005-01-01

    The eighth evaluation of the RIKEN BNL Research Center (RBRC) took place on October 10-12, 2005, at Brookhaven National Laboratory. The members of the Scientific Review Committee (SRC) were Dr. Jean-Paul Blaizot, Professor Makoto Kobayashi, Dr. Akira Masaike, Professor Charles Young Prescott (Chair), Professor Stephen Sharpe (absent), and Professor Jack Sandweiss. We are grateful to Professor Akira Ukawa who was appointed to the SRC to cover Professor Sharpe's area of expertise. In addition to reviewing this year's program, the committee, augmented by Professor Kozi Nakai, evaluated the RBRC proposal for a five-year extension of the RIKEN BNL Collaboration MOU beyond 2007. Dr. Koji Kaya, Director of the Discovery Research Institute, RIKEN, Japan, presided over the session on the extension proposal. In order to illustrate the breadth and scope of the RBRC program, each member of the Center made a presentation on his or her research efforts. In addition, a special session was held in connection with the RBRC QCDSP and QCDOC supercomputers. Professor Norman H. Christ, a collaborator from Columbia University, gave a presentation on the progress and status of the project, and Professor Frithjof Karsch of BNL presented the first physics results from QCDOC. Although the main purpose of this review is a report to RIKEN Management (Dr. Ryoji Noyori, RIKEN President) on the health, scientific value, management and future prospects of the Center, the RBRC management felt that a compendium of the scientific presentations is of sufficient quality and interest that it warrants a wider distribution. Therefore we have made this compilation and present it to the community for its information and enlightenment

  2. PROCEEDINGS OF RIKEN BNL RESEARCH CENTER WORKSHOP, VOLUME 77, RBRC SCIENTIFIC REVIEW COMMITTEE MEETING, OCTOBER 10-12, 2005

    Energy Technology Data Exchange (ETDEWEB)

    SAMIOS, N.P.

    2005-10-10

    The eighth evaluation of the RIKEN BNL Research Center (RBRC) took place on October 10-12, 2005, at Brookhaven National Laboratory. The members of the Scientific Review Committee (SRC) were Dr. Jean-Paul Blaizot, Professor Makoto Kobayashi, Dr. Akira Masaike, Professor Charles Young Prescott (Chair), Professor Stephen Sharpe (absent), and Professor Jack Sandweiss. We are grateful to Professor Akira Ukawa who was appointed to the SRC to cover Professor Sharpe's area of expertise. In addition to reviewing this year's program, the committee, augmented by Professor Kozi Nakai, evaluated the RBRC proposal for a five-year extension of the RIKEN BNL Collaboration MOU beyond 2007. Dr. Koji Kaya, Director of the Discovery Research Institute, RIKEN, Japan, presided over the session on the extension proposal. In order to illustrate the breadth and scope of the RBRC program, each member of the Center made a presentation on his or her research efforts. In addition, a special session was held in connection with the RBRC QCDSP and QCDOC supercomputers. Professor Norman H. Christ, a collaborator from Columbia University, gave a presentation on the progress and status of the project, and Professor Frithjof Karsch of BNL presented the first physics results from QCDOC. Although the main purpose of this review is a report to RIKEN Management (Dr. Ryoji Noyori, RIKEN President) on the health, scientific value, management and future prospects of the Center, the RBRC management felt that a compendium of the scientific presentations is of sufficient quality and interest that it warrants a wider distribution. Therefore we have made this compilation and present it to the community for its information and enlightenment.

  3. Production and Distribution Research Center

    Science.gov (United States)

    1986-05-01

    Steel, Coca-Cola, Standard Oil of Ohio, and Martin Marietta have been involved in joint research with members of the Center. The number of Faculty...permitted the establishment of the Center and supports its continuing development. The Center has also received research sponsorship from the Joint...published relating to results developed within the PDRC under Office of Naval Research sponsorship. These reports are listed in Appendix A. Many of these

  4. Performance modeling of hybrid MPI/OpenMP scientific applications on large-scale multicore supercomputers

    KAUST Repository

    Wu, Xingfu; Taylor, Valerie

    2013-01-01

    In this paper, we present a performance modeling framework based on memory bandwidth contention time and a parameterized communication model to predict the performance of OpenMP, MPI and hybrid applications with weak scaling on three large-scale multicore supercomputers: IBM POWER4, POWER5+ and BlueGene/P, and analyze the performance of these MPI, OpenMP and hybrid applications. We use STREAM memory benchmarks and Intel's MPI benchmarks to provide initial performance analysis and model validation of MPI and OpenMP applications on these multicore supercomputers because the measured sustained memory bandwidth can provide insight into the memory bandwidth that a system should sustain on scientific applications with the same amount of workload per core. In addition to using these benchmarks, we also use a weak-scaling hybrid MPI/OpenMP large-scale scientific application: Gyrokinetic Toroidal Code (GTC) in magnetic fusion to validate our performance model of the hybrid application on these multicore supercomputers. The validation results for our performance modeling method show less than 7.77% error rate in predicting the performance of hybrid MPI/OpenMP GTC on up to 512 cores on these multicore supercomputers. © 2013 Elsevier Inc.
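
A heavily simplified sketch of this style of model, a memory-bandwidth contention term plus a parameterized (latency plus per-byte) communication term, might look as follows. All parameter values are invented for illustration, not the paper's fitted constants:

```python
# Toy performance model: compute time plus memory-contention time plus a
# linear communication term. Constants below are illustrative only.

def predicted_time(work_per_core, cores_per_node, node_bandwidth,
                   bytes_per_core, latency, msg_size, per_byte_cost):
    """Predict per-core execution time under shared-bandwidth contention."""
    # All cores on a node share its sustained memory bandwidth, so the
    # effective per-core bandwidth shrinks as more cores are active.
    effective_bw = node_bandwidth / cores_per_node
    memory_time = bytes_per_core / effective_bw
    # Parameterized communication cost: latency plus per-byte transfer time.
    comm_time = latency + msg_size * per_byte_cost
    return work_per_core + memory_time + comm_time

# Same per-core workload, but 16 active cores contend for the node's bandwidth:
uncontended = predicted_time(1.0, 1, 10.0, 5.0, 0.1, 4.0, 0.01)
contended = predicted_time(1.0, 16, 10.0, 5.0, 0.1, 4.0, 0.01)
```

The contended prediction is several times larger, which mirrors why the measured sustained STREAM bandwidth, rather than per-core peak bandwidth, is the quantity such a model needs as input.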

  5. Performance modeling of hybrid MPI/OpenMP scientific applications on large-scale multicore supercomputers

    KAUST Repository

    Wu, Xingfu

    2013-12-01

    In this paper, we present a performance modeling framework based on memory bandwidth contention time and a parameterized communication model to predict the performance of OpenMP, MPI and hybrid applications with weak scaling on three large-scale multicore supercomputers: IBM POWER4, POWER5+ and BlueGene/P, and analyze the performance of these MPI, OpenMP and hybrid applications. We use STREAM memory benchmarks and Intel's MPI benchmarks to provide initial performance analysis and model validation of MPI and OpenMP applications on these multicore supercomputers because the measured sustained memory bandwidth can provide insight into the memory bandwidth that a system should sustain on scientific applications with the same amount of workload per core. In addition to using these benchmarks, we also use a weak-scaling hybrid MPI/OpenMP large-scale scientific application: Gyrokinetic Toroidal Code (GTC) in magnetic fusion to validate our performance model of the hybrid application on these multicore supercomputers. The validation results for our performance modeling method show less than 7.77% error rate in predicting the performance of hybrid MPI/OpenMP GTC on up to 512 cores on these multicore supercomputers. © 2013 Elsevier Inc.

  6. Center for Prostate Disease Research

    Data.gov (United States)

    Federal Laboratory Consortium — The Center for Prostate Disease Research is the only free-standing prostate cancer research center in the U.S. This 20,000 square foot state-of-the-art basic science...

  7. Nuclear energy research in Germany 2008. Research centers and universities

    International Nuclear Information System (INIS)

    Tromm, Walter

    2009-01-01

    This summary report presents nuclear energy research at research centers and universities in Germany in 2008. Activities are explained on the basis of examples of research projects and a description of the situation of research and teaching in general. Participants are the Karlsruhe Research Center, the Juelich Research Center (FZJ), the Dresden-Rossendorf Research Center (FZD), the Verein fuer Kernverfahrenstechnik und Analytik Rossendorf e.V. (VKTA), the Technical University of Dresden, the University of Applied Sciences Zittau/Goerlitz, the Institute for Nuclear Energy and Energy Systems (IKE) at the University of Stuttgart, and the Reactor Simulation and Reactor Safety Working Group at the Bochum Ruhr University. (orig.)

  8. Enabling Diverse Software Stacks on Supercomputers using High Performance Virtual Clusters.

    Energy Technology Data Exchange (ETDEWEB)

    Younge, Andrew J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Pedretti, Kevin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Grant, Ryan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Brightwell, Ron [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-05-01

    While large-scale simulations have been the hallmark of the High Performance Computing (HPC) community for decades, Large Scale Data Analytics (LSDA) workloads are gaining attention within the scientific community not only as a processing component of large HPC simulations, but also as standalone scientific tools for knowledge discovery. With the path towards Exascale, new HPC runtime systems are also emerging in a way that differs from classical distributed computing models. However, system software on the latest extreme-scale DOE supercomputers needs to be enhanced to more appropriately support these types of emerging software ecosystems. In this paper, we propose the use of Virtual Clusters on advanced supercomputing resources to enable systems to support not only HPC workloads, but also emerging big data stacks. Specifically, we have deployed the KVM hypervisor within Cray's Compute Node Linux on an XC-series supercomputer testbed. We also use libvirt and QEMU to manage and provision VMs directly on compute nodes, leveraging Ethernet-over-Aries network emulation. To our knowledge, this is the first known use of KVM on a true MPP supercomputer. We investigate the overhead of our solution using HPC benchmarks, evaluating both single-node performance and weak scaling of a 32-node virtual cluster. Overall, we find single-node performance of our solution using KVM on a Cray to be very efficient, with near-native performance. However, overhead increases by up to 20% as virtual cluster size increases, due to limitations of the Ethernet-over-Aries bridged network. Furthermore, we deploy Apache Spark with large data analysis workloads in a Virtual Cluster, effectively demonstrating how diverse software ecosystems can be supported by High Performance Virtual Clusters.

  9. Cellular-automata supercomputers for fluid-dynamics modeling

    International Nuclear Information System (INIS)

    Margolus, N.; Toffoli, T.; Vichniac, G.

    1986-01-01

    We report recent developments in the modeling of fluid dynamics, and give experimental results (including dynamical exponents) obtained using cellular automata machines. Because of their locality and uniformity, cellular automata lend themselves to an extremely efficient physical realization; with a suitable architecture, an amount of hardware resources comparable to that of a home computer can achieve (in the simulation of cellular automata) the performance of a conventional supercomputer
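
The locality and uniformity that make cellular automata so hardware-friendly can be seen even in a one-dimensional example. The rule-184 "traffic" automaton below is a minimal analogue of the lattice-gas fluid models discussed, not the authors' actual rules; like them, it is a local, uniform, synchronous update that exactly conserves particle number:

```python
# Rule-184 "traffic" cellular automaton on a ring: a particle (1) hops one
# cell to the right per step if that cell is empty (0), otherwise it stays.
# This is an illustrative stand-in for the paper's lattice-gas models.

def step(cells):
    """One synchronous update of every site; particle number is conserved."""
    n = len(cells)
    nxt = [0] * n
    for i in range(n):
        if cells[i] == 1 and cells[(i + 1) % n] == 0:
            nxt[(i + 1) % n] = 1          # particle hops right
        elif cells[i] == 1 and cells[(i + 1) % n] == 1:
            nxt[i] = 1                    # blocked: particle stays put
    return nxt

state = [1, 1, 0, 0, 1, 0, 0, 0]
for _ in range(5):
    state = step(state)
```

Because each site's new value depends only on its immediate neighborhood, every site can be updated simultaneously in hardware, which is exactly the property cellular automata machines exploit.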

  10. Analyzing the Interplay of Failures and Workload on a Leadership-Class Supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Meneses, Esteban [University of Pittsburgh; Ni, Xiang [University of Illinois at Urbana-Champaign; Jones, Terry R [ORNL; Maxwell, Don E [ORNL

    2015-01-01

    The unprecedented computational power of current supercomputers now makes possible the exploration of complex problems in many scientific fields, from genomic analysis to computational fluid dynamics. Modern machines are powerful because they are massive: they assemble millions of cores and a huge quantity of disks, cards, routers, and other components. But it is precisely the size of these machines that clouds the future of supercomputing. A system that comprises many components has a high chance to fail, and fail often. In order to make the next generation of supercomputers usable, it is imperative to use some type of fault tolerance platform to run applications on large machines. Most fault tolerance strategies can be optimized for the peculiarities of each system and boost efficacy by keeping the system productive. In this paper, we aim to understand how failure characterization can improve resilience in several layers of the software stack: applications, runtime systems, and job schedulers. We examine the Titan supercomputer, one of the fastest systems in the world. We analyze a full year of Titan in production and distill the failure patterns of the machine. By looking into Titan's log files and using the criteria of experts, we provide a detailed description of the types of failures. In addition, we inspect the job submission files and describe how the system is used. Using those two sources, we cross-correlate failures in the machine to executing jobs and provide a picture of how failures affect the user experience. We believe such characterization is fundamental in developing appropriate fault tolerance solutions for Cray systems similar to Titan.
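
The cross-correlation of failure records with job records can be sketched as a simple interval lookup. The record layout below (job id with start and end times, plus failure timestamps) is a hypothetical simplification, not Titan's actual log format:

```python
# Attribute each failure timestamp to the jobs that were running when it
# occurred. Field names and times are invented for illustration.

def jobs_hit_by_failures(failures, jobs):
    """Map each failure time to the ids of jobs executing at that moment."""
    hits = {}
    for t in failures:
        hits[t] = [job_id for (job_id, start, end) in jobs if start <= t < end]
    return hits

# Hypothetical scheduler records: (job id, start time, end time)
jobs = [("job-a", 0, 50), ("job-b", 30, 90), ("job-c", 95, 120)]
failures = [10, 40, 92]   # hypothetical failure timestamps from system logs

hits = jobs_hit_by_failures(failures, jobs)
```

The failure at t=40 lands inside two overlapping jobs, while the one at t=92 hits an idle window; aggregating such attributions over a year is what yields the picture of how failures affect users.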

  11. Engineer Research and Development Center's Materials Testing Center (MTC)

    Data.gov (United States)

    Federal Laboratory Consortium — The Engineer Research and Development Center's Materials Testing Center (MTC) is committed to quality testing and inspection services that are delivered on time and...

  12. NASA Center for Climate Simulation (NCCS) Presentation

    Science.gov (United States)

    Webster, William P.

    2012-01-01

    The NASA Center for Climate Simulation (NCCS) offers integrated supercomputing, visualization, and data interaction technologies to enhance NASA's weather and climate prediction capabilities. It serves hundreds of users at NASA Goddard Space Flight Center, as well as other NASA centers, laboratories, and universities across the US. Over the past year, NCCS has continued expanding its data-centric computing environment to meet the increasingly data-intensive challenges of climate science. We doubled our Discover supercomputer's peak performance to more than 800 teraflops by adding 7,680 Intel Xeon Sandy Bridge processor-cores and most recently 240 Intel Xeon Phi Many Integrated Core (MIC) co-processors. A supercomputing-class analysis system named Dali gives users rapid access to their data on Discover and high-performance software including the Ultra-scale Visualization Climate Data Analysis Tools (UV-CDAT), with interfaces from user desktops and a 17- by 6-foot visualization wall. NCCS also is exploring highly efficient climate data services and management with a new MapReduce/Hadoop cluster while augmenting its data distribution to the science community. Using NCCS resources, NASA completed its modeling contributions to the Intergovernmental Panel on Climate Change (IPCC) Fifth Assessment Report this summer as part of the ongoing Coupled Model Intercomparison Project Phase 5 (CMIP5). Ensembles of simulations run on Discover reached back to the year 1000 to test model accuracy and projected climate change through the year 2300 based on four different scenarios of greenhouse gases, aerosols, and land use. The data resulting from several thousand IPCC/CMIP5 simulations, as well as a variety of other simulation, reanalysis, and observation datasets, are available to scientists and decision makers through an enhanced NCCS Earth System Grid Federation Gateway. Worldwide downloads have totaled over 110 terabytes of data.

  13. Ultrascalable petaflop parallel supercomputer

    Science.gov (United States)

    Blumrich, Matthias A [Ridgefield, CT; Chen, Dong [Croton On Hudson, NY; Chiu, George [Cross River, NY; Cipolla, Thomas M [Katonah, NY; Coteus, Paul W [Yorktown Heights, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Hall, Shawn [Pleasantville, NY; Haring, Rudolf A [Cortlandt Manor, NY; Heidelberger, Philip [Cortlandt Manor, NY; Kopcsay, Gerard V [Yorktown Heights, NY; Ohmacht, Martin [Yorktown Heights, NY; Salapura, Valentina [Chappaqua, NY; Sugavanam, Krishnan [Mahopac, NY; Takken, Todd [Brewster, NY

    2010-07-20

    A massively parallel supercomputer of petaOPS-scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC) having up to four processing elements. The ASIC nodes are interconnected by multiple independent networks that optimally maximize the throughput of packet communications between nodes with minimal latency. The multiple networks may include three high-speed networks for parallel algorithm message passing including a Torus, collective network, and a Global Asynchronous network that provides global barrier and notification functions. These multiple independent networks may be collaboratively or independently utilized according to the needs or phases of an algorithm for optimizing algorithm processing performance. The use of a DMA engine is provided to facilitate message passing among the nodes without the expenditure of processing resources at the node.
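
The Torus network mentioned above connects each node to its nearest neighbors in every dimension, with wraparound links at the edges. A sketch of that addressing, using arbitrary example dimensions rather than any particular machine's, is:

```python
# Nearest-neighbor addressing on a 3D torus: each node has six neighbors,
# one in each +/- direction per dimension, with wraparound at the edges.
# The dimensions and coordinates here are illustrative examples.

def torus_neighbors(coord, dims):
    """Return the six +/-1 neighbors of a node on a 3D torus."""
    x, y, z = coord
    dx, dy, dz = dims
    return [
        ((x + 1) % dx, y, z), ((x - 1) % dx, y, z),
        (x, (y + 1) % dy, z), (x, (y - 1) % dy, z),
        (x, y, (z + 1) % dz), (x, y, (z - 1) % dz),
    ]

# On an 8x8x8 torus, a corner node wraps around to the far faces:
nbrs = torus_neighbors((0, 0, 0), (8, 8, 8))
```

The wraparound is what keeps the network diameter low and every node topologically equivalent, so message-passing algorithms need no special edge cases at the machine's boundary.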

  14. Development of a Cloud Resolving Model for Heterogeneous Supercomputers

    Science.gov (United States)

    Sreepathi, S.; Norman, M. R.; Pal, A.; Hannah, W.; Ponder, C.

    2017-12-01

    A cloud resolving climate model is needed to reduce major systematic errors in climate simulations due to structural uncertainty in numerical treatments of convection - such as convective storm systems. This research describes the porting effort to enable the SAM (System for Atmosphere Modeling) cloud resolving model on heterogeneous supercomputers using GPUs (Graphical Processing Units). We have isolated a standalone configuration of SAM that is targeted to be integrated into the DOE ACME (Accelerated Climate Modeling for Energy) Earth System model. We have identified key computational kernels from the model and offloaded them to a GPU using the OpenACC programming model. Furthermore, we are investigating various optimization strategies intended to enhance GPU utilization, including loop fusion/fission, coalesced data access and loop refactoring to a higher abstraction level. We will present early performance results, lessons learned, and optimization strategies. The computational platform used in this study is the Summitdev system, an early testbed that is one generation removed from Summit, the next leadership class supercomputer at Oak Ridge National Laboratory. The system contains 54 nodes wherein each node has 2 IBM POWER8 CPUs and 4 NVIDIA Tesla P100 GPUs. This work is part of a larger project, the ACME-MMF component of the U.S. Department of Energy (DOE) Exascale Computing Project. The ACME-MMF approach addresses structural uncertainty in cloud processes by replacing traditional parameterizations with cloud resolving "superparameterization" within each grid cell of a global climate model. Super-parameterization dramatically increases arithmetic intensity, making the MMF approach an ideal strategy to achieve good performance on emerging exascale computing architectures. The goal of the project is to integrate superparameterization into ACME, and explore its full potential to scientifically and computationally advance climate simulation and prediction.
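
Loop fusion, one of the optimization strategies named above, merges multiple passes over the same array into a single pass, raising arithmetic intensity and cutting memory traffic. A language-neutral sketch (the arrays and arithmetic are illustrative, not SAM's actual kernels):

```python
# Loop fusion: two sweeps over the data become one, halving the number of
# times each element is read from memory. Values are illustrative.

def two_passes(a):
    b = [x * 2.0 for x in a]        # pass 1: reads a, writes temporary b
    return [x + 1.0 for x in b]     # pass 2: reads b again

def fused(a):
    # One pass computes the same result with half the memory traffic and
    # no temporary array.
    return [x * 2.0 + 1.0 for x in a]

data = [0.0, 1.0, 2.0]
```

On a GPU, the fused form also means launching one kernel instead of two, which is part of why fusion helps utilization on accelerators.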

  15. Toward a Proof of Concept Cloud Framework for Physics Applications on Blue Gene Supercomputers

    International Nuclear Information System (INIS)

    Dreher, Patrick; Scullin, William; Vouk, Mladen

    2015-01-01

    Traditional high performance supercomputers are capable of delivering large sustained state-of-the-art computational resources to physics applications over extended periods of time using batch processing mode operating environments. However, today there is an increasing demand for more complex workflows that involve large fluctuations in the levels of HPC physics computational requirements during the simulations. Some of the workflow components may also require a richer set of operating system features and schedulers than normally found in a batch oriented HPC environment. This paper reports on progress toward a proof of concept design that implements a cloud framework onto BG/P and BG/Q platforms at the Argonne Leadership Computing Facility. The BG/P implementation utilizes the Kittyhawk utility and the BG/Q platform uses an experimental heterogeneous FusedOS operating system environment. Both platforms use the Virtual Computing Laboratory as the cloud computing system embedded within the supercomputer. This proof of concept design allows a cloud to be configured so that it can capitalize on the specialized infrastructure capabilities of a supercomputer and the flexible cloud configurations without resorting to virtualization. Initial testing of the proof of concept system is done using the lattice QCD MILC code. These types of user reconfigurable environments have the potential to deliver experimental schedulers and operating systems within a working HPC environment for physics computations that may be different from the native OS and schedulers on production HPC supercomputers. (paper)

  16. Plane-wave electronic structure calculations on a parallel supercomputer

    International Nuclear Information System (INIS)

    Nelson, J.S.; Plimpton, S.J.; Sears, M.P.

    1993-01-01

    The development of iterative solutions of Schrodinger's equation in a plane-wave (pw) basis over the last several years has coincided with great advances in the computational power available for performing the calculations. These dual developments have enabled many new and interesting condensed matter phenomena to be studied from a first-principles approach. The authors present a detailed description of the implementation on a parallel supercomputer (hypercube) of the first-order equation-of-motion solution to Schrodinger's equation, using plane-wave basis functions and ab initio separable pseudopotentials. By distributing the plane waves across the processors of the hypercube, many of the computations can be performed in parallel, resulting in decreases in the overall computation time relative to conventional vector supercomputers. This partitioning also provides ample memory for large Fast Fourier Transform (FFT) meshes and the storage of plane-wave coefficients for many hundreds of energy bands. The usefulness of the parallel techniques is demonstrated by benchmark timings for both the FFTs and iterations of the self-consistent solution of Schrodinger's equation for different sized Si unit cells of up to 512 atoms
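The idea of partitioning plane-wave coefficients across processors, so that per-processor work proceeds independently before a global reduction, can be sketched in toy form. The round-robin ownership and the kinetic-energy sum below are illustrative assumptions, not the hypercube implementation described in the paper:

```python
import numpy as np

# Toy partition of plane-wave coefficients across P "processors": each rank
# owns a strided subset of G-vectors, so per-rank partial sums (here a toy
# kinetic-energy term 0.5*|G|^2*|c_G|^2) proceed independently and are then
# combined in a single reduction.
P = 4
g = np.arange(32).astype(float)        # toy plane-wave magnitudes |G|
coeff = np.ones(32) / np.sqrt(32)      # normalized toy coefficients

owned = [np.arange(32)[r::P] for r in range(P)]   # round-robin ownership
partial = [np.sum(0.5 * g[idx] ** 2 * coeff[idx] ** 2) for idx in owned]

# The reduction over per-rank partials equals the serial result.
serial = np.sum(0.5 * g ** 2 * coeff ** 2)
assert np.isclose(sum(partial), serial)
```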

  17. Problem solving in nuclear engineering using supercomputers

    International Nuclear Information System (INIS)

    Schmidt, F.; Scheuermann, W.; Schatz, A.

    1987-01-01

    The availability of supercomputers enables the engineer to formulate new strategies for problem solving. One such strategy is the Integrated Planning and Simulation System (IPSS). With such integrated systems, simulation models of greater consistency and better agreement with actual plant data can be realized effectively. In the present work some of the basic ideas of IPSS are described, as well as some of the conditions necessary to build such systems. Hardware and software characteristics as realized are outlined. (orig.) [de

  18. Visualizing quantum scattering on the CM-2 supercomputer

    International Nuclear Information System (INIS)

    Richardson, J.L.

    1991-01-01

    We implement parallel algorithms for solving the time-dependent Schroedinger equation on the CM-2 supercomputer. These methods are unconditionally stable as well as unitary at each time step and have the advantage of being spatially local and explicit. We show how to visualize the dynamics of quantum scattering using techniques for visualizing complex wave functions. Several scattering problems are solved to demonstrate the use of these methods. (orig.)
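One well-known explicit, spatially local scheme of this family is the staggered real/imaginary leapfrog update (in the style of Visscher). Unlike the method described in the abstract it is only conditionally stable, and the grid parameters below are invented for illustration:

```python
import numpy as np

# 1-D free-particle propagation (hbar = m = 1, so H = -0.5 d^2/dx^2).
# Real and imaginary parts of psi are updated sequentially, which keeps the
# scheme explicit and local; dt must satisfy a stability bound (dt < 2/E_max).
nx, dx, dt = 400, 0.1, 0.002
x = np.arange(nx) * dx
k0 = 2.0                                      # packet momentum (toy value)
psi = np.exp(-0.5 * ((x - 20.0) / 2.0) ** 2) * np.exp(1j * k0 * x)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)
R, I = psi.real.copy(), psi.imag.copy()

def laplacian(f):
    # Periodic second difference.
    return (np.roll(f, 1) + np.roll(f, -1) - 2.0 * f) / dx ** 2

norm0 = np.sum(R ** 2 + I ** 2) * dx
for _ in range(1000):
    R += -0.5 * laplacian(I) * dt   # dR/dt = H I
    I += 0.5 * laplacian(R) * dt    # dI/dt = -H R (using the updated R)
norm = np.sum(R ** 2 + I ** 2) * dx

# The naive norm oscillates slightly but stays close to its initial value.
assert abs(norm - norm0) < 0.05 * norm0
```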

  19. NASA's engineering research centers and interdisciplinary education

    Science.gov (United States)

    Johnston, Gordon I.

    1990-01-01

    A new program of interactive education between NASA and the academic community aims to improve research and education, provide long-term, stable funding, and support cross-disciplinary and multi-disciplinary research. The mission of NASA's Office of Aeronautics, Exploration and Technology (OAET) is discussed and it is pointed out that the OAET conducts about 10 percent of its total R&D program at U.S. universities. Other NASA university-based programs are listed including the Office of Commercial Programs Centers for the Commercial Development of Space (CCDS) and the National Space Grant program. The importance of university space engineering centers and the selection of the nine current centers are discussed. A detailed composite description is provided of the University Space Engineering Research Centers. Other specialized centers are described such as the Center for Space Construction, the Mars Mission Research Center, and the Center for Intelligent Robotic Systems for Space Exploration. Approaches to educational outreach are discussed.

  20. Summaries of research and development activities by using JAEA computer system in FY2007. April 1, 2007 - March 31, 2008

    International Nuclear Information System (INIS)

    2008-11-01

    Center for Computational Science and e-Systems (CCSE) of Japan Atomic Energy Agency (JAEA) installed large computer systems including super-computers in order to support research and development activities in JAEA. This report presents usage records of the JAEA computer system and the big users' research and development activities by using the computer system in FY2007 (April 1, 2007 - March 31, 2008). (author)

  1. Summaries of research and development activities by using JAEA computer system in FY2009. April 1, 2009 - March 31, 2010

    International Nuclear Information System (INIS)

    2010-11-01

    Center for Computational Science and e-Systems (CCSE) of Japan Atomic Energy Agency (JAEA) installed large computer systems including super-computers in order to support research and development activities in JAEA. This report presents usage records of the JAEA computer system and the big users' research and development activities by using the computer system in FY2009 (April 1, 2009 - March 31, 2010). (author)

  2. Integration of Titan supercomputer at OLCF with ATLAS Production System

    CERN Document Server

    AUTHOR|(SzGeCERN)643806; The ATLAS collaboration; De, Kaushik; Klimentov, Alexei; Nilsson, Paul; Oleynik, Danila; Padolski, Siarhei; Panitkin, Sergey; Wenaus, Torre

    2017-01-01

    The PanDA (Production and Distributed Analysis) workload management system was developed to meet the scale and complexity of distributed computing for the ATLAS experiment. PanDA managed resources are distributed worldwide, on hundreds of computing sites, with thousands of physicists accessing hundreds of Petabytes of data, and the rate of data processing already exceeds an Exabyte per year. While PanDA currently uses more than 200,000 cores at well over 100 Grid sites, future LHC data taking runs will require more resources than Grid computing can possibly provide. Additional computing and storage resources are required. Therefore ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. In this paper we will describe a project aimed at integration of the ATLAS Production System with the Titan supercomputer at Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA Pilot framework for jo...

  3. Integration of Titan supercomputer at OLCF with ATLAS production system

    CERN Document Server

    Panitkin, Sergey; The ATLAS collaboration

    2016-01-01

    The PanDA (Production and Distributed Analysis) workload management system was developed to meet the scale and complexity of distributed computing for the ATLAS experiment. PanDA managed resources are distributed worldwide, on hundreds of computing sites, with thousands of physicists accessing hundreds of Petabytes of data, and the rate of data processing already exceeds an Exabyte per year. While PanDA currently uses more than 200,000 cores at well over 100 Grid sites, future LHC data taking runs will require more resources than Grid computing can possibly provide. Additional computing and storage resources are required. Therefore ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. In this talk we will describe a project aimed at integration of the ATLAS Production System with the Titan supercomputer at Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA Pilot framework for job...

  4. Supercomputer algorithms for reactivity, dynamics and kinetics of small molecules

    International Nuclear Information System (INIS)

    Lagana, A.

    1989-01-01

    Even for small systems, the accurate characterization of reactive processes is so demanding of computer resources as to suggest the use of supercomputers having vector and parallel facilities. The full advantages of vector and parallel architectures can sometimes be obtained by simply modifying existing programs, vectorizing the manipulation of vectors and matrices, and requiring the parallel execution of independent tasks. More often, however, a significant time saving can be obtained only when the computer code undergoes a deeper restructuring, requiring a change in the computational strategy or, more radically, the adoption of a different theoretical treatment. This book discusses supercomputer strategies based upon exact and approximate methods aimed at calculating the electronic structure and the reactive properties of small systems. The book shows how, in recent years, intense design activity has led to the ability to calculate accurate electronic structures for reactive systems and exact and high-level approximations to three-dimensional reactive dynamics, and to develop efficient directive and declarative software for the modelling of complex systems

  5. Center for Computing Research Summer Research Proceedings 2015.

    Energy Technology Data Exchange (ETDEWEB)

    Bradley, Andrew Michael [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Parks, Michael L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-12-18

    The Center for Computing Research (CCR) at Sandia National Laboratories organizes a summer student program each summer, in coordination with the Computer Science Research Institute (CSRI) and Cyber Engineering Research Institute (CERI).

  6. Summaries of research and development activities by using supercomputer system of JAEA in FY2015. April 1, 2015 - March 31, 2016

    International Nuclear Information System (INIS)

    2017-01-01

    Japan Atomic Energy Agency (JAEA) conducts research and development (R and D) in various fields related to nuclear power as a comprehensive institution of nuclear energy R and Ds, and utilizes computational science and technology in many activities. As shown in the fact that about 20 percent of papers published by JAEA are concerned with R and D using computational science, the supercomputer system of JAEA has become an important infrastructure to support computational science and technology. In FY2015, the system was used for R and D aiming to restore Fukushima (nuclear plant decommissioning and environmental restoration) as a priority issue, as well as for JAEA's major projects such as Fast Reactor Cycle System, Fusion R and D and Quantum Beam Science. This report presents a great number of R and D results accomplished by using the system in FY2015, as well as user support, operational records and overviews of the system, and so on. (author)

  7. Summaries of research and development activities by using supercomputer system of JAEA in FY2014. April 1, 2014 - March 31, 2015

    International Nuclear Information System (INIS)

    2016-02-01

    Japan Atomic Energy Agency (JAEA) conducts research and development (R and D) in various fields related to nuclear power as a comprehensive institution of nuclear energy R and Ds, and utilizes computational science and technology in many activities. As shown in the fact that about 20 percent of papers published by JAEA are concerned with R and D using computational science, the supercomputer system of JAEA has become an important infrastructure to support computational science and technology. In FY2014, the system was used for R and D aiming to restore Fukushima (nuclear plant decommissioning and environmental restoration) as a priority issue, as well as for JAEA's major projects such as Fast Reactor Cycle System, Fusion R and D and Quantum Beam Science. This report presents a great number of R and D results accomplished by using the system in FY2014, as well as user support, operational records and overviews of the system, and so on. (author)

  8. Summaries of research and development activities by using supercomputer system of JAEA in FY2013. April 1, 2013 - March 31, 2014

    International Nuclear Information System (INIS)

    2015-02-01

    Japan Atomic Energy Agency (JAEA) conducts research and development (R and D) in various fields related to nuclear power as a comprehensive institution of nuclear energy R and Ds, and utilizes computational science and technology in many activities. As about 20 percent of papers published by JAEA are concerned with R and D using computational science, the supercomputer system of JAEA has become an important infrastructure to support computational science and technology utilization. In FY2013, the system was used not only for JAEA's major projects such as Fast Reactor Cycle System, Fusion R and D and Quantum Beam Science, but also for R and D aiming to restore Fukushima (nuclear plant decommissioning and environmental restoration) as a priority issue. This report presents a great amount of R and D results accomplished by using the system in FY2013, as well as user support, operational records and overviews of the system, and so on. (author)

  9. Summaries of research and development activities by using supercomputer system of JAEA in FY2012. April 1, 2012 - March 31, 2013

    International Nuclear Information System (INIS)

    2014-01-01

    Japan Atomic Energy Agency (JAEA) conducts research and development (R and D) in various fields related to nuclear power as a comprehensive institution of nuclear energy R and Ds, and utilizes computational science and technology in many activities. As more than 20 percent of papers published by JAEA are concerned with R and D using computational science, the supercomputer system of JAEA has become an important infrastructure to support computational science and technology utilization. In FY2012, the system was used not only for JAEA's major projects such as Fast Reactor Cycle System, Fusion R and D and Quantum Beam Science, but also for R and D aiming to restore Fukushima (nuclear plant decommissioning and environmental restoration) as a priority issue. This report presents a great amount of R and D results accomplished by using the system in FY2012, as well as user support, operational records and overviews of the system, and so on. (author)

  10. Summaries of research and development activities by using supercomputer system of JAEA in FY2011. April 1, 2011 - March 31, 2012

    International Nuclear Information System (INIS)

    2013-01-01

    Japan Atomic Energy Agency (JAEA) conducts research and development (R and D) in various fields related to nuclear power as a comprehensive institution of nuclear energy R and Ds, and utilizes computational science and technology in many activities. As more than 20 percent of papers published by JAEA are concerned with R and D using computational science, the supercomputer system of JAEA has become an important infrastructure to support computational science and technology utilization. In FY2011, the system was used for analyses of the accident at the Fukushima Daiichi Nuclear Power Station and establishment of radioactive decontamination plan, as well as the JAEA's major projects such as Fast Reactor Cycle System, Fusion R and D and Quantum Beam Science. This report presents a great amount of R and D results accomplished by using the system in FY2011, as well as user support structure, operational records and overviews of the system, and so on. (author)

  11. Summaries of research and development activities by using JAERI computer system in FY2003. April 1, 2003 - March 31, 2004

    International Nuclear Information System (INIS)

    2005-03-01

    Center for Promotion of Computational Science and Engineering (CCSE) of Japan Atomic Energy Research Institute (JAERI) installed large computer system included super-computers in order to support research and development activities in JAERI. CCSE operates and manages the computer system and network system. This report presents usage records of the JAERI computer system and big user's research and development activities by using the computer system in FY2003 (April 1, 2003 - March 31, 2004). (author)

  12. Communication Characterization and Optimization of Applications Using Topology-Aware Task Mapping on Large Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Sreepathi, Sarat [ORNL; D' Azevedo, Eduardo [ORNL; Philip, Bobby [ORNL; Worley, Patrick H [ORNL

    2016-01-01

    On large supercomputers, the job scheduling systems may assign a non-contiguous node allocation for user applications depending on available resources. With parallel applications using MPI (Message Passing Interface), the default process ordering does not take into account the actual physical node layout available to the application. This contributes to non-locality in terms of physical network topology and impacts communication performance of the application. In order to mitigate such performance penalties, this work describes techniques to identify a suitable task mapping that takes the layout of the allocated nodes as well as the application's communication behavior into account. During the first phase of this research, we instrumented and collected performance data to characterize communication behavior of critical US DOE (United States Department of Energy) applications using an augmented version of the mpiP tool. Subsequently, we developed several reordering methods (spectral bisection, neighbor-join tree, etc.) to combine node layout and application communication data for optimized task placement. We developed a tool called mpiAproxy to facilitate detailed evaluation of the various reordering algorithms without requiring full application executions. This work presents a comprehensive performance evaluation (14,000 experiments) of the various task mapping techniques in lowering communication costs on Titan, the leadership class supercomputer at Oak Ridge National Laboratory.
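Evaluating a candidate task mapping without a full application run, as mpiAproxy does, can be sketched with a toy cost model. The weighted-hops metric, the ring-like communication matrix, and the scattered node layout below are illustrative assumptions, not the paper's actual methodology:

```python
import numpy as np

# comm[i][j] = message volume between MPI ranks i and j (toy values);
# layout[p] = physical position of the p-th allocated node on a 1-D network,
# modeling a non-contiguous allocation.
comm = np.zeros((4, 4))
for i in range(3):                     # nearest-neighbor (ring-like) pattern
    comm[i, i + 1] = comm[i + 1, i] = 10.0
layout = np.array([0, 7, 1, 8])        # scattered node allocation

def cost(mapping):
    # Total volume-weighted physical distance over all rank pairs.
    pos = layout[mapping]
    return sum(comm[i, j] * abs(pos[i] - pos[j])
               for i in range(len(pos)) for j in range(len(pos)))

default = np.arange(4)                 # rank p -> p-th node in allocation order
reordered = np.array([0, 2, 1, 3])     # place heavy communicators on nearby nodes
assert cost(reordered) < cost(default)
```

A reordering method like spectral bisection automates the search for such a permutation on real communication graphs; here the improved mapping is simply written down by hand.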

  13. Center for Information Systems Research Research Briefings 2002

    OpenAIRE

    ROSS, JEANNE W.

    2003-01-01

    This paper is comprised of research briefings from the MIT Sloan School of Management's Center for Information Systems Research (CISR). CISR's mission is to perform practical empirical research on how firms generate business value from IT.

  14. Novel Supercomputing Approaches for High Performance Linear Algebra Using FPGAs, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — Supercomputing plays a major role in many areas of science and engineering, and it has had tremendous impact for decades in areas such as aerospace, defense, energy,...

  15. High temporal resolution mapping of seismic noise sources using heterogeneous supercomputers

    Science.gov (United States)

    Gokhberg, Alexey; Ermert, Laura; Paitz, Patrick; Fichtner, Andreas

    2017-04-01

    Time- and space-dependent distribution of seismic noise sources is becoming a key ingredient of modern real-time monitoring of various geo-systems. Significant interest in seismic noise source maps with high temporal resolution (days) is expected to come from a number of domains, including natural resources exploration, analysis of active earthquake fault zones and volcanoes, as well as geothermal and hydrocarbon reservoir monitoring. Currently, knowledge of noise sources is insufficient for high-resolution subsurface monitoring applications. Near-real-time seismic data, as well as advanced imaging methods to constrain seismic noise sources, have recently become available. These methods are based on the massive cross-correlation of seismic noise records from all available seismic stations in the region of interest and are therefore very computationally intensive. Heterogeneous massively parallel supercomputing systems introduced in recent years combine conventional multi-core CPUs with GPU accelerators and provide an opportunity for a manifold increase in computing performance. Therefore, these systems represent an efficient platform for implementation of a noise source mapping solution. We present the first results of an ongoing research project conducted in collaboration with the Swiss National Supercomputing Centre (CSCS). The project aims at building a service that provides seismic noise source maps for Central Europe with high temporal resolution (days to a few weeks depending on frequency and data availability). The service is hosted on the CSCS computing infrastructure; all computationally intensive processing is performed on the massively parallel heterogeneous supercomputer "Piz Daint". The solution architecture is based on the Application-as-a-Service concept in order to provide interested external researchers with regular access to the noise source maps. The solution architecture includes the following sub-systems: (1) data acquisition responsible for
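The cross-correlation measurement at the heart of this kind of noise-source imaging can be sketched for a single station pair; the synthetic signals and the lag-recovery step below are illustrative only, not the project's processing chain:

```python
import numpy as np

rng = np.random.default_rng(0)
src = rng.standard_normal(2000)  # common noise source signal (synthetic)
lag = 25                         # true inter-station delay in samples

sta_a = src.copy()
sta_b = np.roll(src, lag)        # station B records the source 25 samples later

# Full cross-correlation; the position of its peak estimates the delay
# between the two stations, the basic observable of noise interferometry.
xc = np.correlate(sta_b, sta_a, mode="full")
est = np.argmax(xc) - (len(sta_a) - 1)
assert est == lag
```

Scaling this pairwise measurement to all station pairs of a regional network, at daily cadence, is what makes the workload computationally intensive enough to warrant a supercomputer.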

  16. BSMBench: a flexible and scalable supercomputer benchmark from computational particle physics

    CERN Document Server

    Bennett, Ed; Del Debbio, Luigi; Jordan, Kirk; Patella, Agostino; Pica, Claudio; Rago, Antonio

    2016-01-01

    Benchmarking plays a central role in the evaluation of High Performance Computing architectures. Several benchmarks have been designed that allow users to stress various components of supercomputers. In order for the figures they provide to be useful, benchmarks need to be representative of the most common real-world scenarios. In this work, we introduce BSMBench, a benchmarking suite derived from Monte Carlo code used in computational particle physics. The advantage of this suite (which can be freely downloaded from http://www.bsmbench.org/) over others is the capacity to vary the relative importance of computation and communication. This enables the tests to simulate various practical situations. To showcase BSMBench, we perform a wide range of tests on various architectures, from desktop computers to state-of-the-art supercomputers, and discuss the corresponding results. Possible future directions of development of the benchmark are also outlined.

  17. High Performance Networks From Supercomputing to Cloud Computing

    CERN Document Server

    Abts, Dennis

    2011-01-01

    Datacenter networks provide the communication substrate for large parallel computer systems that form the ecosystem for high performance computing (HPC) systems and modern Internet applications. The design of new datacenter networks is motivated by an array of applications ranging from communication intensive climatology, complex material simulations and molecular dynamics to such Internet applications as Web search, language translation, collaborative Internet applications, streaming video and voice-over-IP. For both Supercomputing and Cloud Computing the network enables distributed applicati

  18. Plasma turbulence calculations on supercomputers

    International Nuclear Information System (INIS)

    Carreras, B.A.; Charlton, L.A.; Dominguez, N.; Drake, J.B.; Garcia, L.; Leboeuf, J.N.; Lee, D.K.; Lynch, V.E.; Sidikman, K.

    1991-01-01

    Although the single-particle picture of magnetic confinement is helpful in understanding some basic physics of plasma confinement, it does not give a full description. Collective effects dominate plasma behavior. Any analysis of plasma confinement requires a self-consistent treatment of the particles and fields. The general picture is further complicated because the plasma, in general, is turbulent. The study of fluid turbulence is a rather complex field by itself. In addition to the difficulties of classical fluid turbulence, plasma turbulence studies face the problems caused by the induced magnetic turbulence, which couples back to the fluid. Since the fluid is not a perfect conductor, this turbulence can lead to changes in the topology of the magnetic field structure, causing the magnetic field lines to wander radially. Because the plasma fluid flows along field lines, they carry the particles with them, and this enhances the losses caused by collisions. The changes in topology are critical for the plasma confinement. The study of plasma turbulence and the concomitant transport is a challenging problem. Because of the importance of solving the plasma turbulence problem for controlled thermonuclear research, the high complexity of the problem, and the necessity of attacking the problem with supercomputers, the study of plasma turbulence in magnetic confinement devices is a Grand Challenge problem

  19. Supercomputers and the future of computational atomic scattering physics

    International Nuclear Information System (INIS)

    Younger, S.M.

    1989-01-01

    The advent of the supercomputer has opened new vistas for the computational atomic physicist. Problems of hitherto unparalleled complexity are now being examined using these new machines, and important connections with other fields of physics are being established. This talk briefly reviews some of the most important trends in computational scattering physics and suggests some exciting possibilities for the future. 7 refs., 2 figs

  20. Re-inventing electromagnetics - Supercomputing solution of Maxwell's equations via direct time integration on space grids

    International Nuclear Information System (INIS)

    Taflove, A.

    1992-01-01

    This paper summarizes the present state and future directions of applying finite-difference and finite-volume time-domain techniques for Maxwell's equations on supercomputers to model complex electromagnetic wave interactions with structures. Applications so far have been dominated by radar cross section technology, but by no means are limited to this area. In fact, the gains we have made place us on the threshold of being able to make tremendous contributions to non-defense electronics and optical technology. Some of the most interesting research in these commercial areas is summarized. 47 refs

  1. Center for Rehabilitation Sciences Research

    Data.gov (United States)

    Federal Laboratory Consortium — The Center for Rehabilitation Sciences Research (CRSR) was established as a research organization to promote successful return to duty and community reintegration of...

  2. Visualization on supercomputing platform level II ASC milestone (3537-1B) results from Sandia.

    Energy Technology Data Exchange (ETDEWEB)

    Geveci, Berk (Kitware, Inc., Clifton Park, NY); Fabian, Nathan; Marion, Patrick (Kitware, Inc., Clifton Park, NY); Moreland, Kenneth D.

    2010-09-01

    This report provides documentation for the completion of the Sandia portion of the ASC Level II Visualization on the platform milestone. This ASC Level II milestone is a joint milestone between Sandia National Laboratories and Los Alamos National Laboratories. This milestone contains functionality required for performing visualization directly on a supercomputing platform, which is necessary for peta-scale visualization. Sandia's contribution concerns in-situ visualization, running a visualization in tandem with a solver. Visualization and analysis of petascale data is limited by several factors which must be addressed as ACES delivers the Cielo platform. Two primary difficulties are: (1) Performance of interactive rendering, which is the most computationally intensive portion of the visualization process. For terascale platforms, commodity clusters with graphics processors (GPUs) have been used for interactive rendering. For petascale platforms, visualization and rendering may be able to run efficiently on the supercomputer platform itself. (2) I/O bandwidth, which limits how much information can be written to disk. If we simply analyze the sparse information that is saved to disk, we miss the opportunity to analyze the rich information produced every timestep by the simulation. For the first issue, we are pursuing in-situ analysis, in which simulations are coupled directly with analysis libraries at runtime. This milestone will evaluate the visualization and rendering performance of current and next generation supercomputers in contrast to GPU-based visualization clusters, and evaluate the performance of common analysis libraries coupled with the simulation that analyze and write data to disk during a running simulation. This milestone will explore, evaluate and advance the maturity level of these technologies and their applicability to problems of interest to the ASC program. Scientific simulation on parallel supercomputers is traditionally performed in four

  3. Water Resources Research Center

    Science.gov (United States)

    Welcome to the University of Hawai'i at Manoa Water Resources Research Center. At WRRC we concentrate on addressing unique water and wastewater management problems and issues by researching water-related issues distinctive to these areas. We are Hawaii's link in a network

  4. Micro-mechanical Simulations of Soils using Massively Parallel Supercomputers

    Directory of Open Access Journals (Sweden)

    David W. Washington

    2004-06-01

    In this research a computer program, Trubal version 1.51, based on the Discrete Element Method, was converted to run on a Connection Machine (CM-5), a massively parallel supercomputer with 512 nodes, to expedite the computational times of simulating geotechnical boundary value problems. The dynamic memory algorithm in the Trubal program did not perform efficiently on the CM-2 machine with the Single Instruction Multiple Data (SIMD) architecture. This was due to the communication overhead involving global array reductions, global array broadcasts and random data movement. Therefore, the dynamic memory algorithm in the Trubal program was converted to a static memory arrangement, and the Trubal program was successfully converted to run on CM-5 machines. The converted program was called "TRUBAL for Parallel Machines (TPM)." Simulating two physical triaxial experiments and comparing simulation results with Trubal simulations validated the TPM program. With a 512-node CM-5 machine TPM produced a nine-fold speedup, demonstrating the inherent parallelism within algorithms based on the Discrete Element Method.

  5. De Novo Ultrascale Atomistic Simulations On High-End Parallel Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Nakano, A; Kalia, R K; Nomura, K; Sharma, A; Vashishta, P; Shimojo, F; van Duin, A; Goddard, III, W A; Biswas, R; Srivastava, D; Yang, L H

    2006-09-04

    /MD simulation on a Grid consisting of 6 supercomputer centers in the US and Japan (in total of 150 thousand processor-hours), in which the number of processors change dynamically on demand and resources are allocated and migrated dynamically in response to faults. Furthermore, performance portability has been demonstrated on a wide range of platforms such as BlueGene/L, Altix 3000, and AMD Opteron-based Linux clusters.

  6. Summaries of research and development activities by using JAERI computer system in FY2004 (April 1, 2004 - March 31, 2005)

    International Nuclear Information System (INIS)

    2005-08-01

    Center for Promotion of Computational Science and Engineering (CCSE) of Japan Atomic Energy Research Institute (JAERI) installed large computer systems including super-computers in order to support research and development activities in JAERI. CCSE operates and manages the computer system and network system. This report presents usage records of the JAERI computer system and the big users' research and development activities by using the computer system in FY2004 (April 1, 2004 - March 31, 2005). (author)

  7. High Temporal Resolution Mapping of Seismic Noise Sources Using Heterogeneous Supercomputers

    Science.gov (United States)

    Paitz, P.; Gokhberg, A.; Ermert, L. A.; Fichtner, A.

    2017-12-01

    The time- and space-dependent distribution of seismic noise sources is becoming a key ingredient of modern real-time monitoring of various geo-systems like earthquake fault zones, volcanoes, geothermal and hydrocarbon reservoirs. We present results of an ongoing research project conducted in collaboration with the Swiss National Supercomputing Centre (CSCS). The project aims at building a service providing seismic noise source maps for Central Europe with high temporal resolution. We use source imaging methods based on the cross-correlation of seismic noise records from all seismic stations available in the region of interest. The service is hosted on the CSCS computing infrastructure; all computationally intensive processing is performed on the massively parallel heterogeneous supercomputer "Piz Daint". The solution architecture is based on the Application-as-a-Service concept to provide interested researchers worldwide with regular access to the noise source maps. The solution architecture includes the following sub-systems: (1) data acquisition responsible for collecting, on a periodic basis, raw seismic records from the European seismic networks, (2) high-performance noise source mapping application responsible for the generation of source maps using cross-correlation of seismic records, (3) back-end infrastructure for the coordination of various tasks and computations, (4) front-end Web interface providing the service to the end-users and (5) data repository. The noise source mapping itself rests on the measurement of logarithmic amplitude ratios in suitably pre-processed noise correlations, and the use of simplified sensitivity kernels. During the implementation we addressed various challenges, in particular, selection of data sources and transfer protocols, automation and monitoring of daily data downloads, ensuring the required data processing performance, design of a general service-oriented architecture for coordination of various sub-systems, and

  8. Multi-petascale highly efficient parallel supercomputer

    Science.gov (United States)

    Asaad, Sameh; Bellofatto, Ralph E.; Blocksome, Michael A.; Blumrich, Matthias A.; Boyle, Peter; Brunheroto, Jose R.; Chen, Dong; Cher, Chen-Yong; Chiu, George L.; Christ, Norman; Coteus, Paul W.; Davis, Kristan D.; Dozsa, Gabor J.; Eichenberger, Alexandre E.; Eisley, Noel A.; Ellavsky, Matthew R.; Evans, Kahn C.; Fleischer, Bruce M.; Fox, Thomas W.; Gara, Alan; Giampapa, Mark E.; Gooding, Thomas M.; Gschwind, Michael K.; Gunnels, John A.; Hall, Shawn A.; Haring, Rudolf A.; Heidelberger, Philip; Inglett, Todd A.; Knudson, Brant L.; Kopcsay, Gerard V.; Kumar, Sameer; Mamidala, Amith R.; Marcella, James A.; Megerian, Mark G.; Miller, Douglas R.; Miller, Samuel J.; Muff, Adam J.; Mundy, Michael B.; O'Brien, John K.; O'Brien, Kathryn M.; Ohmacht, Martin; Parker, Jeffrey J.; Poole, Ruth J.; Ratterman, Joseph D.; Salapura, Valentina; Satterfield, David L.; Senger, Robert M.; Steinmacher-Burow, Burkhard; Stockdell, William M.; Stunkel, Craig B.; Sugavanam, Krishnan; Sugawara, Yutaka; Takken, Todd E.; Trager, Barry M.; Van Oosten, James L.; Wait, Charles D.; Walkup, Robert E.; Watson, Alfred T.; Wisniewski, Robert W.; Wu, Peng

    2018-05-15

    A Multi-Petascale Highly Efficient Parallel Supercomputer of 100 petaflop-scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC). The ASIC nodes are interconnected by a five-dimensional torus network that maximizes the throughput of packet communications between nodes while minimizing latency. The network implements a collective network and a global asynchronous network that provide global barrier and notification functions. The node design also integrates a list-based prefetcher. The memory system implements transactional memory, thread-level speculation, and a multiversioning cache that at the same time improves the soft error rate, and supports DMA functionality allowing for parallel message passing.

  9. Mathematical methods and supercomputing in nuclear applications. Proceedings. Vol. 2

    International Nuclear Information System (INIS)

    Kuesters, H.; Stein, E.; Werner, W.

    1993-04-01

    All papers of the two volumes are separately indexed in the data base. Main topics are: progress in advanced numerical techniques, fluid mechanics, on-line systems, artificial intelligence applications, nodal methods in reactor kinetics, reactor design, supercomputer architecture, probabilistic risk assessment, methods in transport theory, advances in Monte Carlo techniques, and man-machine interface. (orig.)

  10. Mathematical methods and supercomputing in nuclear applications. Proceedings. Vol. 1

    International Nuclear Information System (INIS)

    Kuesters, H.; Stein, E.; Werner, W.

    1993-04-01

    All papers of the two volumes are separately indexed in the data base. Main topics are: progress in advanced numerical techniques, fluid mechanics, on-line systems, artificial intelligence applications, nodal methods in reactor kinetics, reactor design, supercomputer architecture, probabilistic risk assessment, methods in transport theory, advances in Monte Carlo techniques, and man-machine interface. (orig.)

  11. National Rehabilitation Hospital Assistive Technology Research Center

    Science.gov (United States)

    1995-10-01

    Shoulder-Arm Orthoses Several years ago, the Rehabilitation Engineering Research Center (RERC) on Rehabilitation Robotics in Delaware1 identified a... exoskeletal applications for persons with disabilities. 2. Create a center of expertise in rehabilitation technology transfer that benefits persons with...AD COOPERATIVE AGREEMENT NUMBER: DAMD17-94-V-4036 TITLE: National Rehabilitation Hospital Assistive Technology- Research Center PRINCIPAL

  12. Summaries of research and development activities by using JAEA computer system in FY2005. April 1, 2005 - March, 31, 2006

    International Nuclear Information System (INIS)

    2006-10-01

    Center for Promotion of Computational Science and Engineering (CCSE) of Japan Atomic Energy Agency (JAEA) installed large computer systems including super-computers in order to support research and development activities in JAEA. CCSE operates and manages the computer system and network system. This report presents usage records of the JAERI computer system and the big users' research and development activities by using the computer system in FY2005 (April 1, 2005 - March 31, 2006). (author)

  13. Summaries of research and development activities by using JAEA computer system in FY2006. April 1, 2006 - March 31, 2007

    International Nuclear Information System (INIS)

    2008-02-01

    Center for Promotion of Computational Science and Engineering (CCSE) of Japan Atomic Energy Agency (JAEA) installed large computer systems including super-computers in order to support research and development activities in JAEA. CCSE operates and manages the computer system and network system. This report presents usage records of the JAEA computer system and the big users' research and development activities by using the computer system in FY2006 (April 1, 2006 - March 31, 2007). (author)

  14. NASA Langley Research Center outreach in astronautical education

    Science.gov (United States)

    Duberg, J. E.

    1976-01-01

    The Langley Research Center has traditionally maintained an active relationship with the academic community, especially at the graduate level, to promote the Center's research program and to make graduate education available to its staff. Two new institutes at the Center - the Joint Institute for Acoustics and Flight Sciences, and the Institute for Computer Applications - are discussed. Both provide for research activity at the Center by university faculties. The American Society of Engineering Education Summer Faculty Fellowship Program and the NASA-NRC Postdoctoral Resident Research Associateship Program are also discussed.

  15. Design and performance characterization of electronic structure calculations on massively parallel supercomputers

    DEFF Research Database (Denmark)

    Romero, N. A.; Glinsvad, Christian; Larsen, Ask Hjorth

    2013-01-01

    Density function theory (DFT) is the most widely employed electronic structure method because of its favorable scaling with system size and accuracy for a broad range of molecular and condensed-phase systems. The advent of massively parallel supercomputers has enhanced the scientific community...

  16. University of Kentucky Center for Applied Energy Research

    Science.gov (United States)

    The website of the University of Kentucky Center for Applied Energy Research describes the center's research expertise, including environmental remediation and power generation, and provides CAER fact sheets, affiliation information, research contacts, and publications.

  17. Aviation Research and the Internet

    Science.gov (United States)

    Scott, Antoinette M.

    1995-01-01

    The Internet is a network of networks. It was originally funded by the Defense Advanced Research Projects Agency (DOD/DARPA) and evolved in part from the connection of supercomputer sites across the United States. The National Science Foundation (NSF) made the most of its supercomputers by connecting the sites to each other. This made the supercomputers more efficient and now allows scientists, engineers, and researchers to access them from their own labs and offices. The high-speed networks that connect the NSF supercomputers form the backbone of the Internet. The World Wide Web (WWW) is a menu system that gathers Internet resources from all over the world into a series of screens that appear on your computer. The WWW is also a distributed system: information is stored on many computers (servers), which retrieve the data when you ask for it. Hypermedia is the basis of the WWW; one can 'click' on a section and visit other hypermedia (pages). Our approach to demonstrating the importance of aviation research through the Internet began with learning how to put pages on the Internet (on-line) ourselves. We were assigned two aviation companies, Vision Micro Systems Inc. and Innovative Aerodynamic Technologies (IAT), and developed home pages for these SBIR companies. The pages were created on UNIX and Macintosh machines, written with HTML Supertext software, and the images were scanned with a Sharp JX600S scanner. As a result, with the use of the UNIX, Macintosh, Sun, PC, and AXIL machines, we were able to present our home pages to over 800,000 visitors.

  18. Annual report of Research Center for Nuclear Physics, Osaka University. 1997 (April 1, 1997-March 31, 1998)

    International Nuclear Information System (INIS)

    Toki, Hiroshi; Sakai, Tsutomu; Hirata, Maiko

    1998-01-01

    Research Center for Nuclear Physics (RCNP) is the national center for nuclear physics in Japan, a laboratory complex comprising the cyclotron laboratory, the laser electron photon laboratory, and the Oto underground laboratory, which aims at studies of nucleon-meson nuclear physics and quark-lepton nuclear physics. In the cyclotron laboratory, the AVF/Ring cyclotron complex provides high-quality beams of polarized protons and light ions in the medium-energy region. Experimental studies have been carried out extensively on nucleon-meson nuclear physics. The subjects studied include the nucleon mass and the nuclear interaction in the nuclear medium, nuclear spin-isospin motions, nuclear responses to neutrinos, pion and isobar interactions, medium-energy nuclear reactions of light and heavy ions, medical applications, and so on. The Oto Cosmo Observatory is the low-background underground laboratory for lepton nuclear physics and is also used for applied science. The laser photon laboratory is used to study quark nuclear physics by means of the multi-GeV laser electron photon beam, and will be ready in the academic year 1998 for studying quark-gluon structures and low-energy QCD. Accelerator research and development is being carried out for the future plan of a multi-GeV electron-proton collider. Theoretical work on nuclear and particle physics has been carried out extensively by the RCNP theory and laser groups. Computer, network, and DAQ systems, including the supercomputer system and the new-generation network, have been developed. In this report, 25 reports on nuclear physics, 8 on lepton nuclear physics, 1 on quark nuclear physics, and 2 on interdisciplinary physics are given for experimental nuclear physics; and 16 reports on quark nuclear physics, 9 on intermediate-energy nuclear physics, 19 on nuclear physics, and 1 miscellaneous report are given for theoretical physics. (G.K.)

  19. Harnessing Petaflop-Scale Multi-Core Supercomputing for Problems in Space Science

    Science.gov (United States)

    Albright, B. J.; Yin, L.; Bowers, K. J.; Daughton, W.; Bergen, B.; Kwan, T. J.

    2008-12-01

    The particle-in-cell kinetic plasma code VPIC has been migrated successfully to the world's fastest supercomputer, Roadrunner, a hybrid multi-core platform built by IBM for the Los Alamos National Laboratory. How this was achieved will be described and examples of state-of-the-art calculations in space science, in particular, the study of magnetic reconnection, will be presented. With VPIC on Roadrunner, we have performed, for the first time, plasma PIC calculations with over one trillion particles, >100× larger than calculations considered "heroic" by community standards. This allows examination of physics at unprecedented scale and fidelity. Roadrunner is an example of an emerging paradigm in supercomputing: the trend toward multi-core systems with deep hierarchies, where memory bandwidth optimization is vital to achieving high performance. Getting VPIC to perform well on such systems is a formidable challenge: the core algorithm is memory bandwidth limited with low compute-to-data ratio and requires random access to memory in its inner loop. That we were able to get VPIC to perform and scale well, achieving >0.374 Pflop/s and linear weak scaling on real physics problems on up to the full 12240-core Roadrunner machine, bodes well for harnessing these machines for our community's needs in the future. Many of the design considerations encountered carry over to other multi-core and accelerated (e.g., via GPU) platforms, and we modified VPIC with flexibility in mind. These will be summarized and strategies for how one might adapt a code for such platforms will be shared. Work performed under the auspices of the U.S. DOE by the LANS LLC Los Alamos National Laboratory. Dr. Bowers is a LANL Guest Scientist; he is presently at D. E. Shaw Research LLC, 120 W 45th Street, 39th Floor, New York, NY 10036.

  20. 70 Years of Aeropropulsion Research at NASA Glenn Research Center

    Science.gov (United States)

    Reddy, Dhanireddy R.

    2013-01-01

    This paper presents a brief overview of air-breathing propulsion research conducted at the NASA Glenn Research Center (GRC) over the past 70 years. It includes a historical perspective of the center and its various stages of propulsion research in response to the country's different periods of crises and growth opportunities. GRC's research and technology development covered a broad spectrum, from a short-term focus on improving the energy efficiency of aircraft engines to advancing the frontier technologies of high-speed aviation in the supersonic and hypersonic speed regimes. This paper highlights major research programs, showing their impact on industry and aircraft propulsion, and briefly discusses current research programs and future aeropropulsion technology trends in related areas.

  1. Colorado Learning Disabilities Research Center.

    Science.gov (United States)

    DeFries, J. C.; And Others

    1997-01-01

    Results obtained from the center's six research projects are reviewed, including research on psychometric assessment of twins with reading disabilities, reading and language processes, attention deficit-hyperactivity disorder and executive functions, linkage analysis and physical mapping, computer-based remediation of reading disabilities, and…

  2. Integration of PanDA workload management system with Titan supercomputer at OLCF

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00300320; Klimentov, Alexei; Oleynik, Danila; Panitkin, Sergey; Petrosyan, Artem; Vaniachine, Alexandre; Wenaus, Torre; Schovancova, Jaroslava

    2015-01-01

    The PanDA (Production and Distributed Analysis) workload management system (WMS) was developed to meet the scale and complexity of LHC distributed computing for the ATLAS experiment. While PanDA currently distributes jobs to more than 100,000 cores at well over 100 Grid sites, the next LHC data-taking run will require more resources than Grid computing can possibly provide. To alleviate these challenges, ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with the Titan supercomputer at Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA pilot framework for job submission to Titan's batch queues and local data management, with light-weight MPI wrappers to run single threaded workloads in parallel on Titan's multi-core worker nodes. It also gives PanDA new capability to collect, in real time, information about unused...

  3. Integration of PanDA workload management system with Titan supercomputer at OLCF

    CERN Document Server

    Panitkin, Sergey; The ATLAS collaboration; Klimentov, Alexei; Oleynik, Danila; Petrosyan, Artem; Schovancova, Jaroslava; Vaniachine, Alexandre; Wenaus, Torre

    2015-01-01

    The PanDA (Production and Distributed Analysis) workload management system (WMS) was developed to meet the scale and complexity of LHC distributed computing for the ATLAS experiment. While PanDA currently uses more than 100,000 cores at well over 100 Grid sites with a peak performance of 0.3 petaFLOPS, the next LHC data-taking run will require more resources than Grid computing can possibly provide. To alleviate these challenges, ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with the Titan supercomputer at Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA pilot framework for job submission to Titan's batch queues and local data management, with light-weight MPI wrappers to run single threaded workloads in parallel on Titan's multi-core worker nodes. It also gives PanDA new capability to collect, in real tim...

  4. ParaBTM: A Parallel Processing Framework for Biomedical Text Mining on Supercomputers.

    Science.gov (United States)

    Xing, Yuting; Wu, Chengkun; Yang, Xi; Wang, Wei; Zhu, En; Yin, Jianping

    2018-04-27

    A prevailing way of extracting valuable information from biomedical literature is to apply text mining methods on unstructured texts. However, the massive amount of literature that needs to be analyzed poses a big data challenge to the processing efficiency of text mining. In this paper, we address this challenge by introducing parallel processing on a supercomputer. We developed paraBTM, a runnable framework that enables parallel text mining on the Tianhe-2 supercomputer. It employs a low-cost yet effective load balancing strategy to maximize the efficiency of parallel processing. We evaluated the performance of paraBTM on several datasets, utilizing three types of named entity recognition tasks as demonstration. Results show that, in most cases, the processing efficiency can be greatly improved with parallel processing, and the proposed load balancing strategy is simple and effective. In addition, our framework can be readily applied to other tasks of biomedical text mining besides NER.
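
The paper does not spell out its load balancing strategy here. A common low-cost approach consistent with the description is greedy longest-processing-time-first assignment: sort documents by size and always hand the next one to the least-loaded worker. The sketch below is a generic illustration with hypothetical names, not paraBTM's actual scheduler.

```python
import heapq

def assign_documents(doc_sizes, n_workers):
    """Longest-processing-time-first: hand each document, largest first,
    to the currently least-loaded worker. Returns worker -> doc indices."""
    heap = [(0, w) for w in range(n_workers)]        # (load, worker id)
    heapq.heapify(heap)
    assignment = {w: [] for w in range(n_workers)}
    for doc in sorted(range(len(doc_sizes)), key=lambda i: -doc_sizes[i]):
        load, w = heapq.heappop(heap)                # least-loaded worker
        assignment[w].append(doc)
        heapq.heappush(heap, (load + doc_sizes[doc], w))
    return assignment

sizes = [9, 5, 5, 4, 3, 2]
plan = assign_documents(sizes, 2)
loads = sorted(sum(sizes[i] for i in docs) for docs in plan.values())
```

With the sample sizes above, the two workers end up with loads of 13 and 15, close to the ideal even split of 14.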

  5. Explaining the gap between theoretical peak performance and real performance for supercomputer architectures

    International Nuclear Information System (INIS)

    Schoenauer, W.; Haefner, H.

    1993-01-01

    The basic architectures of vector and parallel computers and their properties are presented. Then the memory size and the arithmetic operations are discussed in the context of memory bandwidth. For an exemplary discussion of a single operation, micro-measurements of the vector triad for the IBM 3090 VF and the CRAY Y-MP/8 are presented; they reveal the details of the losses for a single operation. We then analyze the global performance of a whole supercomputer by identifying the reduction factors that bring the theoretical peak performance down to the poor real performance. The responsibilities of the manufacturer and of the user for these losses are discussed. The price-performance ratio for different architectures, in a snapshot of January 1991, is briefly mentioned. Finally, some remarks on a user-friendly architecture for a supercomputer are made. (orig.)
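
The vector triad mentioned above is the kernel a(i) = b(i) + c(i)*d(i), timed over long vectors: two floating-point operations per element against four memory streams, which is exactly what makes it a memory-bandwidth probe. A minimal timing sketch (in Python/NumPy rather than the Fortran used on those machines; names and vector contents are illustrative):

```python
import time
import numpy as np

def vector_triad(n, repeats=10):
    """Time the vector triad a = b + c * d over n doubles and report the
    best rate in MFLOP/s (two flops per element, four memory streams)."""
    b = np.full(n, 1.0)
    c = np.full(n, 2.0)
    d = np.full(n, 0.5)
    a = np.empty(n)
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        np.multiply(c, d, out=a)   # a = c * d
        np.add(b, a, out=a)        # a = b + c * d, computed in place
        best = min(best, time.perf_counter() - t0)
    return a, 2.0 * n / best / 1e6
```

Sweeping n from cache-resident to memory-resident sizes makes the performance drop at each level of the memory hierarchy visible, which is the kind of loss the micro-measurements in the paper quantify.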

  6. HPL and STREAM Benchmarks on SANAM Supercomputer

    KAUST Repository

    Bin Sulaiman, Riman A.

    2017-01-01

    SANAM supercomputer was jointly built by KACST and FIAS in 2012 ranking second that year in the Green500 list with a power efficiency of 2.3 GFLOPS/W (Rohr et al., 2014). It is a heterogeneous accelerator-based HPC system that has 300 compute nodes. Each node includes two Intel Xeon E5-2650 CPUs, two AMD FirePro S10000 dual GPUs and 128 GiB of main memory. In this work, the seven benchmarks of HPCC were installed and configured to reassess the performance of SANAM, as part of an unpublished master thesis, after it was reassembled in the Kingdom of Saudi Arabia. We present here detailed results of HPL and STREAM benchmarks.

  7. HPL and STREAM Benchmarks on SANAM Supercomputer

    KAUST Repository

    Bin Sulaiman, Riman A.

    2017-03-13

    SANAM supercomputer was jointly built by KACST and FIAS in 2012 ranking second that year in the Green500 list with a power efficiency of 2.3 GFLOPS/W (Rohr et al., 2014). It is a heterogeneous accelerator-based HPC system that has 300 compute nodes. Each node includes two Intel Xeon E5-2650 CPUs, two AMD FirePro S10000 dual GPUs and 128 GiB of main memory. In this work, the seven benchmarks of HPCC were installed and configured to reassess the performance of SANAM, as part of an unpublished master thesis, after it was reassembled in the Kingdom of Saudi Arabia. We present here detailed results of HPL and STREAM benchmarks.

  8. Parallel processor programs in the Federal Government

    Science.gov (United States)

    Schneck, P. B.; Austin, D.; Squires, S. L.; Lehmann, J.; Mizell, D.; Wallgren, K.

    1985-01-01

    In 1982, a report dealing with the nation's research needs in high-speed computing called for increased access to supercomputing resources for the research community, research in computational mathematics, and increased research in the technology base needed for the next generation of supercomputers. Since that time a number of programs addressing future generations of computers, particularly parallel processors, have been started by U.S. government agencies. The present paper provides a description of the largest government programs in parallel processing. Established in fiscal year 1985 by the Institute for Defense Analyses for the National Security Agency, the Supercomputing Research Center will pursue research to advance the state of the art in supercomputing. Attention is also given to the DOE applied mathematical sciences research program, the NYU Ultracomputer project, the DARPA multiprocessor system architectures program, NSF research on multiprocessor systems, ONR activities in parallel computing, and NASA parallel processor projects.

  9. Final priority; National Institute on Disability and Rehabilitation Research--Disability and Rehabilitation Research Projects and Centers Program--Rehabilitation Engineering Research Centers. Final priority.

    Science.gov (United States)

    2013-06-14

    The Assistant Secretary for Special Education and Rehabilitative Services announces a priority for a Rehabilitation Engineering Research Center (RERC) on Universal Interfaces and Information Technology Access under the Disability and Rehabilitation Research Projects and Centers Program administered by the National Institute on Disability and Rehabilitation Research (NIDRR). The Assistant Secretary may use this priority for a competition in fiscal year (FY) 2013 and later years. We take this action to focus research attention on areas of national need. We intend to use this priority to improve outcomes for individuals with disabilities.

  10. Unique life sciences research facilities at NASA Ames Research Center

    Science.gov (United States)

    Mulenburg, G. M.; Vasques, M.; Caldwell, W. F.; Tucker, J.

    1994-01-01

    The Life Science Division at NASA's Ames Research Center has a suite of specialized facilities that enable scientists to study the effects of gravity on living systems. This paper describes some of these facilities and their use in research. Seven centrifuges, each with its own unique abilities, allow testing of a variety of parameters on test subjects ranging from single cells through hardware to humans. The Vestibular Research Facility allows the study of both centrifugation and linear acceleration on animals and humans. The Biocomputation Center uses computers for 3D reconstruction of physiological systems, and interactive research tools for virtual reality modeling. Psychophysiological, cardiovascular, exercise physiology, and biomechanical studies are conducted in the 12-bed Human Research Facility, and samples are analyzed in the certified Central Clinical Laboratory and other laboratories at Ames. Human bedrest, water immersion and lower body negative pressure equipment are also available to study physiological changes associated with weightlessness. These and other weightlessness models are used in specialized laboratories for the study of basic physiological mechanisms, metabolism and cell biology. Visual-motor performance, perception, and adaptation are studied using ground-based models as well as short-term weightlessness experiments (parabolic flights). The unique combination of Life Science research facilities, laboratories, and equipment at Ames Research Center is described in detail in relation to their research contributions.

  11. The prevention research centers' managing epilepsy well network.

    Science.gov (United States)

    DiIorio, Colleen K; Bamps, Yvan A; Edwards, Ariele L; Escoffery, Cam; Thompson, Nancy J; Begley, Charles E; Shegog, Ross; Clark, Noreen M; Selwa, Linda; Stoll, Shelley C; Fraser, Robert T; Ciechanowski, Paul; Johnson, Erica K; Kobau, Rosemarie; Price, Patricia H

    2010-11-01

    The Managing Epilepsy Well (MEW) Network was created in 2007 by the Centers for Disease Control and Prevention's (CDC) Prevention Research Centers and Epilepsy Program to promote epilepsy self-management research and to improve the quality of life for people with epilepsy. MEW Network membership comprises four collaborating centers (Emory University, University of Texas Health Science Center at Houston, University of Michigan, and University of Washington), representatives from CDC, affiliate members, and community stakeholders. This article describes the MEW Network's background, mission statement, research agenda, and structure. Exploratory and intervention studies conducted by individual collaborating centers are described, as are Network collaborative projects, including a multisite depression prevention intervention and the development of a standard measure of epilepsy self-management. Communication strategies and examples of research translation programs are discussed. The conclusion outlines the Network's role in the future development and dissemination of evidence-based epilepsy self-management programs.

  12. Multi-Institution Research Centers: Planning and Management Challenges

    Science.gov (United States)

    Spooner, Catherine; Lavey, Lisa; Mukuka, Chilandu; Eames-Brown, Rosslyn

    2016-01-01

    Funding multi-institution centers of research excellence (CREs) has become a common means of supporting collaborative partnerships to address specific research topics. However, there is little guidance for those planning or managing a multi-institution CRE, which faces specific challenges not faced by single-institution research centers. We…

  13. An efficient implementation of a backpropagation learning algorithm on quadrics parallel supercomputer

    International Nuclear Information System (INIS)

    Taraglio, S.; Massaioli, F.

    1995-08-01

    A parallel implementation of a library to build and train Multi Layer Perceptrons via the Back Propagation algorithm is presented. The target machine is the SIMD massively parallel supercomputer Quadrics. Performance measures are provided on three different machines with different number of processors, for two network examples. A sample source code is given
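
As a rough illustration of what such a library computes per training step, here is a minimal serial backpropagation step for a one-hidden-layer perceptron with sigmoid hidden units, a linear output, and mean-squared-error loss. This is a generic sketch, not the Quadrics implementation; all names, shapes, and the toy learning task are illustrative.

```python
import numpy as np

def train_step(X, y, params, lr=0.1):
    """One backpropagation step; params is [W1, b1, W2, b2], updated in place."""
    W1, b1, W2, b2 = params
    # Forward pass
    h = 1.0 / (1.0 + np.exp(-(X @ W1 + b1)))       # sigmoid hidden layer
    out = h @ W2 + b2                              # linear output layer
    err = out - y
    loss = float(np.mean(err ** 2))
    # Backward pass: propagate the error through both layers
    g_out = 2.0 * err / len(X)                     # dLoss/dout
    g_W2, g_b2 = h.T @ g_out, g_out.sum(0)
    g_h = (g_out @ W2.T) * h * (1.0 - h)           # chain rule, sigmoid'
    g_W1, g_b1 = X.T @ g_h, g_h.sum(0)
    # Gradient-descent update
    W1 -= lr * g_W1; b1 -= lr * g_b1
    W2 -= lr * g_W2; b2 -= lr * g_b2
    return loss

# Tiny demo: learn y = x0 + x1 from random samples.
rng = np.random.default_rng(1)
X = rng.standard_normal((32, 2))
y = X.sum(axis=1, keepdims=True)
params = [0.5 * rng.standard_normal((2, 8)), np.zeros(8),
          0.5 * rng.standard_normal((8, 1)), np.zeros(1)]
losses = [train_step(X, y, params) for _ in range(200)]
```

On a SIMD machine like Quadrics, the matrix products in the forward and backward passes are what gets distributed across the processor array.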

  14. Transportation Research & Analysis Computing Center

    Data.gov (United States)

    Federal Laboratory Consortium — The technical objectives of the TRACC project included the establishment of a high performance computing center for use by USDOT research teams, including those from...

  15. Tools for 3D scientific visualization in computational aerodynamics at NASA Ames Research Center

    International Nuclear Information System (INIS)

    Bancroft, G.; Plessel, T.; Merritt, F.; Watson, V.

    1989-01-01

    Hardware, software, and techniques used by the Fluid Dynamics Division (NASA) for performing visualization of computational aerodynamics, which can be applied to the visualization of flow fields from computer simulations of fluid dynamics about the Space Shuttle, are discussed. Three visualization techniques applied, post-processing, tracking, and steering, are described, as well as the post-processing software packages used, PLOT3D, SURF (Surface Modeller), GAS (Graphical Animation System), and FAST (Flow Analysis software Toolkit). Using post-processing methods a flow simulation was executed on a supercomputer and, after the simulation was complete, the results were processed for viewing. It is shown that the high-resolution, high-performance three-dimensional workstation combined with specially developed display and animation software provides a good tool for analyzing flow field solutions obtained from supercomputers. 7 refs

  16. Louisiana Transportation Research Center : Annual report, 2016-2017

    Science.gov (United States)

    2017-10-11

    This publication is a report of the transportation research, technology transfer, education, and training activities of the Louisiana Transportation Research Center for July 1, 2016 - June 30, 2017. The center is sponsored jointly by the Louisiana De...

  17. COMPUTATIONAL SCIENCE CENTER

    International Nuclear Information System (INIS)

    DAVENPORT, J.

    2006-01-01

    Computational Science is an integral component of Brookhaven's multi-science mission, and is a reflection of the increased role of computation across all of science. Brookhaven currently has major efforts in data storage and analysis for the Relativistic Heavy Ion Collider (RHIC) and the ATLAS detector at CERN, and in quantum chromodynamics. The Laboratory is host for the QCDOC machines (quantum chromodynamics on a chip), 10 teraflop/s computers which boast 12,288 processors each. There are two here, one for the Riken/BNL Research Center and the other supported by DOE for the US Lattice Gauge Community and other scientific users. A 100 teraflop/s supercomputer will be installed at Brookhaven in the coming year, managed jointly by Brookhaven and Stony Brook, and funded by a grant from New York State. This machine will be used for computational science across Brookhaven's entire research program, and also by researchers at Stony Brook and across New York State. With Stony Brook, Brookhaven has formed the New York Center for Computational Science (NYCCS) as a focal point for interdisciplinary computational science, which is closely linked to Brookhaven's Computational Science Center (CSC). The CSC has established a strong program in computational science, with an emphasis on nanoscale electronic structure and molecular dynamics, accelerator design, computational fluid dynamics, medical imaging, parallel computing and numerical algorithms. We have been an active participant in DOE's SciDAC program (Scientific Discovery through Advanced Computing). We are also planning a major expansion in computational biology in keeping with Laboratory initiatives. Additional laboratory initiatives with a dependence on a high level of computation include the development of hydrodynamics models for the interpretation of RHIC data, computational models for the atmospheric transport of aerosols, and models for combustion and for energy utilization. The CSC was formed to bring together

  18. CCR Interns | Center for Cancer Research

    Science.gov (United States)

    The Cancer Research Interns (CRI) Summer Program was inaugurated in 2004 to provide an open door for students looking for an initial training opportunity. The goal is to enhance diversity within the CCR (Center for Cancer Research) training program, and we have placed 338 students from 2004 to 2017 in labs and branches across the division. The CCR and the Center for Cancer Training's Office of Training and Education provide stipend support, some Service & Supply funds, and travel support for those students who meet the financial eligibility criteria (

  19. Integration of PanDA workload management system with Titan supercomputer at OLCF

    Science.gov (United States)

    De, K.; Klimentov, A.; Oleynik, D.; Panitkin, S.; Petrosyan, A.; Schovancova, J.; Vaniachine, A.; Wenaus, T.

    2015-12-01

    The PanDA (Production and Distributed Analysis) workload management system (WMS) was developed to meet the scale and complexity of LHC distributed computing for the ATLAS experiment. While PanDA currently distributes jobs to more than 100,000 cores at well over 100 Grid sites, future LHC data-taking runs will require more resources than Grid computing can possibly provide. To alleviate these challenges, ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We describe a project aimed at the integration of the PanDA WMS with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA pilot framework for job submission to Titan's batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on Titan's multicore worker nodes. It also gives PanDA a new capability to collect, in real time, information about unused worker nodes on Titan, which allows precise definition of the size and duration of jobs submitted to Titan according to available free resources. This capability significantly reduces PanDA job wait time while improving Titan's utilization efficiency. This implementation was tested with a variety of Monte Carlo workloads on Titan and is being tested on several other supercomputing platforms. Notice: This manuscript has been authored by employees of Brookhaven Science Associates, LLC under Contract No. DE-AC02-98CH10886 with the U.S. Department of Energy. The publisher, by accepting the manuscript for publication, acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes.
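    The real-time backfill idea described above, sizing a job to fit the unused worker nodes currently reported by the batch system, can be sketched roughly as follows. This is an illustrative sketch only: the function, the slot format, and the safety margin are hypothetical, not the actual PanDA pilot or OLCF interfaces.

```python
# Hypothetical sketch of backfill job sizing: pick a job shape that fits
# the best currently-unused slot reported by the batch scheduler.

def pick_backfill_job(free_slots, min_nodes=16, safety_margin_s=300):
    """free_slots: list of (nodes, seconds_until_reclaimed) tuples.
    Returns (nodes, walltime_s) for a job that fits the best slot,
    or None if no slot is large enough to be worth using."""
    usable = [(n, t - safety_margin_s) for n, t in free_slots
              if n >= min_nodes and t > safety_margin_s]
    if not usable:
        return None
    # Prefer the slot offering the most idle node-seconds of capacity.
    nodes, walltime = max(usable, key=lambda s: s[0] * s[1])
    return nodes, walltime

print(pick_backfill_job([(8, 7200), (300, 3600), (1000, 200)]))
# -> (300, 3300): 300 nodes for the 3600 s window minus the margin
```

    Matching job duration to the backfill window is what lets such jobs start immediately instead of waiting in the queue, which is the wait-time reduction the abstract reports.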

  20. Building more powerful less expensive supercomputers using Processing-In-Memory (PIM) LDRD final report.

    Energy Technology Data Exchange (ETDEWEB)

    Murphy, Richard C.

    2009-09-01

    This report details the accomplishments of the 'Building More Powerful Less Expensive Supercomputers Using Processing-In-Memory (PIM)' LDRD ('PIM LDRD', number 105809) for FY07-FY09. Latency dominates all levels of supercomputer design. Within a node, increasing memory latency, relative to processor cycle time, limits CPU performance. Between nodes, the same increase in relative latency impacts scalability. Processing-In-Memory (PIM) is an architecture that directly addresses this problem using enhanced chip fabrication technology and machine organization. PIMs combine high-speed logic and dense, low-latency, high-bandwidth DRAM, and lightweight threads that tolerate latency by performing useful work during memory transactions. This work examines the potential of PIM-based architectures to support mission critical Sandia applications and an emerging class of more data intensive informatics applications. This work has resulted in a stronger architecture/implementation collaboration between 1400 and 1700. Additionally, key technology components have impacted vendor roadmaps, and we are in the process of pursuing these new collaborations. This work has the potential to impact future supercomputer design and construction, reducing power and increasing performance. This final report is organized as follows: this summary chapter discusses the impact of the project (Section 1), provides an enumeration of publications and other public discussion of the work (Section 1), and concludes with a discussion of future work and impact from the project (Section 1). The appendix contains reprints of the refereed publications resulting from this work.

  1. Publisher Correction

    DEFF Research Database (Denmark)

    Bonàs-Guarch, Sílvia; Guindo-Martínez, Marta; Miguel-Escalada, Irene

    2018-01-01

    In the originally published version of this Article, the affiliation details for Santi González, Jian'an Luan and Claudia Langenberg were inadvertently omitted. Santi González should have been affiliated with 'Barcelona Supercomputing Center (BSC), Joint BSC-CRG-IRB Research Program in Computatio...

  2. Supercomputers and the mathematical modeling of high complexity problems

    International Nuclear Information System (INIS)

    Belotserkovskii, Oleg M

    2010-01-01

    This paper is a review of many works carried out by members of our scientific school in past years. The general principles of constructing numerical algorithms for high-performance computers are described. Several techniques are highlighted and these are based on the method of splitting with respect to physical processes and are widely used in computing nonlinear multidimensional processes in fluid dynamics, in studies of turbulence and hydrodynamic instabilities and in medicine and other natural sciences. The advances and developments related to the new generation of high-performance supercomputing in Russia are presented.

  3. Illinois Accelerator Research Center

    Science.gov (United States)

    Kroc, Thomas K.; Cooper, Charlie A.

    The Illinois Accelerator Research Center (IARC) hosts a new accelerator development program at Fermi National Accelerator Laboratory. IARC provides access to Fermi's state-of-the-art facilities and technologies for research, development and industrialization of particle accelerator technology. In addition to facilitating access to available existing Fermi infrastructure, the IARC Campus has a dedicated 36,000 ft² Heavy Assembly Building (HAB) with all the infrastructure needed to develop, commission and operate new accelerators. Connected to the HAB is a 47,000 ft² Office, Technology and Engineering (OTE) building, paid for by the state, that has office, meeting, and light technical space. The OTE building, which contains the Accelerator Physics Center, and nearby Accelerator and Technical divisions provide IARC collaborators with unique access to world class expertise in a wide array of accelerator technologies. At IARC scientists and engineers from Fermilab and academia work side by side with industrial partners to develop breakthroughs in accelerator science and translate them into applications for the nation's health, wealth and security.

  4. Using the LANSCE irradiation facility to predict the number of fatal soft errors in one of the world's fastest supercomputers

    International Nuclear Information System (INIS)

    Michalak, S.E.; Harris, K.W.; Hengartner, N.W.; Takala, B.E.; Wender, S.A.

    2005-01-01

    Los Alamos National Laboratory (LANL) is home to the Los Alamos Neutron Science Center (LANSCE). LANSCE is a unique facility because its neutron spectrum closely mimics the neutron spectrum at terrestrial and aircraft altitudes, but is many times more intense. Thus, LANSCE provides an ideal setting for accelerated testing of semiconductor and other devices that are susceptible to cosmic ray induced neutrons. Many industrial companies use LANSCE to estimate device susceptibility to cosmic ray induced neutrons, and it has also been used to test parts from one of LANL's supercomputers, the ASC (Advanced Simulation and Computing Program) Q. This paper discusses our use of the LANSCE facility to study components in Q including a comparison with failure data from Q

  5. Research Centers: Ecstasies & Agonies [in HRD].

    Science.gov (United States)

    1995

    These four papers are from a symposium facilitated by Gene Roth on research centers at the 1995 Academy of Human Resource Development (HRD) conference. "Research: The Thin Blue Line between Rigor and Reality" (Michael Leimbach) discusses the need for HRD research to increase its speed and rigor and help organizations focus on capability…

  6. Heat dissipation computations of a HVDC ground electrode using a supercomputer

    International Nuclear Information System (INIS)

    Greiss, H.; Mukhedkar, D.; Lagace, P.J.

    1990-01-01

    This paper reports on the temperature of soil surrounding a High Voltage Direct Current (HVDC) toroidal ground electrode of practical dimensions, in both homogeneous and non-homogeneous soils, computed at incremental points in time using finite difference methods on a supercomputer. Curves of the response were computed and plotted at several locations within the soil in the vicinity of the ground electrode for various values of the soil parameters.
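    As a toy illustration of the class of method the paper applies (in more dimensions, and with realistic soil parameters and electrode geometry), a one-dimensional explicit finite-difference scheme for heat diffusion can be written as follows. All numbers are made up for the example.

```python
# Illustrative 1-D explicit finite-difference (FTCS) step for the heat
# equation dT/dt = alpha * d2T/dx2, with fixed-temperature boundaries.

def heat_step(T, alpha, dx, dt):
    """Advance the temperature profile T one time step."""
    r = alpha * dt / dx**2
    assert r <= 0.5, "explicit FTCS scheme is unstable for r > 0.5"
    new = T[:]
    for i in range(1, len(T) - 1):
        new[i] = T[i] + r * (T[i+1] - 2*T[i] + T[i-1])
    return new

# Hot electrode held at the left boundary, ambient soil elsewhere.
T = [100.0] + [20.0] * 9
for _ in range(50):
    T = heat_step(T, alpha=1e-6, dx=0.1, dt=2000.0)
print(round(T[1], 1))  # temperature one node away from the electrode
```

    The stability check on r is the standard constraint for explicit schemes; implicit methods relax it at the cost of solving a linear system each step.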

  7. Argonne National Lab deploys Force10 networks' massively dense ethernet switch for supercomputing cluster

    CERN Multimedia

    2003-01-01

    "Force10 Networks, Inc. today announced that Argonne National Laboratory (Argonne, IL) has successfully deployed Force10 E-Series switch/routers to connect to the TeraGrid, the world's largest supercomputing grid, sponsored by the National Science Foundation (NSF)" (1/2 page).

  8. Information on the Karlsruhe Nuclear Research Center

    International Nuclear Information System (INIS)

    Reuter, H.H.

    1980-01-01

    A short overview is given about the origins of Karlsruhe Nuclear Research Center. The historical development of the different companies operating the Center is shown. Because the original task assigned to the Center was the construction and testing of the first German reactor exclusively built by German companies, a detailed description of this reactor and the changes made afterwards is presented. Next, today's organizational structure of the Center is outlined and the development of the Center's financing since its foundation is shown. A short overview about the structure of employees from the Center's beginning up to now is also included as well as a short description of today's main activities. (orig.)

  9. A supercomputing application for reactors core design and optimization

    International Nuclear Information System (INIS)

    Hourcade, Edouard; Gaudier, Fabrice; Arnaud, Gilles; Funtowiez, David; Ammar, Karim

    2010-01-01

    Advanced nuclear reactor design is often an intuition-driven process where designers first develop or use simplified simulation tools for each physical phenomenon involved. As the project develops, complexity in each discipline increases, and implementation of chaining/coupling capabilities adapted to a supercomputing optimization process is often postponed to a later step, so the task gets increasingly challenging. In the context of renewal in reactor designs, projects of first realization are often run in parallel with advanced design, although they are very dependent on final options. As a consequence, tools are needed to globally assess and optimize reactor core features with the accuracy of the on-going design methods. This should be possible within reasonable simulation time and without advanced computer skills at the project management scale. Also, these tools should be ready to easily cope with modeling progress in each discipline through the project life-time. An early-stage development of a multi-physics package adapted to supercomputing is presented. The URANIE platform, developed at CEA and based on the Data Analysis Framework ROOT, is very well adapted to this approach. It allows diversified sampling techniques (SRS, LHS, qMC), fitting tools (neural networks, ...) and optimization techniques (genetic algorithms). Also, database management and visualization are made very easy. In this paper, we present the various implementation steps of this core physics tool, in which neutronics, thermo-hydraulics, and fuel mechanics codes are run simultaneously. A relevant example of optimization of nuclear reactor safety characteristics is presented. Also, the flexibility of the URANIE tool is illustrated with the presentation of several approaches to improve Pareto front quality. (author)
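    Of the sampling techniques named above, Latin hypercube sampling (LHS) is the easiest to sketch. The following is a minimal pure-Python illustration of the stratification idea, not the URANIE API.

```python
# Minimal Latin hypercube sampling sketch: each dimension is cut into
# n_samples equal strata, one point drawn per stratum, then shuffled so
# strata are paired randomly across dimensions.
import random

def latin_hypercube(n_samples, n_dims, seed=0):
    """Return n_samples points in [0,1)^n_dims, stratified per dimension."""
    rng = random.Random(seed)
    samples = [[0.0] * n_dims for _ in range(n_samples)]
    for d in range(n_dims):
        strata = [(i + rng.random()) / n_samples for i in range(n_samples)]
        rng.shuffle(strata)
        for i in range(n_samples):
            samples[i][d] = strata[i]
    return samples

pts = latin_hypercube(5, 2)
# Each dimension has exactly one point in each fifth of [0,1).
print(sorted(int(p[0] * 5) for p in pts))  # -> [0, 1, 2, 3, 4]
```

    Compared with simple random sampling (SRS), this guarantees coverage of every marginal stratum with the same number of code runs, which matters when each sample is an expensive coupled simulation.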

  10. Hydrologic Modeling at the National Water Center: Operational Implementation of the WRF-Hydro Model to support National Weather Service Hydrology

    Science.gov (United States)

    Cosgrove, B.; Gochis, D.; Clark, E. P.; Cui, Z.; Dugger, A. L.; Fall, G. M.; Feng, X.; Fresch, M. A.; Gourley, J. J.; Khan, S.; Kitzmiller, D.; Lee, H. S.; Liu, Y.; McCreight, J. L.; Newman, A. J.; Oubeidillah, A.; Pan, L.; Pham, C.; Salas, F.; Sampson, K. M.; Smith, M.; Sood, G.; Wood, A.; Yates, D. N.; Yu, W.; Zhang, Y.

    2015-12-01

    The National Weather Service (NWS) National Water Center (NWC) is collaborating with the NWS National Centers for Environmental Prediction (NCEP) and the National Center for Atmospheric Research (NCAR) to implement a first-of-its-kind operational instance of the Weather Research and Forecasting (WRF)-Hydro model over the Continental United States (CONUS) and contributing drainage areas on the NWS Weather and Climate Operational Supercomputing System (WCOSS) supercomputer. The system will provide seamless, high-resolution, continuously cycling forecasts of streamflow and other hydrologic outputs of value from both deterministic- and ensemble-type runs. WRF-Hydro will form the core of the NWC national water modeling strategy, supporting NWS hydrologic forecast operations along with the emergency response and water management efforts of partner agencies. Input and output from the system will be comprehensively verified via the NWC Water Resource Evaluation Service. Hydrologic events occur on a wide range of temporal scales, from fast-acting flash floods to long-term flow events impacting water supply. In order to capture this range of events, the initial operational WRF-Hydro configuration will feature 1) hourly analysis runs, 2) short- and medium-range deterministic forecasts out to two-day and ten-day horizons, and 3) long-range ensemble forecasts out to 30 days. All three of these configurations are underpinned by a 1 km execution of the Noah-MP land surface model, with channel routing taking place on 2.67 million NHDPlusV2 catchments covering the CONUS and contributing areas. Additionally, the short- and medium-range forecast runs will feature surface and sub-surface routing on a 250 m grid, while the hourly analyses will feature this same 250 m routing in addition to nudging-based assimilation of US Geological Survey (USGS) streamflow observations. A limited number of major reservoirs will be configured within the model to begin to represent the first-order impacts of
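    Nudging-based assimilation, mentioned above for the hourly analyses, simply relaxes the modeled state toward an observation. A hedged sketch of the idea follows; the gain value and names are illustrative, not WRF-Hydro's actual implementation.

```python
# Toy nudging update: move the modeled streamflow a fixed fraction of
# the way toward the gauge observation at each analysis cycle.

def nudge(modeled, observed, gain=0.5):
    """Relax the model state toward the observation with a fixed gain."""
    return modeled + gain * (observed - modeled)

flow = 120.0           # modeled streamflow, m^3/s (made-up value)
obs = 100.0            # USGS gauge observation, m^3/s (made-up value)
for _ in range(3):     # repeated analysis cycles shrink the mismatch
    flow = nudge(flow, obs)
print(round(flow, 1))  # -> 102.5
```

    Unlike full variational or ensemble assimilation, nudging needs no error covariances, which is why it is a common choice for continuously cycling operational analyses.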

  11. Performance Evaluation of Supercomputers using HPCC and IMB Benchmarks

    Science.gov (United States)

    Saini, Subhash; Ciotti, Robert; Gunney, Brian T. N.; Spelce, Thomas E.; Koniges, Alice; Dossa, Don; Adamidis, Panagiotis; Rabenseifner, Rolf; Tiyyagura, Sunil R.; Mueller, Matthias

    2006-01-01

    The HPC Challenge (HPCC) benchmark suite and the Intel MPI Benchmark (IMB) are used to compare and evaluate the combined performance of processor, memory subsystem and interconnect fabric of five leading supercomputers - SGI Altix BX2, Cray X1, Cray Opteron Cluster, Dell Xeon cluster, and NEC SX-8. These five systems use five different networks (SGI NUMALINK4, Cray network, Myrinet, InfiniBand, and NEC IXS). The complete set of HPCC benchmarks is run on each of these systems. Additionally, we present Intel MPI Benchmark (IMB) results to study the performance of 11 MPI communication functions on these systems.

  12. Strategic research field no.4, industrial innovations

    International Nuclear Information System (INIS)

    Kato, Chisachi

    2011-01-01

    The 'Kei' supercomputer is planned to start full-scale operation in about a year and a half. With this, High Performance Computing (HPC) is likely to contribute not only to further progress in basic and applied sciences, but also to bringing about innovations in various fields of industry. It is expected to substantially shorten design time, drastically improve the performance and/or reliability of various industrial products, and greatly enhance the safety of large-scale power plants. This article briefly describes six research themes currently being prepared in this strategic research field, 'industrial innovations', so that the 'Kei' supercomputer can be put to use as soon as it starts operation, covering their specific goals and the breakthroughs they are expected to bring about in industry. It also explains how we determined these themes. We are also planning several measures to promote widespread industrial use of HPC, including the 'Kei' supercomputer, which are also elaborated in this article. (author)

  13. An Interface for Biomedical Big Data Processing on the Tianhe-2 Supercomputer.

    Science.gov (United States)

    Yang, Xi; Wu, Chengkun; Lu, Kai; Fang, Lin; Zhang, Yong; Li, Shengkang; Guo, Guixin; Du, YunFei

    2017-12-01

    Big data, cloud computing, and high-performance computing (HPC) are on the verge of convergence. Cloud computing is already playing an active part in big data processing with the help of big data frameworks like Hadoop and Spark. The recent upsurge of high-performance computing in China provides extra possibilities and capacity to address the challenges associated with big data. In this paper, we propose Orion, a big data interface on the Tianhe-2 supercomputer, to enable big data applications to run on Tianhe-2 via a single command or a shell script. Orion supports multiple users, and each user can launch multiple tasks. It minimizes the effort needed to initiate big data applications on the Tianhe-2 supercomputer via automated configuration. Orion follows the "allocate-when-needed" paradigm, and it avoids the idle occupation of computational resources. We tested the utility and performance of Orion using a big genomic dataset and achieved a satisfactory performance on Tianhe-2 with very few modifications to existing applications that were implemented in Hadoop/Spark. In summary, Orion provides a practical and economical interface for big data processing on Tianhe-2.
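    The "allocate-when-needed" paradigm attributed to Orion above can be sketched as a resource pool that is held only for the lifetime of a task. This is a toy illustration under that assumption, not the actual Orion or Tianhe-2 interface.

```python
# Toy node pool: nodes are allocated when a task starts and released the
# moment it ends, so there is no idle occupation between tasks.
from contextlib import contextmanager

class NodePool:
    def __init__(self, total):
        self.total, self.in_use = total, 0

    @contextmanager
    def allocate(self, nodes):
        if self.in_use + nodes > self.total:
            raise RuntimeError("not enough free nodes")
        self.in_use += nodes          # acquire at task start...
        try:
            yield nodes
        finally:
            self.in_use -= nodes      # ...release as soon as it ends

pool = NodePool(total=64)
with pool.allocate(16):
    print(pool.in_use)   # -> 16  (held only while the task runs)
print(pool.in_use)       # -> 0   (no idle occupation afterwards)
```

    The contrast is with a long-lived Hadoop/Spark cluster that keeps its nodes reserved between jobs; releasing in a `finally` block guarantees the nodes come back even if the task fails.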

  14. Final priorities; National Institute on Disability and Rehabilitation Research--Disability and Rehabilitation Research Projects and Centers Program--Rehabilitation Engineering Research Centers. Final priorities.

    Science.gov (United States)

    2013-06-11

    The Assistant Secretary for Special Education and Rehabilitative Services announces priorities under the Disability and Rehabilitation Research Projects and Centers Program administered by the National Institute on Disability and Rehabilitation Research (NIDRR). Specifically, we announce priorities for a Rehabilitation Engineering Research Center (RERC) on Rehabilitation Strategies, Techniques, and Interventions (Priority 1), Information and Communication Technologies Access (Priority 2), Individual Mobility and Manipulation (Priority 3), and Physical Access and Transportation (Priority 4). The Assistant Secretary may use one or more of these priorities for competitions in fiscal year (FY) 2013 and later years. We take this action to focus research attention on areas of national need. We intend these priorities to improve community living and participation, health and function, and employment outcomes of individuals with disabilities.

  15. DOE - BES Nanoscale Science Research Centers (NSRCs)

    Energy Technology Data Exchange (ETDEWEB)

    Beecher, Cathy Jo [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-11-14

    These are slides from a PowerPoint presentation shown to guests during tours of the Center for Integrated Nanotechnologies (CINT) at Los Alamos National Laboratory. It shows the five DOE-BES Nanoscale Science Research Centers (NSRCs), which are located at different national laboratories throughout the country. It then goes into detail specifically about the Center for Integrated Nanotechnologies at LANL, including statistics on its user community and CINT's New Mexico industrial users.

  16. Sandia`s network for Supercomputing `94: Linking the Los Alamos, Lawrence Livermore, and Sandia National Laboratories using switched multimegabit data service

    Energy Technology Data Exchange (ETDEWEB)

    Vahle, M.O.; Gossage, S.A.; Brenkosh, J.P. [Sandia National Labs., Albuquerque, NM (United States). Advanced Networking Integration Dept.

    1995-01-01

    Supercomputing `94, a high-performance computing and communications conference, was held November 14th through 18th, 1994 in Washington DC. For the past four years, Sandia National Laboratories has used this conference to showcase and focus its communications and networking endeavors. At the 1994 conference, Sandia built a Switched Multimegabit Data Service (SMDS) network running at 44.736 megabits per second linking its private SMDS network between its facilities in Albuquerque, New Mexico and Livermore, California to the convention center in Washington, D.C. For the show, the network was also extended from Sandia, New Mexico to Los Alamos National Laboratory and from Sandia, California to Lawrence Livermore National Laboratory. This paper documents and describes this network and how it was used at the conference.

  17. Pursuing Personal Passion: Learner-Centered Research Mentoring.

    Science.gov (United States)

    Phillips, William R

    2018-01-01

    New researchers often face difficulty finding and focusing research questions. I describe a new tool for research mentoring, the Pursuing Personal Passion (P3) interview, and a systematic approach to help learners organize their curiosity and develop researchable questions aligned with their personal and professional priorities. The learner-centered P3 research interview parallels the patient-centered clinical interview. This paper reviews experience with 27 research mentees over the years 2009 to 2016, using the P3 approach to identify their initial research topics, classify their underlying passions and track the evolution into their final research questions. These researchers usually identified one of three personal passions that provided lenses to focus their research: problem, person, or process. Initial research topics focused on: problem (24%, 6), person (48%, 12) and process (28%, 7). Final research questions evolved into: problem (20%, 5), person (32%, 8) and process (48%, 12). Identification of the underlying passion can lead researchers who start with one general topic to develop it into very different research questions. Using this P3 approach, mentors can help new researchers focus their interests into researchable questions, successful studies, and organized programs of scholarship.

  18. Supercomputations and big-data analysis in strong-field ultrafast optical physics: filamentation of high-peak-power ultrashort laser pulses

    Science.gov (United States)

    Voronin, A. A.; Panchenko, V. Ya; Zheltikov, A. M.

    2016-06-01

    High-intensity ultrashort laser pulses propagating in gas media or in condensed matter undergo complex nonlinear spatiotemporal evolution where temporal transformations of optical field waveforms are strongly coupled to an intricate beam dynamics and ultrafast field-induced ionization processes. At the level of laser peak powers orders of magnitude above the critical power of self-focusing, the beam exhibits modulation instabilities, producing random field hot spots and breaking up into multiple noise-seeded filaments. This problem is described by a (3+1)-dimensional nonlinear field evolution equation, which needs to be solved jointly with the equation for ultrafast ionization of a medium. Analysis of this problem, which is equivalent to solving a billion-dimensional evolution problem, is only possible by means of supercomputer simulations augmented with coordinated big-data processing of large volumes of information acquired through theory-guiding experiments and supercomputations. Here, we review the main challenges of supercomputations and big-data processing encountered in strong-field ultrafast optical physics and discuss strategies to confront these challenges.

  19. Energy Frontier Research Center Materials Science of Actinides (A 'Life at the Frontiers of Energy Research' contest entry from the 2011 Energy Frontier Research Centers (EFRCs) Summit and Forum)

    International Nuclear Information System (INIS)

    Burns, Peter

    2011-01-01

    'Energy Frontier Research Center Materials Science of Actinides' was submitted by the EFRC for Materials Science of Actinides (MSA) to the 'Life at the Frontiers of Energy Research' video contest at the 2011 Science for Our Nation's Energy Future: Energy Frontier Research Centers (EFRCs) Summit and Forum. Twenty-six EFRCs created short videos to highlight their mission and their work. MSA is directed by Peter Burns at the University of Notre Dame, and is a partnership of scientists from ten institutions. The Office of Basic Energy Sciences in the U.S. Department of Energy's Office of Science established the 46 Energy Frontier Research Centers (EFRCs) in 2009. These collaboratively-organized centers conduct fundamental research focused on 'grand challenges' and use-inspired 'basic research needs' recently identified in major strategic planning efforts by the scientific community. The overall purpose is to accelerate scientific progress toward meeting the nation's critical energy challenges.

  20. Accelerator Center for Energy Research (ACER)

    Data.gov (United States)

    Federal Laboratory Consortium — The Accelerator Center for Energy Research (ACER) exploits radiation chemistry techniques to study chemical reactions (and other phenomena) by subjecting samples to...

  1. Senior Computational Scientist | Center for Cancer Research

    Science.gov (United States)

    The Basic Science Program (BSP) pursues independent, multidisciplinary research in basic and applied molecular biology, immunology, retrovirology, cancer biology, and human genetics. Research efforts and support are an integral part of the Center for Cancer Research (CCR) at the Frederick National Laboratory for Cancer Research (FNLCR). The Cancer & Inflammation Program (CIP),

  2. Karlsruhe nuclear research center. Main activities

    International Nuclear Information System (INIS)

    The article reports on problems of securing the fuel supply for nuclear power generation, on reprocessing and ultimate storage of radioactive material, on the safety of nuclear facilities, on new technologies and basic research, and on the infrastructure of the Karlsruhe nuclear research center, as well as finance and administration. (HK)

  3. The Role of Computers in Research and Development at Langley Research Center

    Science.gov (United States)

    Wieseman, Carol D. (Compiler)

    1994-01-01

    This document is a compilation of presentations given at a workshop on the role of computers in research and development at the Langley Research Center. The objectives of the workshop were to inform the Langley Research Center community of the current software systems and software practices in use at Langley. The workshop was organized in 10 sessions: Software Engineering; Software Engineering Standards, Methods, and CASE Tools; Solutions of Equations; Automatic Differentiation; Mosaic and the World Wide Web; Graphics and Image Processing; System Design Integration; CAE Tools; Languages; and Advanced Topics.

  4. Study on the climate system and mass transport by a climate model

    International Nuclear Information System (INIS)

    Numaguti, A.; Sugata, S.; Takahashi, M.; Nakajima, T.; Sumi, A.

    1997-01-01

    The Center for Global Environmental Research (CGER), an organ of the National Institute for Environmental Studies of the Environment Agency of Japan, was established in October 1990 to contribute broadly to the scientific understanding of global change, and to the elucidation of and solution for our pressing environmental problems. CGER conducts environmental research from interdisciplinary, multiagency, and international perspectives, provides research support facilities such as a supercomputer and databases, and offers its own data from long-term monitoring of the global environment. In March 1992, CGER installed a supercomputer system (NEC SX-3, Model 14) to facilitate research on global change. The system is open to environmental researchers worldwide. Proposed research programs are evaluated by the Supercomputer Steering Committee, which consists of leading scientists in climate modeling, atmospheric chemistry, oceanic circulation, and computer science. After project approval, authorization for system usage is provided. In 1995 and 1996, several research proposals were designated as priority research and allocated larger shares of computer resources. The CGER supercomputer monograph report Vol. 3 is a report of priority research on CGER's supercomputer. The report covers a description of the CCSR-NIES atmospheric general circulation model, Lagrangian general circulation based on the time-scale of particle motion, and the ability of the CCSR-NIES atmospheric general circulation model in the stratosphere. The results obtained from these three studies are described in three chapters. We hope this report provides you with useful information on the global environmental research conducted on our supercomputer.

  5. Coherent 40 Gb/s SP-16QAM and 80 Gb/s PDM-16QAM in an Optimal Supercomputer Optical Switch Fabric

    DEFF Research Database (Denmark)

    Karinou, Fotini; Borkowski, Robert; Zibar, Darko

    2013-01-01

    We demonstrate, for the first time, the feasibility of using 40 Gb/s SP-16QAM and 80 Gb/s PDM-16QAM in an optimized cell switching supercomputer optical interconnect architecture based on semiconductor optical amplifiers as ON/OFF gates.

  6. Johns Hopkins Particulate Matter Research Center

    Data.gov (United States)

    Federal Laboratory Consortium — The Johns Hopkins Particulate Matter Research Center will map health risks of PM across the US based on analyses of national databases on air pollution, mortality,...

  7. COMPUTATIONAL SCIENCE CENTER

    Energy Technology Data Exchange (ETDEWEB)

    DAVENPORT, J.

    2006-11-01

    Computational Science is an integral component of Brookhaven's multi-science mission, and is a reflection of the increased role of computation across all of science. Brookhaven currently has major efforts in data storage and analysis for the Relativistic Heavy Ion Collider (RHIC) and the ATLAS detector at CERN, and in quantum chromodynamics. The Laboratory is host for the QCDOC machines (quantum chromodynamics on a chip), 10 teraflop/s computers which boast 12,288 processors each. There are two here, one for the RIKEN/BNL Research Center and the other supported by DOE for the US Lattice Gauge Community and other scientific users. A 100 teraflop/s supercomputer will be installed at Brookhaven in the coming year, managed jointly by Brookhaven and Stony Brook, and funded by a grant from New York State. This machine will be used for computational science across Brookhaven's entire research program, and also by researchers at Stony Brook and across New York State. With Stony Brook, Brookhaven has formed the New York Center for Computational Science (NYCCS) as a focal point for interdisciplinary computational science, which is closely linked to Brookhaven's Computational Science Center (CSC). The CSC has established a strong program in computational science, with an emphasis on nanoscale electronic structure and molecular dynamics, accelerator design, computational fluid dynamics, medical imaging, parallel computing and numerical algorithms. We have been an active participant in DOE's SciDAC program (Scientific Discovery through Advanced Computing). We are also planning a major expansion in computational biology in keeping with Laboratory initiatives. Additional laboratory initiatives with a dependence on a high level of computation include the development of hydrodynamics models for the interpretation of RHIC data, computational models for the atmospheric transport of aerosols, and models for combustion and for energy utilization. The CSC was formed to

  8. The Erasmus Computing Grid - Building a Super-Computer Virtually for Free at the Erasmus Medical Center and the Hogeschool Rotterdam

    NARCIS (Netherlands)

    T.A. Knoch (Tobias); L.V. de Zeeuw (Luc)

    2006-01-01

    The Set-Up of the 20 Teraflop Erasmus Computing Grid: To meet the enormous computational needs of life-science research as well as clinical diagnostics and treatment, the Hogeschool Rotterdam and the Erasmus Medical Center are currently setting up one of the largest desktop

  9. Center for Drug Evaluation and Research

    Data.gov (United States)

    Federal Laboratory Consortium — The Center for Drug Evaluation and Research(CDER) performs an essential public health task by making sure that safe and effective drugs are available to improve the...

  10. The Center for Aerospace Research: A NASA Center of Excellence at North Carolina Agricultural and Technical State University

    Science.gov (United States)

    Lai, Steven H.-Y.

    1992-01-01

    This report documents the efforts and outcomes of our research and educational programs at NASA-CORE in NCA&TSU. The goal of the center was to establish a quality aerospace research base and to develop an educational program to increase the participation of minority faculty and students in the areas of aerospace engineering. The major accomplishments of this center in the first year are summarized in terms of three different areas, namely, the center's research programs area, the center's educational programs area, and the center's management area. In the center's research programs area, we focus on developing capabilities needed to support the development of the aerospace plane and high speed civil transportation system technologies. In the educational programs area, we developed an aerospace engineering option program ready for university approval.

  11. Bolivia. The new nuclear research center in El Alto

    International Nuclear Information System (INIS)

    Nogarin, Mauro

    2016-01-01

    Research reactors in Latin America have become a priority in public policy in the last decade. Bolivia wants to become the 8th country in the region to implement peaceful nuclear technology with the new Center for Research and Development in Nuclear Technology. The Center will be the most advanced in Latin America. It will provide for wide use of radiation technologies in agriculture, medicine, and industry. After several negotiations, Bolivia and the Russian Federation signed the Intergovernmental Agreement on cooperation in the peaceful use of atomic energy and the construction of the Nuclear Research and Technology Center.

  12. Bolivia. The new nuclear research center in El Alto

    Energy Technology Data Exchange (ETDEWEB)

    Nogarin, Mauro

    2016-05-15

    Research reactors in Latin America have become a priority in public policy in the last decade. Bolivia wants to become the 8th country in the region to implement peaceful nuclear technology with the new Center for Research and Development in Nuclear Technology. The Center will be the most advanced in Latin America. It will provide for wide use of radiation technologies in agriculture, medicine, and industry. After several negotiations, Bolivia and the Russian Federation signed the Intergovernmental Agreement on cooperation in the peaceful use of atomic energy and the construction of the Nuclear Research and Technology Center.

  13. CCR Magazines | Center for Cancer Research

    Science.gov (United States)

    The Center for Cancer Research (CCR) has two magazines, MILESTONES and LANDMARKS, that highlight our annual advances and top contributions to the understanding, detection, treatment and prevention of cancer over the years.

  14. THE CENTER FOR MILITARY BIOMECHANICS RESEARCH

    Data.gov (United States)

    Federal Laboratory Consortium — The Center for Military Biomechanics Research is a 7,500 ft2 dedicated laboratory outfitted with state-of-the-art equipment for 3-D analysis of movement, measurement...

  15. Lewis Research Center R and D Facilities

    Science.gov (United States)

    1991-01-01

    The NASA Lewis Research Center (LeRC) defines and develops advanced technology for high priority national needs. The work of the Center is directed toward new propulsion, power, and communications technologies for application to aeronautics and space, so that U.S. leadership in these areas is ensured. The end product is knowledge, usually in a report, that is made fully available to potential users--the aircraft engine industry, the energy industry, the automotive industry, the space industry, and other NASA centers. In addition to offices and laboratories for almost every kind of physical research in such fields as fluid mechanics, physics, materials, fuels, combustion, thermodynamics, lubrication, heat transfer, and electronics, LeRC has a variety of engineering test cells for experiments with components such as compressors, pumps, conductors, turbines, nozzles, and controls. A number of large facilities can simulate the operating environment for a complete system: altitude chambers for aircraft engines; large supersonic wind tunnels for advanced airframes and propulsion systems; space simulation chambers for electric rockets or spacecraft; and a 420-foot-deep zero-gravity facility for microgravity experiments. Some problems are amenable to detection and solution only in the complete system and at essentially full scale. By combining basic research in pertinent disciplines and generic technologies with applied research on components and complete systems, LeRC has become one of the most productive centers in its field in the world. This brochure describes a number of the facilities that provide LeRC with its exceptional capabilities.

  16. Feynman diagrams sampling for quantum field theories on the QPACE 2 supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Rappl, Florian

    2016-08-01

    This work discusses the application of Feynman diagram sampling in quantum field theories. The method uses a computer simulation to sample the diagrammatic space obtained in a series expansion. Running large physical simulations requires powerful computers, effectively splitting the thesis into two parts. The first part deals with the method of Feynman diagram sampling. Here the theoretical background of the method itself is discussed. Additionally, important statistical concepts and the theory of the strong force, quantum chromodynamics, are introduced. This sets the context of the simulations. We create and evaluate a variety of models to estimate the applicability of diagrammatic methods. The method is then applied to sample the perturbative expansion of the vertex correction. In the end we obtain the value for the anomalous magnetic moment of the electron. The second part looks at the QPACE 2 supercomputer. This includes a short introduction to supercomputers in general, as well as a closer look at the architecture and the cooling system of QPACE 2. Guiding benchmarks of the InfiniBand network are presented. At the core of this part, a collection of best practices and useful programming concepts is outlined, which enables the development of efficient, yet easily portable, applications for the QPACE 2 system.
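
    The thesis itself is not excerpted here, but the core move in diagrammatic Monte Carlo — summing a series expansion stochastically by sampling term orders instead of enumerating them — can be sketched on a toy series. The function name, the geometric proposal, and the target series (the Taylor expansion of e^λ) are illustrative choices for this sketch, not taken from the work.

```python
import math
import random

def sample_series(lam, n_samples=200_000, p=0.5, seed=1):
    """Estimate sum_{n>=0} lam**n / n!  (= e**lam) by sampling term orders.

    Each Monte Carlo step draws a term order n from a geometric proposal
    q(n) = (1 - p) * p**n and accumulates the importance weight
    (lam**n / n!) / q(n); the sample mean converges to the full series.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        # Draw the order n of the "diagram" (series term) to evaluate.
        n = 0
        while rng.random() < p:
            n += 1
        # Importance weight: term value divided by its sampling probability.
        total += (lam ** n / math.factorial(n)) / ((1 - p) * p ** n)
    return total / n_samples

print(sample_series(0.5), math.exp(0.5))  # estimate approaches e**0.5
```

    In a real diagrammatic calculation the "terms" are integrals over diagram topologies rather than scalars, but the estimator has the same importance-sampling structure.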

  17. 76 FR 37085 - Applications for New Awards; Rehabilitation Engineering Research Centers (RERCs)

    Science.gov (United States)

    2011-06-24

    ... DEPARTMENT OF EDUCATION Applications for New Awards; Rehabilitation Engineering Research Centers...)--Disability and Rehabilitation Research Projects and Centers Program--Rehabilitation Engineering Research... (Rehabilitation Act). Rehabilitation Engineering Research Centers Program (RERCs) The purpose of the RERC program...

  18. Research Associate | Center for Cancer Research

    Science.gov (United States)

    PROGRAM DESCRIPTION The Basic Science Program (BSP) pursues independent, multidisciplinary research in basic and applied molecular biology, immunology, retrovirology, cancer biology, and human genetics. Research efforts and support are an integral part of the Center for Cancer Research (CCR) at the Frederick National Laboratory for Cancer Research (FNLCR). KEY ROLES/RESPONSIBILITIES - Research Associate III Dr. Zbigniew Dauter is the head investigator of the Synchrotron Radiation Research Section (SRRS) of CCR's Macromolecular Crystallography Laboratory. The SRRS is located at Argonne National Laboratory, Argonne, Illinois, the site of the largest U.S. synchrotron facility. The SRRS uses X-ray diffraction techniques to solve crystal structures of various proteins and nucleic acids of biological and medical relevance. The section also specializes in analyzing crystal structures at extremely high resolution and accuracy, in developing methods for effective diffraction data collection, and in using weak anomalous dispersion effects to solve structures of macromolecules. The areas of expertise are structural and molecular biology, macromolecular crystallography, and diffraction data collection. Dr. Dauter requires research support in these areas: the individual will engage in the purification and preparation of samples, crystallize proteins using various techniques, derivatize them with heavy atoms/anomalous scatterers, and establish conditions for cryogenic freezing. The individual will also participate in diffraction data collection at the Advanced Photon Source. In addition, the candidate will perform spectroscopic and chromatographic analyses of protein and nucleic acid samples in the context of their purity, oligomeric state, and photophysical properties.

  19. Interdisciplinary research center devoted to molecular environmental science opens

    Science.gov (United States)

    Vaughan, David J.

    In October, a new research center opened at the University of Manchester in the United Kingdom. The center is the product of over a decade of ground-breaking interdisciplinary research in the Earth and related biological and chemical sciences at the university. The center also responds to the British government's policy of investing in research infrastructure at key universities. The Williamson Research Centre, the first of its kind in Britain and among the first worldwide, is devoted to the emerging field of molecular environmental science, a field that aims to bring about a revolution in our understanding of the environment. Though it may be a less violent revolution than some, its potential is high for developments that could affect us all.

  20. Karlsruhe Nuclear Research Center. Progress report on research and development work in 1987

    International Nuclear Information System (INIS)

    1988-01-01

    This summary of R and D work is the scientific annual report to be prepared by the research center in compliance with its statutes. The material is arranged by items of main activities, as given in the overall R and D programme set up for the research center. The various reports prepared by the individual institutes and principal departments are presented under their relevant subject headings. The annual report is intended to demonstrate the progress achieved in the tasks and activities assigned by the R and D programme of the research center, by referring to the purposes and goals stated in the programme, showing the joint or separate efforts and achievements of the institutes. Details and results of activities are found in the scientific-technical publications given in the bibliographical survey, and in the internal primary surveys. The main activities of the research center include the following: Fast Breeder Project (PSB), Nuclear Fusion Project (PKF), Separation Nozzle Project (TDV), and Reprocessing and Waste Treatment Project (PWA), Ultimate Disposal of Radioactive Waste (ELA), Environment and Safety (U and S), Solids and Materials (FM), Nuclear and Particle Physics (KTP), Microtechniques (MT), Materials Handling (HT), Other Research Activities (SF). Organisational aspects and institutes and the list of publications conclude the report. (orig./HK) [de

  1. Statistical Analysis of Research Data | Center for Cancer Research

    Science.gov (United States)

    Recent advances in cancer biology have resulted in the need for increased statistical analysis of research data. The Statistical Analysis of Research Data (SARD) course will be held on April 5-6, 2018 from 9 a.m.-5 p.m. at the National Institutes of Health's Natcher Conference Center, Balcony C on the Bethesda Campus. SARD is designed to provide an overview on the general principles of statistical analysis of research data.  The first day will feature univariate data analysis, including descriptive statistics, probability distributions, one- and two-sample inferential statistics.
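
    The first-day topics listed above — descriptive statistics and two-sample inference — can be illustrated with Python's standard library alone. The data and function names below are invented for the example; a real analysis would use a vetted statistics package rather than this hand-rolled Welch statistic.

```python
from statistics import mean, stdev

def describe(xs):
    """Basic descriptive statistics for a sample."""
    return {"n": len(xs), "mean": mean(xs), "sd": stdev(xs)}

def welch_t(xs, ys):
    """Two-sample (Welch) t statistic, which allows unequal variances."""
    mx, my = mean(xs), mean(ys)
    vx, vy = stdev(xs) ** 2, stdev(ys) ** 2
    return (mx - my) / ((vx / len(xs) + vy / len(ys)) ** 0.5)

# Hypothetical measurements from two experimental groups.
control = [4.1, 3.8, 4.4, 4.0, 3.9, 4.2]
treated = [4.9, 5.1, 4.7, 5.0, 4.8, 5.2]
print(describe(control))
print(welch_t(treated, control))  # a large |t| suggests the groups differ
```

    The t statistic would then be compared against a t distribution (with Welch-Satterthwaite degrees of freedom) to obtain a p-value.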

  2. Final priority; National Institute on Disability and Rehabilitation Research--Disability and Rehabilitation Projects and Centers Program--Rehabilitation Engineering Research Centers. Final priority.

    Science.gov (United States)

    2013-06-19

    The Assistant Secretary for Special Education and Rehabilitative Services announces a priority for a Rehabilitation Engineering Research Center (RERC) on Technologies to Support Successful Aging with Disability under the Disability and Rehabilitation Research Projects and Centers Program administered by the National Institute on Disability and Rehabilitation Research (NIDRR). The Assistant Secretary may use this priority for a competition in fiscal year (FY) 2013 and later years. We take this action to focus research attention on areas of national need. We intend to use this priority to improve outcomes for individuals with disabilities.

  3. A fast random number generator for the Intel Paragon supercomputer

    Science.gov (United States)

    Gutbrod, F.

    1995-06-01

    A pseudo-random number generator is presented which makes optimal use of the architecture of the i860 microprocessor and which is expected to have a very long period. It is therefore a good candidate for use on the parallel supercomputer Paragon XP. In the assembler version, it needs 6.4 cycles for a real*4 random number. There is a FORTRAN routine which yields identical numbers up to rare and minor rounding discrepancies, and it needs 28 cycles. The FORTRAN performance on other microprocessors is somewhat better. Arguments for the quality of the generator and some numerical tests are given.
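
    The record does not reproduce Gutbrod's generator, but the general shape of a fast, long-period pseudo-random number generator can be sketched in Python. The xorshift64* algorithm below is a well-known modern generator chosen purely for illustration; it is unrelated to the i860-specific routine the abstract describes.

```python
MASK64 = (1 << 64) - 1

class XorShift64Star:
    """Minimal 64-bit xorshift* generator (period 2**64 - 1)."""

    def __init__(self, seed):
        assert seed != 0, "xorshift state must be nonzero"
        self.state = seed & MASK64

    def next_u64(self):
        # Three xorshift steps scramble the state; the final multiply
        # improves the statistical quality of the high bits.
        x = self.state
        x ^= x >> 12
        x ^= (x << 25) & MASK64
        x ^= x >> 27
        self.state = x
        return (x * 0x2545F4914F6CDD1D) & MASK64

    def next_float(self):
        # Map the top 53 bits to a float in [0, 1).
        return (self.next_u64() >> 11) / float(1 << 53)

rng = XorShift64Star(42)
print([rng.next_float() for _ in range(3)])
```

    As with the generator in the abstract, the same seed reproduces the same stream exactly, which matters for debugging parallel simulations.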

  4. National Center on Sleep Disorders Research

    Science.gov (United States)

    ... The National Center on Sleep Disorders Research (NCSDR) Located within the National Heart, Lung, ... 60 percent have a chronic disorder. Each year, sleep disorders, sleep deprivation, and sleepiness add an estimated $15. ...

  5. Making Research Cyberinfrastructure a Strategic Choice

    Science.gov (United States)

    Hacker, Thomas J.; Wheeler, Bradley C.

    2007-01-01

    The commoditization of low-cost hardware has enabled even modest-sized laboratories and research projects to own their own "supercomputers." The authors argue that this local solution undermines rather than amplifies the research potential of scholars. CIOs, provosts, and research technologists should consider carefully an overall…

  6. Energy Frontier Research Centers: Impact Report, January 2017

    Energy Technology Data Exchange (ETDEWEB)

    None, None

    2017-01-31

    Since its inception in 2009, the U.S. Department of Energy's Energy Frontier Research Center (EFRC) program has become an important research modality in the Department's portfolio, enabling high impact research that addresses key scientific challenges for energy technologies. Funded by the Office of Science's Basic Energy Sciences program, the EFRCs are located across the United States and are led by universities, national laboratories, and private research institutions. These multi-investigator, multidisciplinary centers bring together world-class teams of researchers, often from multiple institutions, to tackle the toughest scientific challenges preventing advances in energy technologies. The EFRCs' fundamental scientific advances are having a significant impact that is being translated to industry. In 2009 five-year awards were made to 46 EFRCs, including 16 that were fully funded by the American Recovery and Reinvestment Act (ARRA). An open recompetition of the program in 2014 resulted in four-year awards to 32 centers, 22 of which are renewals of existing EFRCs and 10 of which are new EFRCs. In 2016, DOE added four new centers to accelerate the scientific breakthroughs needed to support the Department's environmental management and nuclear cleanup mission, bringing the total number of active EFRCs to 36. The impact reports in this document describe some of the many scientific accomplishments and greater impacts of the class of 2009 - 2018 EFRCs and early outcomes from a few of the class of 2014 - 2018 EFRCs.

  7. 48 CFR 235.017 - Federally Funded Research and Development Centers.

    Science.gov (United States)

    2010-10-01

    ... DEVELOPMENT CONTRACTING 235.017 Federally Funded Research and Development Centers. (a) Policy. (2) No DoD... Funded Research and Development Center (FFRDC) if a member of its board of directors or trustees... 48 Federal Acquisition Regulations System 3 2010-10-01 2010-10-01 false Federally Funded Research...

  8. Introduction | Center for Cancer Research

    Science.gov (United States)

    Introduction In order to meet increasing demands from both NIH intramural and extramural communities for access to a small angle X-ray scattering (SAXS) resource, the Center for Cancer Research (CCR) under the leadership of Jeffrey Strathern and Bob Wiltrout established a partnership user program (PUP) with the Argonne National Laboratory Photon Source in October 2008.

  9. Interaction Modeling at PROS Research Center

    OpenAIRE

    Panach, José; Aquino, Nathalie; Pastor, Oscar

    2011-01-01

    Part 1: Long and Short Papers; International audience; This paper describes how the PROS Research Center deals with interaction in the context of a model-driven approach for the development of information systems. Interaction is specified in a conceptual model together with the structure and behavior of the system. Major achievements and current research challenges of PROS in the field of interaction modeling are presented.

  10. The role of architectural research centers in addressing climate change

    Directory of Open Access Journals (Sweden)

    John Carmody

    2012-10-01

    Full Text Available ABSTRACT: It is clear that an urgent, major transformation needs to happen in the design of the built environment to respond to impending climate change and other environmental degradation. This paper will explain the potential role of architectural research centers in this transformation and provide examples from the Center for Sustainable Building Research (CSBR) at the University of Minnesota. A research center can become a regional hub to coordinate and disseminate critical information. CSBR is leading the establishment of Architecture 2030 standards in Minnesota, assisting local governments in writing green building policy, providing design assistance to local government, developing tools to assist design decision making, providing technical assistance to the affordable housing community in Minnesota, and establishing a regional case study database that includes actual performance information. CSBR is creating a publicly accessible, credible knowledge base on new approaches, technologies and actual performance outcomes. Research centers such as CSBR can be a critical component of the necessary feedback loop often lacking in the building industry. A research center can also fill major gaps in providing in-depth professional education as well as be a catalyst for demonstration projects and public education.

  11. 48 CFR 970.3501 - Federally funded research and development centers.

    Science.gov (United States)

    2010-10-01

    ... Development Contracting 970.3501 Federally funded research and development centers. ... 48 Federal Acquisition Regulations System 5 2010-10-01 2010-10-01 false Federally funded research and development centers. 970.3501 Section 970.3501 Federal Acquisition Regulations System DEPARTMENT...

  12. U.S. Environmental Protection Agency national network of research centers: A case study in socio-political influences on research

    Energy Technology Data Exchange (ETDEWEB)

    Morehouse, K. [Environmental Protection Agency, Washington, DC (United States)

    1995-12-01

    During the 15 years that the U.S. Environmental Protection Agency (EPA) has supported university-based research centers, there have been many changes in mission, operating style, funding level, eligibility, and selection process. Even the definition of the term "research center" is open to debate. Shifting national priorities, political realities, and funding uncertainties have powered the evolution of research centers in EPA, although the agency's basic philosophy on the purpose and value of this approach to research remains essentially unchanged. Today, EPA manages 28 centers through the Office of Exploratory Research. These centers are administered under three distinct programs. Each program has its own mission and goals which guide the way individual centers are selected and operated. This paper will describe: (1) EPA's philosophy of research centers, (2) the complicated history of EPA research centers, (3) coordination and interaction among EPA centers and others, (4) opportunities for collaboration, and (5) plans for the future.

  13. Public relations activities of the Karlsruhe Nuclear Research Center - a national research center contributes to opinion forming

    International Nuclear Information System (INIS)

    Koerting, K.

    1988-01-01

    At the Karlsruhe Nuclear Research Center, the Public Relations Department directly reports to the Chief Executive Officer. The head of the Public Relations Department acts as spokesman of the center in public, which requires him to be fully informed of the work of all units and of the policy goals of the executive board. The key tools used by the Public Relations Department are KfK-Hausmitteilungen, accident information, the scientific journal KfK-Nachrichten, press releases, exhibitions, fairs, guided tours, and nuclear energy information staff. (DG)

  14. Data Curation Education in Research Centers (DCERC)

    Science.gov (United States)

    Marlino, M. R.; Mayernik, M. S.; Kelly, K.; Allard, S.; Tenopir, C.; Palmer, C.; Varvel, V. E., Jr.

    2012-12-01

    Digital data both enable and constrain scientific research. Scientists are enabled by digital data to develop new research methods, utilize new data sources, and investigate new topics, but they also face new data collection, management, and preservation burdens. The current data workforce consists primarily of scientists who receive little formal training in data management and data managers who are typically educated through on-the-job training. The Data Curation Education in Research Centers (DCERC) program is investigating a new model for educating data professionals to contribute to scientific research. DCERC is a collaboration between the University of Illinois at Urbana-Champaign Graduate School of Library and Information Science, the University of Tennessee School of Information Sciences, and the National Center for Atmospheric Research. The program is organized around a foundations course in data curation and provides field experiences in research and data centers for both master's and doctoral students. This presentation will outline the aims and the structure of the DCERC program and discuss results and lessons learned from the first set of summer internships in 2012. Four master's students participated and worked with both data mentors and science mentors, gaining firsthand experience in the issues, methods, and challenges of scientific data curation. They engaged in a diverse set of topics, including climate model metadata, observational data management workflows, and data cleaning, documentation, and ingest processes within a data archive. The students learned current data management practices and challenges while developing expertise and conducting research. They also made important contributions to NCAR data and science teams by evaluating data management workflows and processes, preparing data sets to be archived, and developing recommendations for particular data management activities. The master's student interns will return in summer of 2013

  15. Performance characteristics of hybrid MPI/OpenMP implementations of NAS parallel benchmarks SP and BT on large-scale multicore supercomputers

    KAUST Repository

    Wu, Xingfu; Taylor, Valerie

    2011-01-01

    The NAS Parallel Benchmarks (NPB) are well-known applications with fixed algorithms for evaluating parallel systems and tools. Multicore supercomputers provide a natural programming paradigm for hybrid programs, whereby OpenMP is used for data sharing among the cores that comprise a node and MPI is used for communication between nodes. In this paper, we use the SP and BT benchmarks of MPI NPB 3.3 as a basis for a comparative approach to implement hybrid MPI/OpenMP versions of SP and BT, and compare the performance of the hybrid SP and BT with their MPI counterparts on large-scale multicore supercomputers. Our performance results indicate that the hybrid SP outperforms the MPI SP by up to 20.76%, and the hybrid BT outperforms the MPI BT by up to 8.58%, on up to 10,000 cores on BlueGene/P at Argonne National Laboratory and Jaguar (Cray XT4/5) at Oak Ridge National Laboratory. We also use performance tools and MPI trace libraries available on these supercomputers to further investigate the performance characteristics of the hybrid SP and BT.

  16. Performance characteristics of hybrid MPI/OpenMP implementations of NAS parallel benchmarks SP and BT on large-scale multicore supercomputers

    KAUST Repository

    Wu, Xingfu

    2011-03-29

    The NAS Parallel Benchmarks (NPB) are well-known applications with fixed algorithms for evaluating parallel systems and tools. Multicore supercomputers provide a natural programming paradigm for hybrid programs, whereby OpenMP is used for data sharing among the cores that comprise a node and MPI is used for communication between nodes. In this paper, we use the SP and BT benchmarks of MPI NPB 3.3 as a basis for a comparative approach to implement hybrid MPI/OpenMP versions of SP and BT, and compare the performance of the hybrid SP and BT with their MPI counterparts on large-scale multicore supercomputers. Our performance results indicate that the hybrid SP outperforms the MPI SP by up to 20.76%, and the hybrid BT outperforms the MPI BT by up to 8.58%, on up to 10,000 cores on BlueGene/P at Argonne National Laboratory and Jaguar (Cray XT4/5) at Oak Ridge National Laboratory. We also use performance tools and MPI trace libraries available on these supercomputers to further investigate the performance characteristics of the hybrid SP and BT.

  17. Managing a Modern University Research Center.

    Science.gov (United States)

    Veres, John G., III

    1988-01-01

    The university research center of the future will function best to serve the rapidly changing public and private demand for services with a highly trained core staff, adequately funded and equipped, whose morale and quality of work performance is a prime consideration. (MSE)

  18. Genomics:GTL Bioenergy Research Centers White Paper

    Energy Technology Data Exchange (ETDEWEB)

    Mansfield, Betty Kay [ORNL; Alton, Anita Jean [ORNL; Andrews, Shirley H [ORNL; Bownas, Jennifer Lynn [ORNL; Casey, Denise [ORNL; Martin, Sheryl A [ORNL; Mills, Marissa [ORNL; Nylander, Kim [ORNL; Wyrick, Judy M [ORNL; Drell, Dr. Daniel [Office of Science, Department of Energy; Weatherwax, Sharlene [U.S. Department of Energy; Carruthers, Julie [U.S. Department of Energy

    2006-08-01

    In his Advanced Energy Initiative announced in January 2006, President George W. Bush committed the nation to new efforts to develop alternative sources of energy to replace imported oil and fossil fuels. Developing cost-effective and energy-efficient methods of producing renewable alternative fuels such as cellulosic ethanol from biomass and solar-derived biofuels will require transformational breakthroughs in science and technology; incremental improvements in current bioenergy production methods will not suffice. The Genomics:GTL Bioenergy Research Centers will be dedicated to fundamental research on microbe and plant systems with the goal of developing knowledge that will advance biotechnology-based strategies for biofuels production. The aim is to spur substantial progress toward cost-effective production of biologically based renewable energy sources. This document describes the rationale for the establishment of the centers and their objectives in light of the U.S. Department of Energy's mission and goals. The focus on microbes (for cellular mechanisms) and plants (for source biomass) fundamentally exploits capabilities well known to exist in the microbial world. Thus 'proof of concept' is not required, but considerable basic research into these capabilities remains an urgent priority. Several developments have converged in recent years to suggest that systems biology research into microbes and plants promises solutions that will overcome critical roadblocks on the path to cost-effective, large-scale production of cellulosic ethanol and other renewable energy from biomass. The ability to rapidly sequence the DNA of any organism is a critical part of these new

  19. Building Technologies Research and Integration Center (BTRIC)

    Data.gov (United States)

    Federal Laboratory Consortium — The Building Technologies Research and Integration Center (BTRIC), in the Energy and Transportation Science Division (ETSD) of Oak Ridge National Laboratory (ORNL),...

  20. A research on the enhancement of research management efficiency for the division of research, Korea cancer center hospital

    International Nuclear Information System (INIS)

    Lee, S. W.; Ma, K. H.; Kim, J. R.; Lee, D. C.; Lee, J. H.

    1999-06-01

    The research activities of Korea Cancer Center Hospital have increased for the past few years in proportion to the increase of the research budget, but the assisting manpower of the office of research management has never been increased, and the indications are that the internal and external circumstances will not allow recruitment for a fairly long time. It has, therefore, become inevitable to enhance the work efficiency of the office by analyzing the administrative research assistance system, finding out problems and inefficiency factors, and suggesting possible answers to them. The office of research management and international cooperation has conducted this research to suggest possible ways to facilitate the administrative support for the research activities of Korea Cancer Center Hospital. By analyzing the change of the research budget, the organization of the division of research and administrative support, manpower, and the administrative research support systems of other institutes, we suggested possible ways to enhance the work efficiency of administrative research support and developed a related database program. The research report will serve as data for the organization of the research support division when the Radiation Medicine Research Center is established. The database program has already been used for research budget management.

  1. Recruiting community health centers into pragmatic research: Findings from STOP CRC.

    Science.gov (United States)

    Coronado, Gloria D; Retecki, Sally; Schneider, Jennifer; Taplin, Stephen H; Burdick, Tim; Green, Beverly B

    2016-04-01

    Challenges of recruiting participants into pragmatic trials, particularly at the level of the health system, remain largely unexplored. As part of Strategies and Opportunities to STOP Colon Cancer in Priority Populations (STOP CRC), we recruited eight separate community health centers (consisting of 26 individual safety net clinics) into a large comparative effectiveness pragmatic study to evaluate methods of raising the rates of colorectal cancer screening. In partnership with STOP CRC's advisory board, we defined criteria to identify eligible health centers and applied these criteria to a list of health centers in Washington, Oregon, and California affiliated with Oregon Community Health Information Network, a 16-state practice-based research network of federally sponsored health centers. Project staff contacted centers that met eligibility criteria and arranged in-person meetings of key study investigators with health center leadership teams. We used the Consolidated Framework for Implementation Research to thematically analyze the content of discussions during these meetings to identify major facilitators of and barriers to health center participation. From an initial list of 41 health centers, 11 met the initial inclusion criteria. Of these, leaders at three centers declined and at eight centers (26 clinic sites) agreed to participate (73%). Participating and nonparticipating health centers were similar with respect to clinic size, percent Hispanic patients, and percent uninsured patients. Participating health centers had higher proportions of Medicaid patients and higher baseline colorectal cancer screening rates. Common facilitators of participation were perception by center leadership that the project was an opportunity to increase colorectal cancer screening rates and to use electronic health record tools for population management. Barriers to participation were concerns of center leaders about ability to provide fecal testing to and assure follow-up of

  2. Leveraging the national cyberinfrastructure for biomedical research.

    Science.gov (United States)

    LeDuc, Richard; Vaughn, Matthew; Fonner, John M; Sullivan, Michael; Williams, James G; Blood, Philip D; Taylor, James; Barnett, William

    2014-01-01

    In the USA, the national cyberinfrastructure refers to a system of research supercomputers and other IT facilities and the high-speed networks that connect them. These resources have been heavily leveraged by scientists in disciplines such as high-energy physics, astronomy, and climatology, but until recently they have been little used by biomedical researchers. We suggest that many of the 'Big Data' challenges facing the medical informatics community can be efficiently handled using national-scale cyberinfrastructure. Resources such as the Extreme Science and Engineering Discovery Environment, the Open Science Grid, and Internet2 provide economical and proven infrastructures for Big Data challenges, but these resources can be difficult to approach. Specialized web portals, support centers, and virtual organizations can be constructed on these resources to meet defined computational challenges, specifically for genomics. We provide examples of how this has been done in basic biology as an illustration for the biomedical informatics community.

  3. A research plan based on high intensity proton accelerator Neutron Science Research Center

    International Nuclear Information System (INIS)

    Mizumoto, Motoharu

    1997-01-01

    A plan called Neutron Science Research Center (NSRC) has been proposed in JAERI. The center is a complex composed of research facilities based on a proton linac with an energy of 1.5 GeV and an average current of 10 mA. The research facilities will consist of Thermal/Cold Neutron Facility, Neutron Irradiation Facility, Neutron Physics Facility, OMEGA/Nuclear Energy Facility, Spallation RI Beam Facility, Meson/Muon Facility and Medium Energy Experiment Facility, where the high intensity proton beam and secondary particle beams such as neutron, pion, muon and unstable radioisotope (RI) beams generated from the proton beam will be utilized for innovative research in the fields of nuclear engineering and basic sciences. (author)

  4. A research plan based on high intensity proton accelerator Neutron Science Research Center

    Energy Technology Data Exchange (ETDEWEB)

    Mizumoto, Motoharu [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1997-03-01

    A plan called Neutron Science Research Center (NSRC) has been proposed in JAERI. The center is a complex composed of research facilities based on a proton linac with an energy of 1.5 GeV and an average current of 10 mA. The research facilities will consist of Thermal/Cold Neutron Facility, Neutron Irradiation Facility, Neutron Physics Facility, OMEGA/Nuclear Energy Facility, Spallation RI Beam Facility, Meson/Muon Facility and Medium Energy Experiment Facility, where the high intensity proton beam and secondary particle beams such as neutron, pion, muon and unstable radioisotope (RI) beams generated from the proton beam will be utilized for innovative research in the fields of nuclear engineering and basic sciences. (author)

  5. A Program of Research and Education in Astronautics at the NASA Langley Research Center

    Science.gov (United States)

    Tolson, Robert H.

    2000-01-01

    The objectives of the Program were to conduct research at the NASA Langley Research Center in the area of astronautics and to provide a comprehensive education program at the Center leading to advanced degrees in Astronautics. We believe that the program has successfully met the objectives and has been of significant benefit to NASA LaRC, the GWU and the nation.

  6. The DESY Research Center

    International Nuclear Information System (INIS)

    Waloschek, P.

    1988-01-01

    On November 12, 1964, the 6 GeV electron synchrotron and the associated utility facilities were dedicated for regular operation. Since that date, the DESY Research Center, the German Electron Synchrotron in Hamburg, has offered to scientists from all over the world unique facilities in which to study the smallest constituents of matter. At present, some 580 physicists participate in DESY's research work on particle physics and high energy physics. Most of them are university teachers, a great many come from abroad. Their home institutions make considerable contributions to setting up the measuring equipment. Another 500 physicists annually make use of the extensive synchrotron radiation facilities available at DESY. DESY is one of the thirteen national research laboratories in the Federal Republic of Germany; its annual government grants for operation and personnel (1300 staff members in 1988) amount to some DM 150 million. In addition, some DM 950 million will be invested in the construction of the new HERA facility between 1984 and 1990, of which 15% will be contributed by foreign institutions. The ordinary budget of DESY is paid 90% by the German Federal Ministry for Research and Technology (BMFT) and 10% by the city of Hamburg. (orig.)

  7. High Energy Astrophysics Science Archive Research Center

    Data.gov (United States)

    National Aeronautics and Space Administration — The High Energy Astrophysics Science Archive Research Center (HEASARC) is the primary archive for NASA missions dealing with extremely energetic phenomena, from...

  8. Simulation of x-rays in refractive structure by the Monte Carlo method using the supercomputer SKIF

    International Nuclear Information System (INIS)

    Yaskevich, Yu.R.; Kravchenko, O.I.; Soroka, I.I.; Chembrovskij, A.G.; Kolesnik, A.S.; Serikova, N.V.; Petrov, P.V.; Kol'chevskij, N.N.

    2013-01-01

    The software 'Xray-SKIF', for simulating X-rays in refractive structures by the Monte Carlo method on the supercomputer SKIF BSU, was developed. The program generates a large number of rays propagating from a source to the refractive structure. Each ray trajectory is calculated under the assumption of geometrical optics, and absorption is computed for each ray inside the refractive structure. Dynamic arrays store the calculated ray parameters, which allows the X-ray field distribution to be restored very quickly at different detector positions. It was found that increasing the number of processors leads to a proportional decrease in calculation time: simulating 10^8 X-rays with 1 and 30 processors took 3 hours and 6 minutes, respectively. 10^9 X-rays were calculated with 'Xray-SKIF', which allows the X-ray field behind the refractive structure to be reconstructed with a spatial resolution of 1 micron. (authors)
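    The general scheme the abstract describes — many independent rays traced geometrically through an absorbing structure, with per-ray attenuation accumulated on a detector — can be sketched as follows. This is not the Xray-SKIF code; it is a minimal Python illustration with a uniform absorbing slab and entirely hypothetical parameter values.

```python
import numpy as np

def trace_rays(n_rays, slab_thickness_um=100.0, mu_per_um=0.01,
               source_half_angle=1e-4, detector_dist_um=1e6, seed=0):
    """Monte Carlo ray-trace sketch: a fan of rays from a point source
    crosses a uniform absorbing slab (Beer-Lambert attenuation) and is
    binned on a 1D detector.  Illustrative only, not Xray-SKIF."""
    rng = np.random.default_rng(seed)
    # sample ray angles uniformly within the source divergence cone
    angles = rng.uniform(-source_half_angle, source_half_angle, n_rays)
    # geometrical optics: path length inside the slab grows as 1/cos(angle)
    path = slab_thickness_um / np.cos(angles)
    # absorption per ray inside the refractive structure
    weight = np.exp(-mu_per_um * path)
    # straight-line propagation to the detector plane
    x_det = detector_dist_um * np.tan(angles)
    # accumulate the attenuated intensity distribution on the detector
    hist, edges = np.histogram(x_det, bins=50, weights=weight)
    return hist, edges

hist, edges = trace_rays(100_000)
```

    Because each ray is independent, the loop parallelizes trivially: the near-linear speedup reported in the abstract (1 vs. 30 processors) is the expected behavior when rays are simply split across workers.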

  9. RCOP: Research Center for Optical Physics

    Science.gov (United States)

    Tabibi, Bagher M. (Principal Investigator)

    1996-01-01

    During the five years since its inception, the Research Center for Optical Physics (RCOP) has excelled in the goals stated in the original proposal: 1) training of the scientists and engineers needed for the twenty-first century with special emphasis on underrepresented citizens and 2) research and technological development in areas of relevance to NASA. In the category of research training, there have been 16 Bachelors degrees and 9 Masters degrees awarded to African American students working in RCOP during the last five years. RCOP has also provided research experience to undergraduate and high school students through a number of outreach programs held during the summer and the academic year. RCOP has also been instrumental in the development of the Ph.D. program in physics, which is in its fourth year at Hampton. There are currently over 40 graduate students in the program and 9 African American graduate students working in RCOP who have satisfied all of the requirements for Ph.D. candidacy and are working on their dissertation research. At least three of these students will be awarded their doctoral degrees during 1997. RCOP has also excelled in research and technological development. During the first five years of existence, RCOP researchers have generated well over $3 M in research funding that directly supports the Center. Close ties with NASA Langley and NASA Lewis have been established, and collaborations with NASA scientists, URCs and other universities as well as with industry have been developed. This success is evidenced by the rate of publishing research results in refereed journals, which now exceeds the goals of the original proposal (approx. 2 publications per faculty per year). Also, two patents have been awarded to RCOP scientists.

  10. NASA LANGLEY RESEARCH CENTER AND THE TIDEWATER INTERAGENCY POLLUTION PREVENTION PROGRAM

    Science.gov (United States)

    National Aeronautics and Space Administration (NASA)'s Langley Research Center (LaRC) is an 807-acre research center devoted to aeronautics and space research. LaRC has initiated a broad-based pollution prevention program guided by a Pollution Prevention Program Plan and implement...

  11. 34 CFR 350.34 - Which Rehabilitation Engineering Research Centers must have an advisory committee?

    Science.gov (United States)

    2010-07-01

    ... 34 Education 2 2010-07-01 2010-07-01 false Which Rehabilitation Engineering Research Centers must... Engineering Research Centers Does the Secretary Assist? § 350.34 Which Rehabilitation Engineering Research Centers must have an advisory committee? A Rehabilitation Engineering Research Center conducting research...

  12. SWOT analysis in Sina Trauma and Surgery Research Center.

    Science.gov (United States)

    Salamati, Payman; Ashraf Eghbali, Ali; Zarghampour, Manijeh

    2014-01-01

    The present study was conducted with the aim of identifying and evaluating the internal and external factors affecting the Sina Trauma and Surgery Research Center, affiliated with Tehran University of Medical Sciences, and proposing related strategies to senior managers. We used a combined quantitative and qualitative methodology. Our study population consisted of the personnel (18 individuals) of Sina Trauma and Surgery Research Center. Data-collection tools were group discussions and questionnaires. Data were analyzed with descriptive statistics and SWOT (Strengths, Weaknesses, Opportunities and Threats) analysis. Eighteen individuals participated in the sessions, consisting of 8 women (44.4%) and 10 men (55.6%). The final scores were 2.45 for internal factors (strengths-weaknesses) and 2.17 for external factors (opportunities-threats). In this study, we proposed 36 strategies (10 weakness-threat strategies, 10 weakness-opportunity strategies, 7 strength-threat strategies, and 9 strength-opportunity strategies). The current status of Sina Trauma and Surgery Research Center falls in the weakness-threat quadrant. We recommend the center to implement the proposed strategies.
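    The quadrant assignment behind scores like these can be sketched in a few lines. The 2.5 midpoint used below is the usual convention in IFE/EFE-style scoring matrices, assumed here rather than taken from the study itself.

```python
def swot_quadrant(internal_score, external_score, midpoint=2.5):
    """Classify a SWOT position from aggregate factor scores.
    Internal scores below the midpoint indicate net weakness,
    external scores below it indicate net threat; the 2.5 cutoff
    is a common IFE/EFE convention (an assumption, not from the
    study)."""
    i = "strength" if internal_score >= midpoint else "weakness"
    e = "opportunity" if external_score >= midpoint else "threat"
    return f"{i}-{e}"

# The reported scores (2.45 internal, 2.17 external) both fall
# below the midpoint, placing the center in the WT quadrant:
print(swot_quadrant(2.45, 2.17))  # weakness-threat
```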

  13. Building research infrastructure in community health centers: a Community Health Applied Research Network (CHARN) report.

    Science.gov (United States)

    Likumahuwa, Sonja; Song, Hui; Singal, Robbie; Weir, Rosy Chang; Crane, Heidi; Muench, John; Sim, Shao-Chee; DeVoe, Jennifer E

    2013-01-01

    This article introduces the Community Health Applied Research Network (CHARN), a practice-based research network of community health centers (CHCs). Established by the Health Resources and Services Administration in 2010, CHARN is a network of 4 community research nodes, each with multiple affiliated CHCs and an academic center. The four nodes (18 individual CHCs and 4 academic partners in 9 states) are supported by a data coordinating center. Here we provide case studies detailing how CHARN is building research infrastructure and capacity in CHCs, with a particular focus on how community practice-academic partnerships were facilitated by the CHARN structure. The examples provided by the CHARN nodes include many of the building blocks of research capacity: communication capacity and "matchmaking" between providers and researchers; technology transfer; research methods tailored to community practice settings; and community institutional review board infrastructure to enable community oversight. We draw lessons learned from these case studies that we hope will serve as examples for other networks, with special relevance for community-based networks seeking to build research infrastructure in primary care settings.

  14. Annual report of R and D activities in Center for Promotion of Computational Science and Engineering and Center for Computational Science and e-Systems from April 1, 2005 to March 31, 2006

    International Nuclear Information System (INIS)

    2007-03-01

    This report provides an overview of research and development activities in the Center for Computational Science and Engineering (CCSE), JAERI, in the former half of the fiscal year 2005 (April 1, 2005 - Sep. 30, 2005) and those in the Center for Computational Science and e-Systems (CCSE), JAEA, in the latter half of the fiscal year 2005 (Oct 1, 2005 - March 31, 2006). In the former half term, the activities were performed by 5 research groups: Research Group for Computational Science in Atomic Energy, Research Group for Computational Material Science in Atomic Energy, R and D Group for Computer Science, R and D Group for Numerical Experiments, and Quantum Bioinformatics Group in CCSE. At the beginning of the latter half term, these 5 groups were integrated into two offices, Simulation Technology Research and Development Office and Computer Science Research and Development Office, at the moment of the unification of JNC (Japan Nuclear Cycle Development Institute) and JAERI (Japan Atomic Energy Research Institute), and the latter-half term activities were operated by the two offices. A big project, the ITBL (Information Technology Based Laboratory) project, and fundamental computational research for atomic energy plants were performed mainly by two groups, the R and D Group for Computer Science and the Research Group for Computational Science in Atomic Energy in the former half term, and by their integrated office, the Computer Science Research and Development Office, in the latter half term, respectively. The main result was the verification, using structural analysis of a real plant executable on the Grid environment, which received an Honorable Mention in the Analytic Challenge at the conference 'Supercomputing (SC05)'. The materials science and bioinformatics in the atomic energy research field were carried out by three groups, Research Group for Computational Material Science in Atomic Energy, R and D Group for Computer Science, R and D Group for Numerical Experiments, and Quantum Bioinformatics

  15. Natural and Accelerated Bioremediation Research (NABIR) Field Research Center (FRC) Management Plan

    Energy Technology Data Exchange (ETDEWEB)

    Watson, D.B.

    2002-02-28

    The Environmental Sciences Division at Oak Ridge National Laboratory has established a Field Research Center (FRC) to support the Natural and Accelerated Bioremediation Research (NABIR) Program on the U.S. Department of Energy (DOE) Oak Ridge Reservation in Oak Ridge, Tennessee for the DOE Headquarters Office of Biological and Environmental Research within the Office of Science.

  16. On-going research projects at Ankara Nuclear research center in agriculture and animal science

    International Nuclear Information System (INIS)

    Tukenmez, I.

    2004-01-01

    Full text: The research and development activities of Ankara Nuclear Research Center in Agriculture and Animal Science (ANRCAA) are concentrated on the contribution of atomic energy to peace by the use of nuclear and related techniques in food, agriculture and animal science. Nuclear techniques are used in the above fields in two ways: in vitro or in vivo radio-tracing of substances and processes of biological importance, and irradiation of biological materials for preservation and quality modification. Research projects are carried out through interdisciplinary studies in well-equipped laboratories at the Center. The projects in progress conducted by the Center comprise nuclear-aided research in soil fertility, plant nutrition, plant protection, improvement of field crops, improvement of horticultural plants and forest trees by mutation breeding, in vitro culture techniques with mutagen treatments, use of phosphogypsum in soil amelioration, sterilization of medical supplies, wastewater treatment, animal nutrition, animal health and productivity, and accreditation. The on-going projects on the above subjects will be summarized for possible collaborations

  17. On-going research projects at Ankara Nuclear Research Center in Agriculture and Animal Science

    International Nuclear Information System (INIS)

    Tukenmez, I.

    2004-01-01

    Full text: The research and development activities of Ankara Nuclear Research Center in Agriculture and Animal Science (ANRCAA) are concentrated on the contribution of atomic energy to peace by the use of nuclear and related techniques in food, agriculture and animal science. Nuclear techniques are used in the above fields in two ways: in vitro or in vivo radio-tracing of substances and processes of biological importance, and irradiation of biological materials for preservation and quality modification. Research projects are carried out through interdisciplinary studies in well-equipped laboratories at the Center. The projects in progress conducted by the Center comprise nuclear-aided research in soil fertility, plant nutrition, plant protection, improvement of field crops, improvement of horticultural plants and forest trees by mutation breeding, in vitro culture techniques with mutagen treatments, use of phosphogypsum in soil amelioration, sterilization of medical supplies, wastewater treatment, animal nutrition, animal health and productivity, and accreditation. The on-going projects on the above subjects will be summarized for possible collaborations

  18. CUBED: South Dakota 2010 Research Center For Dusel Experiments

    International Nuclear Information System (INIS)

    Keller, Christina; Alton, Drew; Bai Xinhau; Durben, Dan; Heise, Jaret; Hong Haiping; Howard, Stan; Jiang Chaoyang; Keeter, Kara; McTaggart, Robert; Medlin, Dana; Mei Dongming; Petukhov, Andre; Rauber, Joel; Roggenthen, Bill; Spaans, Jason; Sun Yongchen; Szczerbinska, Barbara; Thomas, Keenan; Zehfus, Michael

    2010-01-01

    With the selection of the Homestake Mine in western South Dakota by the National Science Foundation (NSF) as the site for a national Deep Underground Science and Engineering Laboratory (DUSEL), the state of South Dakota has sought ways to engage its faculty and students in activities planned for DUSEL. One such effort is the creation of a 2010 Research Center focused on ultra-low background experiments, the Center for Ultra-low Background Experiments at DUSEL (CUBED). The goals of this center are to 1) bring together current South Dakota faculty to begin developing the critical mass of expertise necessary for South Dakota's full participation in large-scale collaborations planned for DUSEL; 2) increase the number of research faculty and other research personnel in South Dakota to complement and supplement existing expertise in nuclear physics and materials sciences; 3) be competitive in the pursuit of external funding through the creation of a center focused on areas of interest to experiments planned for DUSEL, such as an underground crystal growth lab, a low background counting facility, a purification/depletion facility for noble liquids, and an underground electroforming copper facility; and 4) train and educate graduate and undergraduate students as a way to develop the scientific workforce of the state. We will provide an update on the activities of the center and describe its scientific foci in more detail.

  19. New Center Links Earth, Space, and Information Sciences

    Science.gov (United States)

    Aswathanarayana, U.

    2004-05-01

    Broad-based geoscience instruction melding the Earth, space, and information technology sciences has been identified as an effective way to take advantage of the new jobs created by technological innovations in natural resources management. Based on this paradigm, the University of Hyderabad in India is developing a Centre of Earth and Space Sciences that will be linked to the university's supercomputing facility. The proposed center will provide the basic science underpinnings for the Earth, space, and information technology sciences; develop new methodologies for the utilization of natural resources such as water, soils, sediments, minerals, and biota; mitigate the adverse consequences of natural hazards; and design innovative ways of incorporating scientific information into the legislative and administrative processes. For these reasons, the ethos and the innovatively designed management structure of the center would be of particular relevance to developing countries. India holds 17% of the world's human population, and 30% of its farm animals, but only about 2% of the planet's water resources. Water will hence constitute the core concern of the center, because ecologically sustainable, socially equitable, and economically viable management of the water resources of the country holds the key to the quality of life (drinking water, sanitation, and health), food security, and industrial development of the country. The center will be focused on interdisciplinary basic and applied research that is relevant to the practical needs of India as a developing country. These include, for example, climate prediction, since India is heavily dependent on the monsoon system, and satellite remote sensing of soil moisture, since agriculture is still a principal source of livelihood in India. The center will perform research and development in areas such as data assimilation and validation, and identification of new sensors to be mounted on the Indian meteorological

  20. Proceedings of RIKEN BNL Research Center Workshop

    Energy Technology Data Exchange (ETDEWEB)

    Samios, Nicholas P. [Brookhaven National Lab. (BNL), Upton, NY (United States)

    2013-01-24

    The twelfth evaluation of the RIKEN BNL Research Center (RBRC) took place on November 6 – 8, 2012 at Brookhaven National Laboratory. The members of the Scientific Review Committee (SRC) present at the meeting were: Prof. Wit Busza, Prof. Miklos Gyulassy, Prof. Kenichi Imai, Prof. Richard Milner (Chair), Prof. Alfred Mueller, Prof. Charles Young Prescott, and Prof. Akira Ukawa. We are pleased that Dr. Hideto En’yo, the Director of the Nishina Institute of RIKEN, Japan, participated in this meeting, both informing the committee of the activities of the RIKEN Nishina Center for Accelerator-Based Science and the role of RBRC, and serving as an observer of this review. In order to illustrate the breadth and scope of the RBRC program, each member of the Center made a presentation on his/her research efforts. This encompassed three major areas of investigation: theoretical, experimental and computational physics. In addition, the committee met privately with the fellows and postdocs to ascertain their opinions and concerns. Although the main purpose of this review is a report to RIKEN management on the health, scientific value, management and future prospects of the Center, the RBRC management felt that a compendium of the scientific presentations is of sufficient quality and interest that it warrants a wider distribution. Therefore we have made this compilation and present it to the community for its information and enlightenment.

  1. Solar Energy Research Center Instrumentation Facility

    Energy Technology Data Exchange (ETDEWEB)

    Meyer, Thomas, J.; Papanikolas, John, P.

    2011-11-11

    SOLAR ENERGY RESEARCH CENTER INSTRUMENTATION FACILITY The mission of the Solar Energy Research Center (UNC SERC) at the University of North Carolina at Chapel Hill (UNC-CH) is to establish a world-leading effort in solar fuels research and to develop the materials and methods needed to fabricate the next generation of solar energy devices. We are addressing the fundamental issues that will drive new strategies for solar energy conversion and the engineering challenges that must be met in order to convert discoveries made in the laboratory into commercially available devices. The development of a photoelectrosynthesis cell (PEC) for solar fuels production faces daunting requirements: (1) Absorb a large fraction of sunlight; (2) Carry out artificial photosynthesis, which involves multiple complex reaction steps; (3) Avoid competitive and deleterious side and reverse reactions; (4) Perform 13 million catalytic cycles per year with minimal degradation; (5) Use non-toxic materials; (6) Cost-effectiveness. PEC efficiency is directly determined by the kinetics of each reaction step. The UNC SERC is addressing this challenge by taking a broad interdisciplinary approach in a highly collaborative setting, drawing on expertise across a broad range of disciplines in chemistry, physics and materials science. By taking a systematic approach toward a fundamental understanding of the mechanism of each step, we will be able to gain unique insight and optimize PEC design. Access to cutting-edge spectroscopic tools is critical to this research effort. We have built professionally staffed facilities equipped with the state-of-the-art instrumentation funded by this award. The combination of staff, facilities, and instrumentation specifically tailored for solar fuels research establishes the UNC Solar Energy Research Center Instrumentation Facility as a unique, world-class capability. This congressionally directed project funded the development of two user facilities: TASK 1: SOLAR

  2. NASA Lewis Research Center's materials and structures division

    International Nuclear Information System (INIS)

    Weymueller, C.R.

    1976-01-01

    Research activities at the NASA Lewis Research Center on materials and structures are discussed. Programs are noted on powder metallurgy superalloys, eutectic alloys, dispersion strengthened alloys and composite materials. Discussions are included on materials applications, coatings, fracture mechanics, and fatigue

  3. High Performance Computing in Science and Engineering '15 : Transactions of the High Performance Computing Center

    CERN Document Server

    Kröner, Dietmar; Resch, Michael

    2016-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS) in 2015. The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  4. High Performance Computing in Science and Engineering '17 : Transactions of the High Performance Computing Center

    CERN Document Server

    Kröner, Dietmar; Resch, Michael; HLRS 2017

    2018-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS) in 2017. The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  5. The GF11 supercomputer

    International Nuclear Information System (INIS)

    Beetem, J.; Weingarten, D.

    1986-01-01

    GF11 is a parallel computer currently under construction at the IBM Yorktown Research Center. The machine incorporates 576 floating-point processors arranged in a modified SIMD architecture. Each has space for 2 Mbytes of memory and is capable of 20 Mflops, giving the total machine a peak of 1.125 Gbytes of memory and 11.52 Gflops. The floating-point processors are interconnected by a dynamically reconfigurable non-blocking switching network. At each machine cycle any of 1024 pre-selected permutations of data can be realized among the processors. The main intended application of GF11 is a class of calculations arising from quantum chromodynamics
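    The data-routing idea in this architecture — a table of pre-selected permutations, one of which is applied across all processors in a single cycle — can be mimicked with index arrays. This is a hypothetical Python/NumPy sketch of the concept, not IBM's switching-network implementation; the random permutation table merely stands in for whatever 1024 settings a real program would pre-select.

```python
import numpy as np

N_PROC = 576  # GF11's processor count

def make_permutation_table(n_perms, n_proc, seed=0):
    """A table of pre-selected permutations, standing in for the
    1024 switch settings GF11 could choose among each cycle
    (random here; a real code would pre-select useful routings)."""
    rng = np.random.default_rng(seed)
    return np.array([rng.permutation(n_proc) for _ in range(n_perms)])

def exchange(data, perm):
    """One 'machine cycle': every processor i sends its value to
    processor perm[i] simultaneously.  Because perm is a bijection,
    no two messages target the same destination - the non-blocking
    property of the switching network."""
    out = np.empty_like(data)
    out[perm] = data
    return out

table = make_permutation_table(1024, N_PROC)
data = np.arange(N_PROC)          # each processor holds its own rank
shuffled = exchange(data, table[0])
restored = shuffled[table[0]]     # routing back through the inverse
assert np.array_equal(restored, data)
```

    Applying the inverse indexing recovers the original layout, which is why a fixed table of bijective routings suffices for the regular nearest-neighbor and gather patterns that lattice QCD codes need.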

  6. The GF11 supercomputer

    International Nuclear Information System (INIS)

    Beetem, J.; Denneau, M.; Weingarten, D.

    1985-01-01

    GF11 is a parallel computer currently under construction at the IBM Yorktown Research Center. The machine incorporates 576 floating- point processors arranged in a modified SIMD architecture. Each has space for 2 Mbytes of memory and is capable of 20 Mflops, giving the total machine a peak of 1.125 Gbytes of memory and 11.52 Gflops. The floating-point processors are interconnected by a dynamically reconfigurable nonblocking switching network. At each machine cycle any of 1024 pre-selected permutations of data can be realized among the processors. The main intended application of GF11 is a class of calculations arising from quantum chromodynamics

  7. Research overview at USDA-ARS Coastal Plains, Soil, Water and Plant Research Center, and potential collaborative research projects with RDA - NIAS

    Science.gov (United States)

    The Center at Florence is one of the ninety research units of the United States Department of Agriculture - Agricultural Research Service (USDA-ARS). The mission of the Center is to conduct research and transfer solutions that improve agricultural production, protect the environment, and enhance the...

  8. Center for risk research: A review of work 1988-1991

    International Nuclear Information System (INIS)

    Sjoeberg, L.

    1992-01-01

    This report gives a summary of the research published during the first 4 years of the Center for Risk Research at the Stockholm School of Economics. Risk research carried out so far at the Center has been concerned with mapping of attitudes and risk perceptions with regard to nuclear risks, AIDS, military flight risks, and economic risks. There has also been some methodological work and some work on the relationship between risk perception and interests

  9. Center for risk research: A review of work 1988-1991

    Energy Technology Data Exchange (ETDEWEB)

    Sjoeberg, L

    1992-01-01

    This report gives a summary of the research published during the first 4 years of the Center for Risk Research at the Stockholm School of Economics. Risk research carried out so far at the Center has been concerned with mapping of attitudes and risk perceptions with regard to nuclear risks, AIDS, military flight risks, and economic risks. There has also been some methodological work and some work on the relationship between risk perception and interests.

  10. Role Strain in University Research Centers

    Science.gov (United States)

    Boardman, Craig; Bozeman, Barry

    2007-01-01

    One way in which university faculty members' professional lives have become more complex with the advent of contemporary university research centers is that many faculty have taken on additional roles. The authors' concern in this article is to determine the extent to which role strain is experienced by university faculty members who are…

  11. Synthesis centers as critical research infrastructure

    Science.gov (United States)

    Baron, Jill S.; Specht, Alison; Garnier, Eric; Bishop, Pamela; Campbell, C. Andrew; Davis, Frank W.; Fady, Bruno; Field, Dawn; Gross, Louis J.; Guru, Siddeswara M.; Halpern, Benjamin S; Hampton, Stephanie E.; Leavitt, Peter R.; Meagher, Thomas R.; Ometto, Jean; Parker, John N.; Price, Richard; Rawson, Casey H.; Rodrigo, Allen; Sheble, Laura A.; Winter, Marten

    2017-01-01

    investment to maximize benefits to science and society is justified. In particular, we argue that synthesis centers represent community infrastructure more akin to research vessels than to term-funded centers of science and technology (e.g., NSF Science and Technology Centers). Through our experience running synthesis centers and, in some cases, developing postfederal funding models, we offer our perspective on the purpose and value of synthesis centers. We present case studies of different outcomes of transition plans and argue for a fundamental shift in the conception of synthesis science and the strategic funding of these centers by government funding agencies.

  12. Grid will help physicists' global hunt for particles. Researchers have begun running experiments with the MidWest Tier 2 Center, one of five regional computing centers in the US.

    CERN Multimedia

    Ames, Ben

    2006-01-01

    "When physicists at Switzerland's CERN laboratory turn on their newest particle collider in 2007, they will rely on computer scientists in Chicago and Indianapolis to help sift through the results using a worldwide supercomputing grid." (1/2 page)

  13. Translational Partnership Development Lead | Center for Cancer Research

    Science.gov (United States)

    PROGRAM DESCRIPTION The Frederick National Laboratory for Cancer Research (FNLCR) is a Federally Funded Research and Development Center operated by Leidos Biomedical Research, Inc on behalf of the National Cancer Institute (NCI). The staff of FNLCR support the NCI’s mission in the fight against cancer and HIV/AIDS. Currently we are seeking a Translational Partnership

  14. Technologies and experimental approaches in the NIH Botanical Research Centers

    Science.gov (United States)

    Barnes, Stephen; Birt, Diane F; Cassileth, Barrie R; Cefalu, William T; Chilton, Floyd H; Farnsworth, Norman R; Raskin, Ilya; van Breemen, Richard B; Weaver, Connie M

    2009-01-01

    There are many similarities between research on combinatorial chemistry and natural products and research on dietary supplements and botanicals in the NIH Botanical Research Centers. The technologies in the centers are similar to those used by other NIH-sponsored investigators. All centers rigorously examine the authenticity of botanical dietary supplements and determine the composition and concentrations of the phytochemicals therein, most often by liquid chromatography–mass spectrometry. Several of the centers specialize in fractionation and high-throughput evaluation to identify the individual bioactive agent or a combination of agents. Some centers are using DNA microarray analyses to determine the effects of botanicals on gene transcription with the goal of uncovering the important biochemical pathways they regulate. Other centers focus on bioavailability and uptake, distribution, metabolism, and excretion of the phytochemicals as for all xenobiotics. Because phytochemicals are often complex molecules, synthesis of isotopically labeled forms is carried out by plant cells in culture, followed by careful fractionation. These labeled phytochemicals allow the use of accelerator mass spectrometry to trace the tissue distribution of 14C-labeled proanthocyanidins in animal models of disease. State-of-the-art proteomics and mass spectrometry are also used to identify proteins in selected tissues whose expression and posttranslational modification are influenced by botanicals and dietary supplements. In summary, the skills needed to carry out botanical centers’ research are extensive and may exceed those practiced by most NIH investigators. PMID:18258642

  15. Reorganizing the General Clinical Research Center to improve the clinical and translational research enterprise.

    Science.gov (United States)

    Allen, David; Ripley, Elizabeth; Coe, Antoinette; Clore, John

    2013-12-01

    In 2010, Virginia Commonwealth University (VCU) was granted a Clinical and Translational Science Award which prompted reorganization and expansion of their clinical research infrastructure. A case study approach is used to describe the implementation of a business and cost recovery model for clinical and translational research and the transformation of VCU's General Clinical Research Center and Clinical Trials Office to a combined Clinical Research Services entity. We outline the use of a Plan, Do, Study, Act cycle that facilitated a thoughtful transition process, which included the identification of required changes and cost recovery processes for implementation. Through this process, the VCU Center for Clinical and Translational Research improved efficiency, increased revenue recovered, reduced costs, and brought a high level of fiscal responsibility through financial reporting.

  16. Energy Frontier Research Center, Center for Materials Science of Nuclear Fuels

    International Nuclear Information System (INIS)

    Allen, Todd R.

    2011-01-01

    The Office of Science, Basic Energy Sciences, has funded the INL as one of the Energy Frontier Research Centers in the area of material science of nuclear fuels. This document is the required annual report to the Office of Science that outlines the accomplishments for the period of May 2010 through April 2011. The aim of the Center for Material Science of Nuclear Fuels (CMSNF) is to establish the foundation for predictive understanding of the effects of irradiation-induced defects on thermal transport in oxide nuclear fuels. The science driver of the center's investigation is to understand how complex defects and microstructures affect phonon-mediated thermal transport in UO2, and to achieve this understanding for the particular case of irradiation-induced defects and microstructures. The center's research thus includes modeling and measurement of thermal transport in oxide fuels with different levels of impurities, lattice disorder and irradiation-induced microstructure, as well as theoretical and experimental investigation of the evolution of disorder, stoichiometry and microstructure in nuclear fuel under irradiation. With the premise that thermal transport in irradiated UO2 is a phonon-mediated energy transport process in a crystalline material with defects and microstructure, a step-by-step approach will be utilized to understand the effects of types of defects and microstructures on the collective phonon dynamics in irradiated UO2. Our efforts under the thermal transport thrust involved both measurement of diffusive phonon transport (an approach that integrates over the entire phonon spectrum) and spectroscopic measurements of phonon attenuation/lifetime and phonon dispersion. Our distinct experimental efforts dovetail with our modeling effort involving atomistic simulation of phonon transport and prediction of lattice thermal conductivity using the Boltzmann transport framework.
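    For background on the phonon-transport picture described above: the simplest kinetic-theory estimate of lattice thermal conductivity is k = (1/3) C v l, where defects shorten the phonon mean free path l. A minimal sketch with illustrative, order-of-magnitude inputs (assumed values, not the center's measured data or model):

```python
# Kinetic-theory estimate of lattice thermal conductivity: k = (1/3) * C * v * l.
# All inputs are illustrative, order-of-magnitude values for a UO2-like oxide,
# not data from the CMSNF; irradiation defects act by shortening l.
C = 2.0e6    # volumetric heat capacity, J/(m^3 K)
v = 3.0e3    # average phonon group velocity, m/s
l = 5.0e-9   # phonon mean free path, m

k = C * v * l / 3.0
print(k)  # thermal conductivity in W/(m K)
```

    Halving the mean free path halves k in this model, which is the intuition behind measuring how irradiation-induced defects degrade fuel thermal transport.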

  17. Scientific activities 1980 Nuclear Research Center ''Democritos''

    International Nuclear Information System (INIS)

    1982-01-01

    The scientific activities and achievements of the Nuclear Research Center Democritos for the year 1980 are presented in the form of a list of 76 projects giving title, objectives, responsible of each project, developed activities and the pertaining lists of publications. The 16 chapters of this work cover the activities of the main Divisions of the Democritos NRC: Electronics, Biology, Physics, Chemistry, Health Physics, Reactor, Scientific Directorate, Radioisotopes, Environmental Radioactivity, Soil Science, Computer Center, Uranium Exploration, Medical Service, Technological Applications, Radioimmunoassay and Training. (N.C.)

  18. Moving from Damage-Centered Research through Unsettling Reflexivity

    Science.gov (United States)

    Calderon, Dolores

    2016-01-01

    The author revisits autoethnographic work in order to examine how she unwittingly incorporated damage-centered (Tuck 2009) research approaches that reproduce settler colonial understandings of marginalized communities. The paper examines the reproduction of settler colonial knowledge in ethnographic research by unearthing the inherent surveillance…

  19. Current research and development at the Nuclear Research Center Karlsruhe

    International Nuclear Information System (INIS)

    Kuesters, H.

    1982-01-01

    The Nuclear Research Center Karlsruhe (KfK) is funded to 90% by the Federal Republic of Germany and to 10% by the State of Baden-Wuerttemberg. Since its foundation in 1956, the main objective of the Center has been research and development (R and D) in the area of nuclear technology, and about 2/3 of the research capacity is now devoted to this field. Since 1960 a major activity of KfK has been R and D work for the design of fast breeder reactors, including material research, physics, and safety investigations; a prototype of 300 MWe is now under construction in the lower Rhine Valley. For enrichment of 235U fissile material, KfK developed the separation nozzle process; its technical application is realized within an international contract between the Federal Republic of Germany and Brazil. Within the frame of the European programme on fusion technology, KfK develops and tests superconducting magnets for toroidal fusion systems; a smaller activity deals with research on inertial confinement fusion. A broad research programme is carried out for safety investigations of nuclear installations, especially for PWRs; this activity is supplemented by research and development in the field of nuclear materials safeguards. Development of fast reactors has also initiated research on the reprocessing of spent fuel and waste disposal. In the pilot plant WAK, spent fuel from LWRs is reprocessed; research aims, for example, to improve the PUREX process by electrochemical means, and vitrification of high-level waste is another main activity. First studies are now being performed to clarify the development necessary for reprocessing fast reactor fuel. About 1/3 of the research capacity of KfK deals with fundamental research in nuclear physics, solid state physics, biology, and studies on the impact of technology on the environment. Promising new technologies, e.g. replacing gasoline by hydrogen cells for vehicle propulsion, are also investigated. (orig.)

  20. Staff Clinician | Center for Cancer Research

    Science.gov (United States)

    The Neuro-Oncology Branch (NOB), Center for Cancer Research (CCR), National Cancer Institute (NCI), National Institutes of Health (NIH) is seeking staff clinicians to provide high-quality patient care for individuals with primary central nervous system (CNS) malignancies.  The NOB is comprised of a multidisciplinary team of physicians, healthcare providers, and scientists who

  1. Decommissioning Operations at the Cadarache Nuclear Research Center

    International Nuclear Information System (INIS)

    Gouhier, E.

    2008-01-01

    Among the different activities of the CEA research center of Cadarache, located in the south of France, one of the most important involves decommissioning. As old facilities close, decommissioning activity increases. This presentation gives an overview of the existing organization and the different ongoing decommissioning and cleanup operations on the site. We also present some of the new facilities under construction whose purpose is to replace the decommissioned ones. The Cadarache research center was created on October 14, 1959. Today, the activities of the research center are shared out among several technological R and D platforms, essentially devoted to nuclear energy (fission and fusion). Acting as a support to these R and D activities, the center of Cadarache has a platform of services which groups the auxiliary services required by the nuclear facilities and those necessary to the management of nuclear materials, waste, nuclear facility releases and decommissioning. Many old facilities have shut down in recent years (replaced by new facilities) and a whole decommissioning program is now underway, involving the dismantling of nuclear reactors (Rapsodie, Harmonie), processing facilities (ATUE uranium treatment facility, LECA UO2 facility) as well as waste treatment and storage facilities (INB37, INB 56). In conclusion, other dismantling and cleanup operations now underway in Cadarache include the following: - Waste treatment and storage facilities, - Historical VLLW and HLW storage facility, - Fissile material storage building, - Historical spent fuel storage facility. Thanks to the project organization: - Costs and risks on these projects can be reduced. - Engineers and technicians can easily move from one project to another. In some cases, when a new facility is under construction for the purpose of replacing a decommissioned one, some of the project team can integrate the new facility as members of the operation team. Today

  2. AHPCRC - Army High Performance Computing Research Center

    Science.gov (United States)

    2010-01-01

    computing. Of particular interest is the ability of a distributed jamming network (DJN) to jam signals in all or part of a sensor or communications net...and reasoning, assistive technologies. FRIEDRICH (FRITZ) PRINZ, Finmeccanica Professor of Engineering, Robert Bosch Chair, Department of Engineering... High Performance Computing Research Center, www.ahpcrc.org. BARBARA BRYAN, AHPCRC Research and Outreach Manager, HPTi, (650) 604-3732, bbryan@hpti.com

  3. The National Center for Atmospheric Research (NCAR) Research Data Archive: a Data Education Center

    Science.gov (United States)

    Peng, G. S.; Schuster, D.

    2015-12-01

    The National Center for Atmospheric Research (NCAR) Research Data Archive (RDA), rda.ucar.edu, is not just another data center or data archive. It is a data education center. We not only serve data, we TEACH data. Weather and climate data is the original "Big Data" dataset and lessons learned while playing with weather data are applicable to a wide range of data investigations. Erroneous data assumptions are the Achilles heel of Big Data. It doesn't matter how much data you crunch if the data is not what you think it is. Each dataset archived at the RDA is assigned to a data specialist (DS) who curates the data. If a user has a question not answered in the dataset information web pages, they can call or email a skilled DS for further clarification. The RDA's diverse staff—with academic training in meteorology, oceanography, engineering (electrical, civil, ocean and database), mathematics, physics, chemistry and information science—means we likely have someone who "speaks your language." Data discovery is another difficult Big Data problem; one can only solve problems with data if one can find the right data. Metadata, both machine and human-generated, underpin the RDA data search tools. Users can quickly find datasets by name or dataset ID number. They can also perform a faceted search that successively narrows the options by user requirements or simply kick off an indexed search with a few words. Weather data formats can be difficult to read for non-expert users; it's usually packed in binary formats requiring specialized software and parameter names use specialized vocabularies. DSs create detailed information pages for each dataset and maintain lists of helpful software, documentation and links of information around the web. We further grow the level of sophistication of the users with tips, tutorials and data stories on the RDA Blog, http://ncarrda.blogspot.com/. How-to video tutorials are also posted on the NCAR Computational and Information Systems
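    The faceted search the record describes, successively narrowing candidate datasets by user requirements, can be sketched as a simple filter chain. The records and facet names below are hypothetical illustrations, not the RDA's actual metadata schema:

```python
# A minimal faceted-search sketch: each facet selection narrows the result set.
# The dataset records and facet keys here are hypothetical, not the RDA schema.
datasets = [
    {"id": "ds083.2", "type": "reanalysis",   "format": "GRIB",   "frequency": "6-hourly"},
    {"id": "ds094.0", "type": "reanalysis",   "format": "GRIB2",  "frequency": "hourly"},
    {"id": "ds277.0", "type": "observations", "format": "netCDF", "frequency": "monthly"},
]

def facet(records, **criteria):
    """Keep only the records matching every selected facet value."""
    return [r for r in records
            if all(r.get(k) == v for k, v in criteria.items())]

hits = facet(datasets, type="reanalysis")  # first facet: 2 candidates remain
hits = facet(hits, frequency="hourly")     # second facet narrows to 1
print([r["id"] for r in hits])  # ['ds094.0']
```

    A production implementation would also report the remaining value counts per facet, which is what lets a user see how each choice narrows the search.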

  4. Patient-centered outcomes research in radiology: trends in funding and methodology.

    Science.gov (United States)

    Lee, Christoph I; Jarvik, Jeffrey G

    2014-09-01

    The creation of the Patient-Centered Outcomes Research Trust Fund and the Patient-Centered Outcomes Research Institute (PCORI) through the Patient Protection and Affordable Care Act of 2010 presents new opportunities for funding patient-centered comparative effectiveness research (CER) in radiology. We provide an overview of the evolution of federal funding and priorities for CER with a focus on radiology-related priority topics over the last two decades, and discuss the funding processes and methodological standards outlined by PCORI. We introduce key paradigm shifts in research methodology that will be required on the part of radiology health services researchers to obtain competitive federal grant funding in patient-centered outcomes research. These paradigm shifts include direct engagement of patients and other stakeholders at every stage of the research process, from initial conception to dissemination of results. We will also discuss the increasing use of mixed methods and novel trial designs. One of these trial designs, the pragmatic trial, has the potential to be readily applied to evaluating the effectiveness of diagnostic imaging procedures and imaging-based interventions among diverse patient populations in real-world settings. Copyright © 2014 AUR. Published by Elsevier Inc. All rights reserved.

  5. The GF11 supercomputer

    International Nuclear Information System (INIS)

    Beetem, J.; Denneau, M.; Weingarten, D.

    1985-01-01

    GF11 is a parallel computer currently under construction at the IBM Yorktown Research Center. The machine incorporates 576 floating-point processors arranged in a modified SIMD architecture. Each processor has space for 2 Mbytes of memory and is capable of 20 MFLOPS, giving the total machine a peak of 1.125 Gbytes of memory and 11.52 GFLOPS. The floating-point processors are interconnected by a dynamically reconfigurable non-blocking switching network. At each machine cycle any of 1024 pre-selected permutations of data can be realized among the processors. The main intended application of GF11 is a class of calculations arising from quantum chromodynamics, a proposed theory of the elementary particles which participate in nuclear interactions.

  6. NASA Space Engineering Research Center for VLSI systems design

    Science.gov (United States)

    1991-01-01

    This annual review reports the center's activities and findings on very large scale integration (VLSI) systems design for 1990, including project status, financial support, publications, the NASA Space Engineering Research Center (SERC) Symposium on VLSI Design, research results, and outreach programs. Processor chips completed or under development are listed. Research results summarized include a design technique to harden complementary metal oxide semiconductors (CMOS) memory circuits against single event upset (SEU); improved circuit design procedures; and advances in computer aided design (CAD), communications, computer architectures, and reliability design. Also described is a high school teacher program that exposes teachers to the fundamentals of digital logic design.

  7. High power electromagnetic propulsion research at the NASA Glenn Research Center

    International Nuclear Information System (INIS)

    LaPointe, Michael R.; Sankovic, John M.

    2000-01-01

    Interest in megawatt-class electromagnetic propulsion has been rekindled to support newly proposed high power orbit transfer and deep space mission applications. Electromagnetic thrusters can effectively process megawatts of power to provide a range of specific impulse values to meet diverse in-space propulsion requirements. Potential applications include orbit raising for the proposed multi-megawatt Space Solar Power Satellite and other large commercial and military space platforms, lunar and interplanetary cargo missions in support of the NASA Human Exploration and Development of Space strategic enterprise, robotic deep space exploration missions, and near-term interstellar precursor missions. As NASA's lead center for electric propulsion, the Glenn Research Center is developing a number of high power electromagnetic propulsion technologies to support these future mission applications. Program activities include research on MW-class magnetoplasmadynamic thrusters, high power pulsed inductive thrusters, and innovative electrodeless plasma thruster concepts. Program goals are highlighted, the status of each research area is discussed, and plans are outlined for the continued development of efficient, robust high power electromagnetic thrusters

  8. Patient-centered prioritization of bladder cancer research.

    Science.gov (United States)

    Smith, Angela B; Chisolm, Stephanie; Deal, Allison; Spangler, Alejandra; Quale, Diane Z; Bangs, Rick; Jones, J Michael; Gore, John L

    2018-05-04

    Patient-centered research requires the meaningful involvement of patients and caregivers throughout the research process. The objective of this study was to create a process for sustainable engagement for research prioritization within oncology. From December 2014 to 2016, a network of engaged patients for research prioritization was created in partnership with the Bladder Cancer Advocacy Network (BCAN): the BCAN Patient Survey Network (PSN). The PSN leveraged an online bladder cancer community with additional recruitment through print advertisements and social media campaigns. Prioritized research questions were developed through a modified Delphi process and were iterated through multidisciplinary working groups and a repeat survey. In year 1 of the PSN, 354 patients and caregivers responded to the research prioritization survey; the number of responses increased to 1034 in year 2. The majority of respondents had non-muscle-invasive bladder cancer (NMIBC), and the mean time since diagnosis was 5 years. Stakeholder-identified questions for noninvasive, invasive, and metastatic disease were prioritized by the PSN. Free-text questions were sorted with thematic mapping. Several questions submitted by respondents were among the prioritized research questions. A final prioritized list of research questions was disseminated to various funding agencies, and a highly ranked NMIBC research question was included as a priority area in the 2017 Patient-Centered Outcomes Research Institute announcement of pragmatic trial funding. Patient engagement is needed to identify high-priority research questions in oncology. The BCAN PSN provides a successful example of an engagement infrastructure for annual research prioritization in bladder cancer. The creation of an engagement network sets the groundwork for additional phases of engagement, including design, conduct, and dissemination. Cancer 2018. © 2018 American Cancer Society.

  9. Research Center Renaming Will Honor Senator Domenici

    Science.gov (United States)

    2008-05-01

    New Mexico Tech and the National Radio Astronomy Observatory (NRAO) will rename the observatory's research center on the New Mexico Tech campus to honor retiring U.S. Senator Pete V. Domenici in a ceremony on May 30. The building that serves as the scientific, technical, and administrative center for the Very Large Array (VLA) and Very Long Baseline Array (VLBA) radio telescopes will be named the "Pete V. Domenici Science Operations Center." The building previously was known simply as the "Array Operations Center." "The new name recognizes the strong and effective support for science that has been a hallmark of Senator Domenici's long career in public service," said Dr. Fred Lo, NRAO Director. New Mexico Tech President Daniel H. Lopez said Sen. Domenici has always been a supporter of science and research in Socorro and throughout the state. "He's been a statesman for New Mexico, the nation -- and without exaggeration -- for the world," Lopez said. "Anyone with that track record deserves this recognition." Van Romero, Tech vice president of research and economic development, has served as the university's main lobbyist in Washington, D.C., for more than a decade. He said Sen. Domenici has always been receptive to new ideas and willing to take risks. "Over the years, Sen. Domenici has always had time to listen to our needs and goals," Romero said. "He has served as a champion of New Mexico Tech's causes and we owe him a debt of gratitude for all his efforts over the decades." Originally dedicated in 1988, the center houses offices and laboratories that support VLA and VLBA operations. The center also supports work on the VLA modernization project and on the international Atacama Large Millimeter/submillimeter Array (ALMA) project.
Work on ALMA at the Socorro center and at the ALMA Test Facility at the VLA site west of Socorro has focused on developing and testing equipment to be deployed at the ALMA site in Chile's Atacama

  10. Center for Fuel Cell Research and Applications development phase. Final report

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-12-01

    The deployment and operation of clean power generation is becoming critical as the energy and transportation sectors seek ways to comply with clean air standards and the national deregulation of the utility industry. However, for strategic business decisions, considerable analysis is required over the next few years to evaluate the appropriate application and value added from this emerging technology. To this end the Houston Advanced Research Center (HARC) is proposing a three-year industry-driven project that centers on the creation of "The Center for Fuel Cell Research and Applications." A collaborative laboratory housed at and managed by HARC, the Center will enable a core group of six diverse participating companies--industry participants--to investigate the economic and operational feasibility of proton-exchange-membrane (PEM) fuel cells in a variety of applications (the core project). This document describes the unique benefits of a collaborative approach to PEM applied research, among them a shared laboratory concept leading to cost savings and shared risks as well as access to outstanding research talent and lab facilities. It also describes the benefits provided by implementing the project at HARC, with particular emphasis on HARC's history of managing successful long-term research projects as well as its experience in dealing with industry consortia projects. The Center is also unique in that it will not duplicate the traditional university role of basic research or that of the fuel cell industry in developing commercial products. Instead, the Center will focus on applications, testing, and demonstration of fuel cell technology.

  11. Nuclear Research Center IRT reactor dynamics calculation

    International Nuclear Information System (INIS)

    Aleman Fernandez, J.R.

    1990-01-01

    The main features of the code DIRT for dynamic calculations are described in the paper. With the results obtained by the program, an analysis of the dynamic behaviour of the research reactor IRT of the Nuclear Research Center (CIN) is performed. Different transients were considered, such as variations of the system reactivity, of the coolant inlet temperature, and of the coolant velocity through the reactor core. 3 refs

  12. Electron Microscopy-Data Analysis Specialist | Center for Cancer Research

    Science.gov (United States)

    PROGRAM DESCRIPTION The Cancer Research Technology Program (CRTP) develops and implements emerging technology, cancer biology expertise and research capabilities to accomplish NCI research objectives.  The CRTP is an outward-facing, multi-disciplinary hub purposed to enable the external cancer research community and provides dedicated support to NCI’s intramural Center for

  13. Actions Needed to Ensure Scientific and Technical Information is Adequately Reviewed at Goddard Space Flight Center, Johnson Space Center, Langley Research Center, and Marshall Space Flight Center

    Science.gov (United States)

    2008-01-01

    This audit was initiated in response to a hotline complaint regarding the review, approval, and release of scientific and technical information (STI) at Johnson Space Center. The complainant alleged that Johnson personnel conducting export control reviews of STI were not fully qualified to conduct those reviews and that the reviews often did not occur until after the STI had been publicly released. NASA guidance requires that STI, defined as the results of basic and applied scientific, technical, and related engineering research and development, undergo certain reviews prior to being released outside of NASA or to audiences that include foreign nationals. The process includes technical, national security, export control, copyright, and trade secret (e.g., proprietary data) reviews. The review process was designed to preclude the inappropriate dissemination of sensitive information while ensuring that NASA complies with a requirement of the National Aeronautics and Space Act of 1958 (the Space Act) to provide for the widest practicable and appropriate dissemination of information resulting from NASA research activities. We focused our audit on evaluating the STI review process: specifically, determining whether the roles and responsibilities for the review, approval, and release of STI were adequately defined and documented in NASA and Center-level guidance and whether that guidance was effectively implemented at Goddard Space Flight Center, Johnson Space Center, Langley Research Center, and Marshall Space Flight Center. Johnson was included in the review because it was the source of the initial complaint, and Goddard, Langley, and Marshall were included because those Centers consistently produce significant amounts of STI.

  14. Results of the Community Health Applied Research Network (CHARN) National Research Capacity Survey of Community Health Centers.

    Science.gov (United States)

    Song, Hui; Li, Vivian; Gillespie, Suzanne; Laws, Reesa; Massimino, Stefan; Nelson, Christine; Singal, Robbie; Wagaw, Fikirte; Jester, Michelle; Weir, Rosy Chang

    2015-01-01

    The mission of the Community Health Applied Research Network (CHARN) is to build capacity to carry out Patient-Centered Outcomes Research at community health centers (CHCs), with the ultimate goal to improve health care for vulnerable populations. The CHARN Needs Assessment Staff Survey investigates CHCs' involvement in research, as well as their need for research training and resources. Results will be used to guide future training. The survey was developed and implemented in partnership with CHARN CHCs. Data were collected across CHARN CHCs. Data analysis and reports were conducted by the CHARN data coordinating center (DCC). Survey results highlighted gaps in staff research training, and these gaps varied by staff role. There is considerable variation in research involvement, partnerships, and focus both within and across CHCs. Development of training programs to increase research capacity should be tailored to address the specific needs and roles of staff involved in research.

  15. Storage-Intensive Supercomputing Benchmark Study

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, J; Dossa, D; Gokhale, M; Hysom, D; May, J; Pearce, R; Yoo, A

    2007-10-30

    Critical data science applications requiring frequent access to storage perform poorly on today's computing architectures. This project addresses efficient computation of data-intensive problems in national security and basic science by exploring, advancing, and applying a new form of computing called storage-intensive supercomputing (SISC). Our goal is to enable applications that simply cannot run on current systems and, for a broad range of data-intensive problems, to deliver an order of magnitude improvement in price/performance over today's data-intensive architectures. This technical report documents much of the work done under LDRD 07-ERD-063, Storage Intensive Supercomputing, during the period 05/07-09/07. The following chapters describe: (1) a new file I/O monitoring tool, iotrace, developed to capture the dynamic I/O profiles of Linux processes; (2) an out-of-core graph benchmark for level-set expansion of scale-free graphs; (3) an entity extraction benchmark consisting of a pipeline of eight components; and (4) an image resampling benchmark drawn from the SWarp program in the LSST data processing pipeline. The performance of the graph and entity extraction benchmarks was measured in three different scenarios: data sets residing on the NFS file server and accessed over the network; data sets stored on local disk; and data sets stored on the Fusion I/O parallel NAND Flash array. The image resampling benchmark compared the performance of a software-only implementation with a GPU-accelerated one. In addition to the work reported here, an additional text processing application was developed that used an FPGA to accelerate n-gram profiling for language classification. The n-gram application will be presented at SC07 at the High Performance Reconfigurable Computing Technologies and Applications Workshop. The graph and entity extraction benchmarks were run on a Supermicro server housing the 40 GB parallel NAND Flash disk array, the Fusion-io. The Fusion system specs are as follows
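
    The graph benchmark's kernel, level-set expansion, is essentially breadth-first frontier growth over the graph. The report's out-of-core implementation streams adjacency data from storage, but the core iteration can be sketched in memory as follows; this is an illustrative sketch only, and all names are hypothetical rather than taken from the benchmark code:

```python
def level_set_expansion(adj, seeds):
    """Expand level sets (BFS frontiers) from seed vertices.

    Returns a dict mapping each reachable vertex to its level,
    i.e. its distance in hops from the nearest seed.
    """
    level = {s: 0 for s in seeds}
    frontier = list(seeds)
    depth = 0
    while frontier:
        depth += 1
        next_frontier = []
        for u in frontier:
            for v in adj.get(u, ()):
                if v not in level:
                    level[v] = depth
                    next_frontier.append(v)
        frontier = next_frontier
    return level

# Toy scale-free-ish graph: hub vertex 0 connected to several leaves.
adj = {0: [1, 2, 3], 1: [0, 4], 2: [0], 3: [0], 4: [1]}
print(level_set_expansion(adj, [0]))  # vertex 4 is two hops from the seed
```

    An out-of-core version replaces the in-memory `adj` lookups with chunked reads of an adjacency file, which is where the storage-intensive behavior the benchmark measures comes from.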

  16. Parallel Multivariate Spatio-Temporal Clustering of Large Ecological Datasets on Hybrid Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Sreepathi, Sarat [ORNL]; Kumar, Jitendra [ORNL]; Mills, Richard T. [Argonne National Laboratory]; Hoffman, Forrest M. [ORNL]; Sripathi, Vamsi [Intel Corporation]; Hargrove, William Walter [United States Department of Agriculture (USDA), United States Forest Service (USFS)]

    2017-09-01

    A proliferation of data from vast networks of remote sensing platforms (satellites, unmanned aircraft systems (UAS), airborne, etc.), observational facilities (meteorological, eddy covariance, etc.), state-of-the-art sensors, and simulation models offers unprecedented opportunities for scientific discovery. Unsupervised classification is a widely applied data mining approach to derive insights from such data. However, classification of very large data sets is a complex computational problem that requires efficient numerical algorithms and implementations on high performance computing (HPC) platforms. Additionally, increasing power, space, cooling and efficiency requirements have led to the deployment of hybrid supercomputing platforms with complex architectures and memory hierarchies, like the Titan system at Oak Ridge National Laboratory. The advent of such accelerated computing architectures offers new challenges and opportunities for big data analytics in general and, in our case, for large scale cluster analysis specifically. Although there is an existing body of work on parallel cluster analysis, those approaches do not fully meet the needs imposed by the nature and size of our large data sets. Moreover, they had scaling limitations and were mostly restricted to traditional distributed memory computing platforms. We present a parallel Multivariate Spatio-Temporal Clustering (MSTC) technique based on k-means cluster analysis that can target hybrid supercomputers like Titan. We developed a hybrid MPI, CUDA and OpenACC implementation that can utilize both CPU and GPU resources on computational nodes. We describe performance results on Titan that demonstrate the scalability and efficacy of our approach in processing large ecological data sets.
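
    The k-means core of MSTC can be illustrated with a plain serial Lloyd's iteration: assign each point to its nearest centroid, then recompute each centroid as the mean of its cluster. This is only a sketch of the clustering step, not the paper's hybrid MPI/CUDA/OpenACC implementation, and all names are hypothetical:

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain Lloyd's k-means on tuples of floats."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)  # initialize from k distinct points
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda j: sum((a - b) ** 2
                                      for a, b in zip(p, centroids[j])))
            clusters[j].append(p)
        # Update step: recompute each non-empty centroid as the cluster mean.
        for j, c in enumerate(clusters):
            if c:
                centroids[j] = tuple(sum(x) / len(c) for x in zip(*c))
    return centroids

# Two well-separated groups of 2-D points.
pts = [(0.0, 0.0), (0.1, 0.2), (5.0, 5.0), (5.2, 4.9)]
print(sorted(kmeans(pts, 2)))  # one centroid per natural cluster
```

    In the parallel setting, the assignment step is embarrassingly parallel across points (hence the GPU offload), while the update step requires a reduction across nodes, which is where MPI comes in.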

  17. Project 'European Research Center for Air Pollution Abatement Measures'

    International Nuclear Information System (INIS)

    1985-04-01

    On 5-7 March 1985 the first status report of the project 'European Research Center for Air Pollution Control Measures' took place at the Nuclear Research Center, Karlsruhe. Progress reports were presented on the following topics: assessment and analysis of the impacts of airborne pollutants on forest trees, and their distinction from other potential causes of recent forest dieback; research into atmospheric dispersion, conversion and deposition of airborne pollutants; development and optimization of industrial-technical processes to reduce or avoid emissions; and providing instruments and making recommendations to the industrial and political sectors. This volume is a collection of the work reported there. 42 papers were entered separately. (orig./MG) [de

  18. Proceedings of the fourth symposium of large data management for creative research

    International Nuclear Information System (INIS)

    Ueshima, Yutaka

    2004-03-01

    This report consists of 10 contributed papers from the Fourth Symposium of Large Data Management for Creative Research, which was held at the JAERI Advanced Photon Research Center in Kyoto on September 2-4, 2002. The aim of the symposium was for researchers from the private sector and public research organizations to report on the latest research and technology developments and to exchange information about large data treatment, experiments with visualization, and large data management as a supporting base for research. The symposium comprised speeches, panel discussions, and tours of the laboratory, the supercomputer, and the photon science museum annex. There were seven speeches from the private sector and ten from universities and research organizations, seventeen in total. A total of 117 people participated, including 95 participants from outside JAERI. The symposium showed the present condition and outlook of large data management technology, which is important for computer science and advanced photon research, and it became a valuable forum and an indicator for future research. Five of the presented papers are indexed individually. (J.P.N.)

  19. NASA Airline Operations Research Center

    Science.gov (United States)

    Mogford, Richard H.

    2016-01-01

    This is a PowerPoint presentation on NASA airline operations center (AOC) research. It includes information on using IBM Watson in the AOC. It also reviews a dispatcher decision support tool called the Flight Awareness Collaboration Tool (FACT). FACT gathers information about winter weather onto one screen and includes predictive abilities. It should prove to be useful for airline dispatchers and airport personnel when they manage winter storms and their effect on air traffic. This material is very similar to other previously approved presentations with the same title.

  20. Establishing a national research center on day care

    DEFF Research Database (Denmark)

    Ellegaard, Tomas

    The paper presents and discusses the current formation of a national research center on ECEC. The center is currently being established. It is partly funded by the Danish union of early childhood and youth educators. It is based on cooperation between a number of Danish universities and this nati...... current new public management policies. However, there are also more conflicting issues that emerge in this enterprise – especially on interests, practice relevance and knowledge paradigms....

  1. The Center for Frontiers of Subsurface Energy Security (A 'Life at the Frontiers of Energy Research' contest entry from the 2011 Energy Frontier Research Centers (EFRCs) Summit and Forum)

    International Nuclear Information System (INIS)

    Pope, Gary A.

    2011-01-01

    'The Center for Frontiers of Subsurface Energy Security (CFSES)' was submitted to the 'Life at the Frontiers of Energy Research' video contest at the 2011 Science for Our Nation's Energy Future: Energy Frontier Research Centers (EFRCs) Summit and Forum. Twenty-six EFRCs created short videos to highlight their mission and their work. CFSES is directed by Gary A. Pope at the University of Texas at Austin and partners with Sandia National Laboratories. The Office of Basic Energy Sciences in the U.S. Department of Energy's Office of Science established the 46 Energy Frontier Research Centers (EFRCs) in 2009. These collaboratively-organized centers conduct fundamental research focused on 'grand challenges' and use-inspired 'basic research needs' recently identified in major strategic planning efforts by the scientific community. The overall purpose is to accelerate scientific progress toward meeting the nation's critical energy challenges.

  2. Applied Physics Research at the Idaho Accelerator Center

    International Nuclear Information System (INIS)

    Date, D. S.; Hunt, A. W.; Chouffani, K.; Wells, D. P.

    2011-01-01

    The Idaho Accelerator Center, founded in 1996 and based at Idaho State University, supports research, education, and high technology economic development in the United States. The research center currently has eight electron linear accelerators ranging in energy from 6 to 44 MeV, with the latter capable of picosecond pulses, a 2 MeV positive-ion Van de Graaff, a 4 MV NEC tandem Pelletron, and a pulsed-power 8 kA, 10 MeV electron induction accelerator. Current research emphases include accelerator physics research, accelerator-based medical isotope production, active interrogation techniques for homeland security and nuclear nonproliferation applications, nondestructive testing and materials science studies in support of industry as well as the development of advanced nuclear fuels, pure and applied radiobiology, and medical physics. This talk will highlight three of these areas, including the production of the isotopes 99Tc and 67Cu for medical diagnostics and therapy, as well as two new technologies currently under development for nuclear safeguards and homeland security - namely, laser Compton scattering and the polarized photofission of actinides.

  3. Center for Urban Environmental Research and Education (CUERE)

    Data.gov (United States)

    Federal Laboratory Consortium — The Center for Urban Environmental Research and Education (CUERE) at UMBC was created in 2001 with initial support from the U.S. Environmental Protection Agency and...

  4. Research priorities for a multi-center child abuse pediatrics network - CAPNET.

    Science.gov (United States)

    Lindberg, Daniel M; Wood, Joanne N; Campbell, Kristine A; Scribano, Philip V; Laskey, Antoinette; Leventhal, John M; Pierce, Mary Clyde; Runyan, Desmond K

    2017-03-01

    Although child maltreatment medical research has benefited from several multi-center studies, the new specialty of child abuse pediatrics has not had a sustainable network capable of pursuing multiple, prospective, clinically-oriented studies. The Child Abuse Pediatrics Network (CAPNET) is a new multi-center research network dedicated to child maltreatment medical research. In order to establish a relevant, practical research agenda, we conducted a modified Delphi process to determine the topic areas with highest priority for such a network. Research questions were solicited from members of the Ray E. Helfer Society and study authors and were sorted into topic areas. These topic areas were rated for priority using iterative rounds of ratings and in-person meetings. The topics rated with the highest priority were missed diagnosis and selected/indicated prevention. This agenda can be used to target future multi-center child maltreatment medical research. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. 48 CFR 1335.017 - Federal funded research and development centers.

    Science.gov (United States)

    2010-10-01

    48 CFR 1335.017, Department of Commerce, Special Categories of Contracting, Research and Development Contracting: Federal funded research and development centers (48 Federal Acquisition Regulations System, 2010-10-01 edition).

  6. Staff Scientist - RNA Bioinformatics | Center for Cancer Research

    Science.gov (United States)

    The newly established RNA Biology Laboratory (RBL) at the Center for Cancer Research (CCR), National Cancer Institute (NCI), National Institutes of Health (NIH) in Frederick, Maryland is recruiting a Staff Scientist with strong expertise in RNA bioinformatics to join the Intramural Research Program’s mission of high impact, high reward science. The RBL is the equivalent of an

  7. A RESEARCH REPORT ON OPERATIONAL PLANS FOR DEVELOPING REGIONAL EDUCATIONAL MEDIA RESEARCH CENTERS.

    Science.gov (United States)

    CARPENTER, C.R.; AND OTHERS

    The need and feasibility of establishing a number of "regional educational media research centers with a programmatic orientation" were investigated. A planning group was established to serve as a steering committee. Conferences were held in which groups in research and education in widely distributed regions of the country participated…

  8. Northwest Hazardous Waste Research, Development, and Demonstration Center: Program Plan

    International Nuclear Information System (INIS)

    1988-02-01

    The Northwest Hazardous Waste Research, Development, and Demonstration Center was created as part of an ongoing federal effort to provide technologies and methods that protect human health and welfare and environment from hazardous wastes. The Center was established by the Superfund Amendments and Reauthorization Act (SARA) to develop and adapt innovative technologies and methods for assessing the impacts of and remediating inactive hazardous and radioactive mixed-waste sites. The Superfund legislation authorized $10 million for Pacific Northwest Laboratory to establish and operate the Center over a 5-year period. Under this legislation, Congress authorized $10 million each to support research, development, and demonstration (RD and D) on hazardous and radioactive mixed-waste problems in Idaho, Montana, Oregon, and Washington, including the Hanford Site. In 1987, the Center initiated its RD and D activities and prepared this Program Plan that presents the framework within which the Center will carry out its mission. Section 1.0 describes the Center, its mission, objectives, organization, and relationship to other programs. Section 2.0 describes the Center's RD and D strategy and contains the RD and D objectives, priorities, and process to be used to select specific projects. Section 3.0 contains the Center's FY 1988 operating plan and describes the specific RD and D projects to be carried out and their budgets and schedules. 9 refs., 18 figs., 5 tabs

  9. Final priority; National Institute on Disability and Rehabilitation Research--Rehabilitation Engineering Research Centers. Final priority.

    Science.gov (United States)

    2014-07-09

    The Assistant Secretary for Special Education and Rehabilitative Services announces a priority under the Disability and Rehabilitation Research Projects and Centers Program administered by the National Institute on Disability and Rehabilitation Research (NIDRR). Specifically, we announce a priority for a Rehabilitation Engineering Research Center (RERC) on Improving the Accessibility, Usability, and Performance of Technology for Individuals who are Deaf or Hard of Hearing. The Assistant Secretary may use this priority for competitions in fiscal year (FY) 2014 and later years. We take this action to focus research attention on an area of national need. We intend the priority to contribute to improving the accessibility, usability, and performance of technology for individuals who are deaf or hard of hearing.

  10. Integration of Titan supercomputer at OLCF with ATLAS Production System

    Science.gov (United States)

    Barreiro Megino, F.; De, K.; Jha, S.; Klimentov, A.; Maeno, T.; Nilsson, P.; Oleynik, D.; Padolski, S.; Panitkin, S.; Wells, J.; Wenaus, T.; ATLAS Collaboration

    2017-10-01

    The PanDA (Production and Distributed Analysis) workload management system was developed to meet the scale and complexity of distributed computing for the ATLAS experiment. PanDA-managed resources are distributed worldwide over hundreds of computing sites, with thousands of physicists accessing hundreds of petabytes of data, and the rate of data processing already exceeds an exabyte per year. While PanDA currently uses more than 200,000 cores at well over 100 Grid sites, future LHC data taking runs will require more resources than Grid computing can possibly provide. Additional computing and storage resources are required. Therefore ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. In this paper we describe a project aimed at integration of the ATLAS Production System with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA Pilot framework for job submission to Titan’s batch queues and local data management, with lightweight MPI wrappers to run single-node workloads in parallel on Titan’s multi-core worker nodes. It provides for running of standard ATLAS production jobs on unused resources (backfill) on Titan. The system has already allowed ATLAS to collect millions of core-hours per month on Titan and execute hundreds of thousands of jobs, while simultaneously improving Titan’s utilization efficiency. We discuss the details of the implementation, current experience with running the system, and future plans aimed at improvements in scalability and efficiency. Notice: This manuscript has been authored by employees of Brookhaven Science Associates, LLC under Contract No. DE-AC02-98CH10886 with the U.S. Department of Energy. The publisher by accepting the manuscript for publication acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to
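
    The lightweight-wrapper idea — one driver fanning independent single-node payloads out across many workers and gathering their results — can be sketched as follows. This uses a thread pool as a stand-in for the MPI ranks and is not the actual PanDA Pilot code; all names are hypothetical:

```python
from concurrent.futures import ThreadPoolExecutor

def run_payload(rank):
    # Stand-in for one independent single-node job; in the real wrapper
    # each MPI rank would exec the ATLAS payload on its own worker node.
    return rank, sum(i * i for i in range(1000 + rank))

def backfill_wrapper(n_ranks):
    """Fan one independent payload out per 'rank' and gather results,
    mimicking a lightweight MPI wrapper over single-node workloads."""
    with ThreadPoolExecutor(max_workers=n_ranks) as ex:
        return dict(ex.map(run_payload, range(n_ranks)))

results = backfill_wrapper(4)
print(sorted(results))  # [0, 1, 2, 3]
```

    The key property, as in the Titan setup, is that the payloads never communicate with each other: the wrapper only launches them and collects exit status, so a single batch allocation can absorb many ordinary Grid-style jobs.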

  11. Qualitative Methods in Patient-Centered Outcomes Research.

    Science.gov (United States)

    Vandermause, Roxanne; Barg, Frances K; Esmail, Laura; Edmundson, Lauren; Girard, Samantha; Perfetti, A Ross

    2017-02-01

    The Patient-Centered Outcomes Research Institute (PCORI), created to fund research guided by patients, caregivers, and the broader health care community, offers a new research venue. Many (41 of 50) first funded projects involved qualitative research methods. This study was completed to examine the current state of the science of qualitative methodologies used in PCORI-funded research. Principal investigators participated in phenomenological interviews to learn (a) how do researchers using qualitative methods experience seeking funding for, implementing and disseminating their work; and (b) how may qualitative methods advance the quality and relevance of evidence for patients? Results showed the experience of doing qualitative research in the current research climate as "Being a bona fide qualitative researcher: Staying true to research aims while negotiating challenges," with overlapping patterns: (a) researching the elemental, (b) expecting surprise, and (c) pushing boundaries. The nature of qualitative work today was explicitly described and is rendered in this article.

  12. Use of QUADRICS supercomputer as embedded simulator in emergency management systems

    International Nuclear Information System (INIS)

    Bove, R.; Di Costanzo, G.; Ziparo, A.

    1996-07-01

    The experience gained in implementing MRBT, an atmospheric dispersion model for short-duration releases, is reported. The model was implemented on a QUADRICS-Q1 supercomputer. First, a description of the MRBT model is given: it is an analytical model for studying the spreading of light gases released into the atmosphere by accidental releases. The solution of the diffusion equation is Gaussian and yields the concentration of the released pollutant as a function of space and time. The QUADRICS architecture is then introduced and the implementation of the model on it is described. Finally, the integration of the QUADRICS-based model as an embedded simulator in an emergency management system is considered.
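
    The Gaussian solution referred to in the abstract can be written down concretely. The sketch below implements the textbook Gaussian puff formula for an instantaneous release advected by a uniform wind; it illustrates the model class only, not the MRBT code itself, and all parameter names are hypothetical:

```python
import math

def gaussian_puff(q, x, y, z, t, u, sx, sy, sz):
    """Concentration at (x, y, z) and time t from an instantaneous
    release of q units at the origin, advected at wind speed u along x,
    with dispersion parameters sx, sy, sz (textbook Gaussian puff)."""
    norm = q / ((2 * math.pi) ** 1.5 * sx * sy * sz)
    arg = ((x - u * t) ** 2 / (2 * sx ** 2)
           + y ** 2 / (2 * sy ** 2)
           + z ** 2 / (2 * sz ** 2))
    return norm * math.exp(-arg)

# The concentration peak travels with the wind: at t = 10 s and
# u = 2 m/s the maximum sits at x = 20 m, not at the release point.
c_peak = gaussian_puff(1.0, 20.0, 0.0, 0.0, 10.0, 2.0, 5.0, 5.0, 2.0)
c_origin = gaussian_puff(1.0, 0.0, 0.0, 0.0, 10.0, 2.0, 5.0, 5.0, 2.0)
print(c_peak > c_origin)  # True
```

    Because each grid point can be evaluated independently, a formula of this shape maps naturally onto a SIMD machine like the QUADRICS, which is presumably what made the embedded-simulator approach attractive.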

  13. MILC Code Performance on High End CPU and GPU Supercomputer Clusters

    Science.gov (United States)

    DeTar, Carleton; Gottlieb, Steven; Li, Ruizi; Toussaint, Doug

    2018-03-01

    With recent developments in parallel supercomputing architecture, many core, multi-core, and GPU processors are now commonplace, resulting in more levels of parallelism, memory hierarchy, and programming complexity. It has been necessary to adapt the MILC code to these new processors starting with NVIDIA GPUs, and more recently, the Intel Xeon Phi processors. We report on our efforts to port and optimize our code for the Intel Knights Landing architecture. We consider performance of the MILC code with MPI and OpenMP, and optimizations with QOPQDP and QPhiX. For the latter approach, we concentrate on the staggered conjugate gradient and gauge force. We also consider performance on recent NVIDIA GPUs using the QUDA library.
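
    The staggered conjugate gradient mentioned above is, at its core, the standard CG iteration for a symmetric positive-definite operator. A minimal serial sketch of that iteration follows; it is illustrative only, not the MILC, QOPQDP, or QPhiX implementation:

```python
def conjugate_gradient(matvec, b, tol=1e-10, max_iter=200):
    """Textbook CG: solve A x = b for a symmetric positive-definite
    operator given only as a matrix-vector product `matvec`."""
    x = [0.0] * len(b)
    r = list(b)              # residual r = b - A x, with x = 0
    p = list(r)              # initial search direction
    rs = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        ap = matvec(p)
        alpha = rs / sum(pi * api for pi, api in zip(p, ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

# 2x2 SPD system: A = [[4, 1], [1, 3]], b = [1, 2].
A = [[4.0, 1.0], [1.0, 3.0]]
mv = lambda v: [sum(a * vi for a, vi in zip(row, v)) for row in A]
x = conjugate_gradient(mv, [1.0, 2.0])
print(round(x[0], 6), round(x[1], 6))
```

    In lattice QCD the operator is the staggered Dirac normal operator and the vectors live on the lattice sites, so almost all time goes into `matvec` — which is why the ports discussed above focus on optimizing exactly that kernel for KNL and GPUs.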

  14. MILC Code Performance on High End CPU and GPU Supercomputer Clusters

    Directory of Open Access Journals (Sweden)

    DeTar Carleton

    2018-01-01

    Full Text Available With recent developments in parallel supercomputing architecture, many core, multi-core, and GPU processors are now commonplace, resulting in more levels of parallelism, memory hierarchy, and programming complexity. It has been necessary to adapt the MILC code to these new processors starting with NVIDIA GPUs, and more recently, the Intel Xeon Phi processors. We report on our efforts to port and optimize our code for the Intel Knights Landing architecture. We consider performance of the MILC code with MPI and OpenMP, and optimizations with QOPQDP and QPhiX. For the latter approach, we concentrate on the staggered conjugate gradient and gauge force. We also consider performance on recent NVIDIA GPUs using the QUDA library.

  15. An Overview of the Antenna Measurement Facilities at the NASA Glenn Research Center

    Science.gov (United States)

    Lambert, Kevin M.; Anzic, Godfrey; Zakrajsek, Robert J.; Zaman, Afroz J.

    2002-10-01

    For the past twenty years, the NASA Glenn Research Center (formerly Lewis Research Center) in Cleveland, Ohio, has developed and maintained facilities for the evaluation of antennas. This effort has been in support of the work being done at the center in the research and development of space communication systems. The wide variety of antennas that have been considered for these systems resulted in a need for several types of antenna ranges at the Glenn Research Center. Four ranges, which are part of the Microwave Systems Laboratory, are the responsibility of the staff of the Applied RF Technology Branch. A general description of these ranges is provided in this paper.

  16. Electron Microscopist | Center for Cancer Research

    Science.gov (United States)

    PROGRAM DESCRIPTION The Cancer Research Technology Program (CRTP) develops and implements emerging technology, cancer biology expertise and research capabilities to accomplish NCI research objectives. The CRTP is an outward-facing, multi-disciplinary hub purposed to enable the external cancer research community and provides dedicated support to NCI’s intramural Center for Cancer Research (CCR). The dedicated units provide electron microscopy, protein characterization, protein expression, optical microscopy and genetics. These research efforts are an integral part of CCR at the Frederick National Laboratory for Cancer Research (FNLCR). CRTP scientists also work collaboratively with intramural NCI investigators to provide research technologies and expertise. KEY ROLES/RESPONSIBILITIES - THIS POSITION IS CONTINGENT UPON FUNDING APPROVAL The Electron Microscopist will: operate ultramicrotomes (Leica) and other instrumentation related to the preparation of embedded samples for EM (TEM and SEM); operate TEM microscopes (specifically Hitachi, FEI T20 and FEI T12) as well as SEM microscopes (Hitachi), with tasks including loading samples, screening, and performing data collection for a variety of samples, from cells to proteins; manage maintenance for the TEM and SEM microscopes; and provide technical advice to investigators on sample preparation and data collection.

  17. SUPERCOMPUTER SIMULATION OF CRITICAL PHENOMENA IN COMPLEX SOCIAL SYSTEMS

    Directory of Open Access Journals (Sweden)

    Petrus M.A. Sloot

    2014-09-01

    Full Text Available The paper describes the problem of computer simulation of critical phenomena in complex social systems on petascale computing systems within a complex networks approach. A three-layer system of nested models of complex networks is proposed, including an aggregated analytical model to identify critical phenomena, a detailed model of individualized network dynamics, and a model to adjust the topological structure of a complex network. A scalable parallel algorithm covering all layers of complex network simulation is proposed. Performance of the algorithm is studied on different supercomputing systems. The issues of software and information infrastructure for complex network simulation are discussed, including organization of distributed calculations, crawling data in social networks, and results visualization. Applications of the developed methods and technologies are considered, including simulation of criminal network disruption, fast rumor spreading in social networks, evolution of financial networks, and epidemic spreading.
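
    As a minimal illustration of the rumor-spreading application mentioned above, the sketch below runs an SI-style cascade on a small contact network: each step, every informed node informs each uninformed neighbor with some probability. It is a toy model, not the paper's three-layer system, and all names are hypothetical:

```python
import random

def simulate_spread(adj, seed_node, p_transmit, steps, rng_seed=1):
    """SI-style spreading on a network given as an adjacency dict.

    Returns the set of informed nodes after the given number of steps.
    """
    rng = random.Random(rng_seed)
    informed = {seed_node}
    for _ in range(steps):
        newly = set()
        for u in informed:
            for v in adj[u]:
                if v not in informed and rng.random() < p_transmit:
                    newly.add(v)
        informed |= newly
    return informed

# Small ring network of 6 nodes; with p = 1 the rumor spreads
# deterministically one hop in each direction per step.
adj = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
spread = simulate_spread(adj, 0, 1.0, 3)
print(len(spread))  # all 6 nodes reached within 3 steps
```

    The critical phenomena studied in the paper appear when `p_transmit` crosses a threshold that depends on the network topology, which is why the aggregated analytical layer and the individualized dynamics layer must be coupled.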

  18. Federal Research: Opportunities Exist to Improve the Management and Oversight of Federally Funded Research and Development Centers

    National Research Council Canada - National Science Library

    Woods, William; Mittal, Anu; Neumann, John; Williams, Cheryl; Candon, Sharron; Sterling, Suzanne; Wade, Jacqueline; Zwanzig, Peter

    2008-01-01

    .... FFRDCs -- including laboratories, studies and analyses centers, and systems engineering centers -- conduct research in military space programs, nanotechnology, microelectronics, nuclear warfare...

  19. Present status and future plans of the National Atomic Research Center of Malaysia

    International Nuclear Information System (INIS)

    Rashid, N.K.

    1980-01-01

    The Malaysian Atomic Research Center (PUSPATI) was established in 1972 and operates under the Ministry of Science, Technology and the Environment. It is the first research center of its kind in Malaysia. Some of the objectives of this center are: operation and maintenance of the research reactor; research and development in reactor science and technology; production of short-lived radioisotopes for use in medicine, agriculture and industry; coordination of the utilization of the reactor and its experimental facilities among the various research institutes and universities; training in the nuclear radiation field; and personnel monitoring and environmental surveillance.

  20. High Performance Computing in Science and Engineering '16 : Transactions of the High Performance Computing Center, Stuttgart (HLRS) 2016

    CERN Document Server

    Kröner, Dietmar; Resch, Michael

    2016-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS) in 2016. The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  1. Applied high-speed imaging for the icing research program at NASA Lewis Research Center

    Science.gov (United States)

    Slater, Howard; Owens, Jay; Shin, Jaiwon

    1992-01-01

    The Icing Research Tunnel at NASA Lewis Research Center provides scientists a scaled, controlled environment to simulate natural icing events. The closed-loop, low speed, refrigerated wind tunnel offers the experimental capability to test for icing certification requirements, analytical model validation and calibration techniques, cloud physics instrumentation refinement, advanced ice protection systems, and rotorcraft icing methodology development. The test procedures for these objectives all require a high degree of visual documentation, both in real-time data acquisition and post-test image processing. Information is provided to scientific, technical, and industrial imaging specialists, as well as to research personnel, about the high-speed and conventional imaging systems used in the recent ice protection technology program. Various imaging examples for some of the tests are presented. Additional imaging examples are available from the NASA Lewis Research Center's Photographic and Printing Branch.

  2. Using curriculum vitae to compare some impacts of NSF research grants with research center funding

    OpenAIRE

    Monica Gaughan; Barry Bozeman

    2002-01-01

    While traditional grants remain central in US federal support of academic scientists and engineers, the role of multidisciplinary NSF Centers is growing. Little is known about how funding through these Centers affects scientific output or (as is an NSF aim) increases academic collaboration with industry. This paper tests the use of CVs to examine how Center funding affects researchers' publication rates and their obtaining industry grants. Copyright Beech Tree Publishing.

  3. Scientist, Single Cell Analysis Facility | Center for Cancer Research

    Science.gov (United States)

    The Cancer Research Technology Program (CRTP) develops and implements emerging technology, cancer biology expertise and research capabilities to accomplish NCI research objectives.  The CRTP is an outward-facing, multi-disciplinary hub purposed to enable the external cancer research community and provides dedicated support to NCI’s intramural Center for Cancer Research (CCR).  The dedicated units provide electron microscopy, protein characterization, protein expression, optical microscopy and nextGen sequencing. These research efforts are an integral part of CCR at the Frederick National Laboratory for Cancer Research (FNLCR).  CRTP scientists also work collaboratively with intramural NCI investigators to provide research technologies and expertise. KEY ROLES AND RESPONSIBILITIES We are seeking a highly motivated Scientist II to join the newly established Single Cell Analysis Facility (SCAF) of the Center for Cancer Research (CCR) at NCI. The SCAF will house state of the art single cell sequencing technologies including 10xGenomics Chromium, BD Genomics Rhapsody, DEPPArray, and other emerging single cell technologies. The Scientist: Will interact with close to 200 laboratories within the CCR to design and carry out single cell experiments for cancer research Will work on single cell isolation/preparation from various tissues and cells and related NexGen sequencing library preparation Is expected to author publications in peer reviewed scientific journals

  4. Research Problems in Data Curation: Outcomes from the Data Curation Education in Research Centers Program

    Science.gov (United States)

    Palmer, C. L.; Mayernik, M. S.; Weber, N.; Baker, K. S.; Kelly, K.; Marlino, M. R.; Thompson, C. A.

    2013-12-01

    The need for data curation is being recognized in numerous institutional settings as national research funding agencies extend data archiving mandates to cover more types of research grants. Data curation, however, is not only a practical challenge. It presents many conceptual and theoretical challenges that must be investigated to design appropriate technical systems, social practices and institutions, policies, and services. This presentation reports on outcomes from an investigation of research problems in data curation conducted as part of the Data Curation Education in Research Centers (DCERC) program. DCERC is developing a new model for educating data professionals to contribute to scientific research. The program is organized around foundational courses and field experiences in research and data centers for both master's and doctoral students. The initiative is led by the Graduate School of Library and Information Science at the University of Illinois at Urbana-Champaign, in collaboration with the School of Information Sciences at the University of Tennessee, and library and data professionals at the National Center for Atmospheric Research (NCAR). At the doctoral level DCERC is educating future faculty and researchers in data curation and establishing a research agenda to advance the field. The doctoral seminar, Research Problems in Data Curation, was developed and taught in 2012 by the DCERC principal investigator and two doctoral fellows at the University of Illinois. It was designed to define the problem space of data curation, examine relevant concepts and theories related to both technical and social perspectives, and articulate research questions that are either unexplored or under theorized in the current literature. There was a particular emphasis on the Earth and environmental sciences, with guest speakers brought in from NCAR, National Snow and Ice Data Center (NSIDC), and Rensselaer Polytechnic Institute. Through the assignments, students

  5. Symbolic simulation of engineering systems on a supercomputer

    International Nuclear Information System (INIS)

    Ragheb, M.; Gvillo, D.; Makowitz, H.

    1986-01-01

Model-Based Production-Rule systems for analysis are developed for the symbolic simulation of complex engineering systems on a CRAY X-MP Supercomputer. The Fault-Tree and Event-Tree Analysis methodologies from Systems Analysis are used for problem representation and are coupled to the Rule-Based System Paradigm from Knowledge Engineering to provide modelling of engineering devices. Modelling is based on knowledge of the structure and function of the device rather than on human expertise alone. To implement the methodology, we developed a Production-Rule Analysis System, HAL-1986, that uses both backward-chaining and forward-chaining. The inference engine uses an induction-deduction-oriented antecedent-consequent logic and is programmed in Portable Standard Lisp (PSL). The inference engine is general and can accommodate modifications and additions to the knowledge base. The methodologies are demonstrated using a model for the identification of faults, and subsequent recovery from abnormal situations, in Nuclear Reactor Safety Analysis. The use of these methodologies for the prognostication of future device responses under operational and accident conditions, using coupled symbolic and procedural programming, is discussed.
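
The production-rule approach described in this abstract can be illustrated with a minimal forward-chaining sketch. This is not the HAL-1986 system itself (which was written in Portable Standard Lisp and also supports backward chaining); the rule and fact names below are hypothetical fault-tree-style examples.

```python
# Minimal forward-chaining production-rule sketch (illustrative only).
# Rules are (antecedents, consequent) pairs, as might be derived from a
# fault tree; all names here are hypothetical.
RULES = [
    ({"pump_failure", "valve_stuck"}, "loss_of_coolant"),
    ({"loss_of_coolant"}, "core_temperature_rise"),
    ({"core_temperature_rise", "scram_failure"}, "abnormal_event"),
]

def forward_chain(facts, rules):
    """Repeatedly fire any rule whose antecedents all hold until no new
    fact can be derived; return the closed set of facts."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedents, consequent in rules:
            if antecedents <= facts and consequent not in facts:
                facts.add(consequent)
                changed = True
    return facts

derived = forward_chain({"pump_failure", "valve_stuck", "scram_failure"}, RULES)
print("abnormal_event" in derived)  # the fault propagates through the tree
```

A backward-chaining engine would instead start from a goal fact and recurse over rules whose consequent matches it, which is the other inference direction the abstract mentions.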

  6. Research Summaries: The 11th Biennial Rivkin Center Ovarian Cancer Research Symposium.

    Science.gov (United States)

    Armstrong, Deborah K

    2017-11-01

    In September 2016, the 11th biennial ovarian cancer research symposium was presented by the Rivkin Center for Ovarian Cancer and the American Association for Cancer Research. The 2016 symposium focused on 4 broad areas of research: Mechanisms of Initiation and Progression of Ovarian Cancer, Tumor Microenvironment and Models of Ovarian Cancer, Detection and Prevention of Ovarian Cancer, and Novel Therapeutics for Ovarian Cancer. The presentations and abstracts from each of these areas are reviewed in this supplement to the International Journal of Gynecologic Oncology.

  7. 48 CFR 3035.017 - Federally Funded Research and Development Centers.

    Science.gov (United States)

    2010-10-01

    ... CONTRACTING RESEARCH AND DEVELOPMENT CONTRACTING Scope of Part 3035.017 Federally Funded Research and... use of Federally Funded Research and Development Centers (FFRDCs) in (FAR) 48 CFR 35.017. [71 FR 25771... 48 Federal Acquisition Regulations System 7 2010-10-01 2010-10-01 false Federally Funded Research...

  8. 75 FR 59720 - Methodology Committee of the Patient-Centered Outcomes Research Institute (PCORI)

    Science.gov (United States)

    2010-09-28

    ... GOVERNMENT ACCOUNTABILITY OFFICE Methodology Committee of the Patient-Centered Outcomes Research... responsibility for appointing not more than 15 members to a Methodology Committee of the Patient- Centered Outcomes Research Institute. In addition, the Directors of the Agency for Healthcare Research and Quality...

  9. 34 CFR 350.30 - What requirements must a Rehabilitation Engineering Research Center meet?

    Science.gov (United States)

    2010-07-01

    ... 34 Education 2 2010-07-01 2010-07-01 false What requirements must a Rehabilitation Engineering... DISABILITY AND REHABILITATION RESEARCH PROJECTS AND CENTERS PROGRAM What Rehabilitation Engineering Research Centers Does the Secretary Assist? § 350.30 What requirements must a Rehabilitation Engineering Research...

10. From CERN, a data flow averaging 600 megabytes per second for ten consecutive days

    CERN Multimedia

    2005-01-01

The supercomputer Grid successfully met its first technological challenge. Eight supercomputing centers sustained a continuous flow of data over the internet from CERN in Geneva, directing it to seven centers in Europe and the United States.

  11. List of scientific publications, Nuclear Research Center Karlsruhe 1984

    International Nuclear Information System (INIS)

    1985-04-01

The report abstracted contains a list of works published in 1984. Papers not yet in print are listed separately. Patent entries take account of all patent rights granted or published in 1984, i.e. patents or patent specifications. The list of publications is classified by institutes. Each project category lists only the reports and studies carried out and published by members of the project staff concerned. Also listed are publications related to research and development projects of the 'product engineering project' (PFT/Projekt 'Fertigungstechnik'). With different companies and institutes cooperating, PFT is sponsored by Nuclear Research Center Karlsruhe GmbH, which is also responsible for printing the above publications. Moreover, the list contains the publications of a branch of the Bundesforschungsanstalt fuer Ernaehrung which is located on the KfK premises. The final chapter summarizes publications dealing with guest experiments and research at Nuclear Research Center Karlsruhe. (orig./PW) [de

  12. The Strategic Electrochemical Research Center in Denmark

    DEFF Research Database (Denmark)

    Mogensen, Mogens Bjerg; Hansen, Karin Vels

    2011-01-01

    A 6-year strategic electrochemistry research center (SERC) in fundamental and applied aspects of electrochemical cells with a main emphasis on solid oxide cells was started in Denmark on January 1st, 2007 in cooperation with other Danish and Swedish Universities. Furthermore, 8 Danish companies...... are presented. ©2011 COPYRIGHT ECS - The Electrochemical Society...

  13. Developmental Scientist | Center for Cancer Research

    Science.gov (United States)

    PROGRAM DESCRIPTION Within the Leidos Biomedical Research Inc.’s Clinical Research Directorate, the Clinical Monitoring Research Program (CMRP) provides high-quality comprehensive and strategic operational support to the high-profile domestic and international clinical research initiatives of the National Cancer Institute (NCI), National Institute of Allergy and Infectious Diseases (NIAID), Clinical Center (CC), National Institute of Heart, Lung and Blood Institute (NHLBI), National Institute of Arthritis and Musculoskeletal and Skin Diseases (NIAMS), National Center for Advancing Translational Sciences (NCATS), National Institute of Neurological Disorders and Stroke (NINDS), and the National Institute of Mental Health (NIMH). Since its inception in 2001, CMRP’s ability to provide rapid responses, high-quality solutions, and to recruit and retain experts with a variety of backgrounds to meet the growing research portfolios of NCI, NIAID, CC, NHLBI, NIAMS, NCATS, NINDS, and NIMH has led to the considerable expansion of the program and its repertoire of support services. CMRP’s support services are strategically aligned with the program’s mission to provide comprehensive, dedicated support to assist National Institutes of Health researchers in providing the highest quality of clinical research in compliance with applicable regulations and guidelines, maintaining data integrity, and protecting human subjects. For the scientific advancement of clinical research, CMRP services include comprehensive clinical trials, regulatory, pharmacovigilance, protocol navigation and development, and programmatic and project management support for facilitating the conduct of 400+ Phase I, II, and III domestic and international trials on a yearly basis. These trials investigate the prevention, diagnosis, treatment of, and therapies for cancer, influenza, HIV, and other infectious diseases and viruses such as hepatitis C, tuberculosis, malaria, and Ebola virus; heart, lung, and

  14. Nuclear Criticality Experimental Research Center (NCERC) Overview

    Energy Technology Data Exchange (ETDEWEB)

    Goda, Joetta Marie [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Grove, Travis Justin [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Hayes, David Kirk [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Myers, William L. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Sanchez, Rene Gerardo [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-03-03

The mission of the National Criticality Experiments Research Center (NCERC) at the Device Assembly Facility (DAF) is to conduct experiments and training with critical assemblies and fissionable material at or near criticality in order to explore reactivity phenomena, and to operate the assemblies in the regions from subcritical through delayed critical. One critical assembly, Godiva-IV, is designed to operate above prompt critical. NCERC is our nation's only general-purpose critical experiments facility and one of only a few that remain operational throughout the world. This presentation discusses the history of NCERC, the general activities that make up work at NCERC, and the various government programs and missions that NCERC supports. Recent activities at NCERC will be reviewed, with a focus on demonstrating how NCERC meets national security mission goals using engineering fundamentals. In particular, there will be a focus on engineering theory and design and on applications of engineering fundamentals at NCERC. NCERC activities that relate to engineering education will also be examined.

  15. Waste management at the Karlsruhe Nuclear Research Center

    International Nuclear Information System (INIS)

    Hoehlein, G.; Lins, W.

    1982-01-01

    In the Karlsruhe Nuclear Research Center the responsibility for waste management is concentrated in the Decontamination Department which serves to collect and transport all liquid waste and solid material from central areas in the center for further waste treatment, clean radioactive equipment for repair and re-use or for recycling of material, remove from the liquid effluents any radioactive and chemical pollutants as specified in legislation on the protection of waters, convert radioactive wastes into mechanically and chemically stable forms allowing them to be transported into a repository. (orig./RW)

  16. Progress report of Cekmece Nuclear Research and Training Center for 1981

    International Nuclear Information System (INIS)

    1982-01-01

    Presented are the research works carried out in 1981 in Energy, Radiological Safety, Radioisotope, Application of Nuclear Techniques and Basic Research of Cekmece Nuclear Research and Training Center. (author)

  17. Large scale simulations of lattice QCD thermodynamics on Columbia Parallel Supercomputers

    International Nuclear Information System (INIS)

    Ohta, Shigemi

    1989-01-01

The Columbia Parallel Supercomputer project aims at the construction of a parallel processing, multi-gigaflop computer optimized for numerical simulations of lattice QCD. The project has three stages: a 16-node, 1/4 GF machine completed in April 1985; a 64-node, 1 GF machine completed in August 1987; and a 256-node, 16 GF machine now under construction. The machines all share a common architecture: a two-dimensional torus formed from a rectangular array of N1 x N2 independent and identical processors. A processor is capable of operating in a multi-instruction multi-data mode, except for periods of synchronous interprocessor communication with its four nearest neighbors. Here the thermodynamics simulations on the two working machines are reported. (orig./HSI)
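
The two-dimensional torus described above can be sketched as a neighbor-addressing function: each processor (i, j) communicates with four nearest neighbors, with coordinates wrapping around in both dimensions. The dimensions below are illustrative, not the machines' actual communication code.

```python
# Sketch of nearest-neighbor addressing on an N1 x N2 two-dimensional torus,
# as in the Columbia machines' rectangular processor array (illustrative).

def torus_neighbors(i, j, n1, n2):
    """Return the four nearest neighbors of processor (i, j); coordinates
    wrap around in both dimensions, closing the mesh into a torus."""
    return [
        ((i - 1) % n1, j),  # up
        ((i + 1) % n1, j),  # down
        (i, (j - 1) % n2),  # left
        (i, (j + 1) % n2),  # right
    ]

# On a 4 x 4 torus, the corner processor (0, 0) wraps to (3, 0) and (0, 3):
print(torus_neighbors(0, 0, 4, 4))  # [(3, 0), (1, 0), (0, 3), (0, 1)]
```

The wraparound is what gives every processor exactly four neighbors, so the synchronous communication phases mentioned in the abstract are uniform across the array.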

  18. Research Centers & Consortia | College of Engineering & Applied Science

    Science.gov (United States)


  19. Cancer Center Clinic and Research Team Perceptions of Identity and Interactions.

    Science.gov (United States)

    Reimer, Torsten; Lee, Simon J Craddock; Garcia, Sandra; Gill, Mary; Duncan, Tobi; Williams, Erin L; Gerber, David E

    2017-12-01

    Conduct of cancer clinical trials requires coordination and cooperation among research and clinic teams. Diffusion of and confusion about responsibility may occur if team members' perceptions of roles and objectives do not align. These factors are critical to the success of cancer centers but are poorly studied. We developed a survey adapting components of the Adapted Team Climate Inventory, Measure of Team Identification, and Measure of In-Group Bias. Surveys were administered to research and clinic staff at a National Cancer Institute-designated comprehensive cancer center. Data were analyzed using descriptive statistics, t tests, and analyses of variance. Responses were received from 105 staff (clinic, n = 55; research, n = 50; 61% response rate). Compared with clinic staff, research staff identified more strongly with their own group ( P teams, we also identified key differences, including perceptions of goal clarity and sharing, understanding and alignment with cancer center goals, and importance of outcomes. Future studies should examine how variation in perceptions and group dynamics between clinic and research teams may impact function and processes of cancer care.

  20. Tennessee Valley Authority National Fertilizer and Environmental Research Center

    International Nuclear Information System (INIS)

    Gautney, J.

    1991-01-01

The National Fertilizer and Environmental Research Center (NFERC) is a unique part of the Tennessee Valley Authority (TVA), a government agency created by an Act of Congress in 1933. The Center, located in Muscle Shoals, Alabama, is a national laboratory for research, development, education and commercialization for fertilizers and related agricultural chemicals including their economic and environmentally safe use, renewable fuel and chemical technologies, alternatives for solving environmental/waste problems, and technologies which support national defense. NFERC projects in the pesticide waste minimization/treatment/disposal areas include ''Model Site Demonstrations and Site Assessments,'' ''Development of Waste Treatment and Site Remediation Technologies for Fertilizer/Agrichemical Dealers,'' ''Development of a Dealer Information/Education Program,'' and ''Constructed Wetlands.''

  1. Unique Methodologies for Nano/Micro Manufacturing Job Training Via Desktop Supercomputer Modeling and Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Kimball, Clyde [Northern Illinois Univ., DeKalb, IL (United States); Karonis, Nicholas [Northern Illinois Univ., DeKalb, IL (United States); Lurio, Laurence [Northern Illinois Univ., DeKalb, IL (United States); Piot, Philippe [Northern Illinois Univ., DeKalb, IL (United States); Xiao, Zhili [Northern Illinois Univ., DeKalb, IL (United States); Glatz, Andreas [Northern Illinois Univ., DeKalb, IL (United States); Pohlman, Nicholas [Northern Illinois Univ., DeKalb, IL (United States); Hou, Minmei [Northern Illinois Univ., DeKalb, IL (United States); Demir, Veysel [Northern Illinois Univ., DeKalb, IL (United States); Song, Jie [Northern Illinois Univ., DeKalb, IL (United States); Duffin, Kirk [Northern Illinois Univ., DeKalb, IL (United States); Johns, Mitrick [Northern Illinois Univ., DeKalb, IL (United States); Sims, Thomas [Northern Illinois Univ., DeKalb, IL (United States); Yin, Yanbin [Northern Illinois Univ., DeKalb, IL (United States)

    2012-11-21

    This project establishes an initiative in high speed (Teraflop)/large-memory desktop supercomputing for modeling and simulation of dynamic processes important for energy and industrial applications. It provides a training ground for employment of current students in an emerging field with skills necessary to access the large supercomputing systems now present at DOE laboratories. It also provides a foundation for NIU faculty to quantum leap beyond their current small cluster facilities. The funding extends faculty and student capability to a new level of analytic skills with concomitant publication avenues. The components of the Hewlett Packard computer obtained by the DOE funds create a hybrid combination of a Graphics Processing System (12 GPU/Teraflops) and a Beowulf CPU system (144 CPU), the first expandable via the NIU GAEA system to ~60 Teraflops integrated with a 720 CPU Beowulf system. The software is based on access to the NVIDIA/CUDA library and the ability through MATLAB multiple licenses to create additional local programs. A number of existing programs are being transferred to the CPU Beowulf Cluster. Since the expertise necessary to create the parallel processing applications has recently been obtained at NIU, this effort for software development is in an early stage. The educational program has been initiated via formal tutorials and classroom curricula designed for the coming year. Specifically, the cost focus was on hardware acquisitions and appointment of graduate students for a wide range of applications in engineering, physics and computer science.

  2. Overview of Dynamics Integration Research (DIR) program at Langley Research Center

    Science.gov (United States)

    Sliwa, Steven M.; Abel, Irving

    1989-01-01

    Research goals and objectives for an ongoing activity at Langley Research Center (LaRC) are described. The activity is aimed principally at dynamics optimization for aircraft. The effort involves active participation by the Flight Systems, Structures, and Electronics directorates at LaRC. The Functional Integration Technology (FIT) team has been pursuing related goals since 1985. A prime goal has been the integration and optimization of vehicle dynamics through collaboration at the basic principles or equation level. Some significant technical progress has been accomplished since then and is reflected here. An augmentation for this activity, Dynamics Integration Research (DIR), has been proposed to NASA Headquarters and is being considered for funding in FY 1990 or FY 1991.

  3. Data and Data Products for Climate Research: Web Services at the Asia-Pacific Data-Research Center (APDRC)

    Science.gov (United States)

    DeCarlo, S.; Potemra, J. T.; Wang, K.

    2012-12-01

    The International Pacific Research Center (IPRC) at the University of Hawaii maintains a data center for climate studies called the Asia-Pacific Data-Research Center (APDRC). This data center was designed within a center of excellence in climate research with the intention of serving the needs of the research scientist. The APDRC provides easy access to a wide collection of climate data and data products for a wide variety of users. The data center maintains an archive of approximately 100 data sets including in-situ and remote data, as well as a range of model-based output. All data are available via on-line browsing tools such as a Live Access Server (LAS) and DChart, and direct binary access is available through OPeNDAP services. On-line tutorials on how to use these services are now available. Users can keep up-to-date with new data and product announcements via the APDRC facebook page. The main focus of the APDRC has been climate scientists, and the services are therefore streamlined to such users, both in the number and types of data served, but also in the way data are served. In addition, due to the integration of the APDRC within the IPRC, several value-added data products (see figure for an example using Argo floats) have been developed via a variety of research activities. The APDRC, therefore, has three main foci: 1. acquisition of climate-related data, 2. maintenance of integrated data servers, and 3. development and distribution of data products The APDRC can be found at http://apdrc.soest.hawaii.edu. The presentation will provide an overview along with specific examples of the data, data products and data services available at the APDRC.; APDRC product example: gridded field from Argo profiling floats
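
As a rough illustration of the OPeNDAP direct binary access mentioned above, the sketch below builds a DAP2-style constraint-expression URL of the kind a client would request from such a server. The server base and dataset name are hypothetical placeholders, not actual APDRC endpoints.

```python
# Sketch of building an OPeNDAP (DAP2) constraint-expression URL.
# The base URL and dataset path below are hypothetical examples.

def opendap_url(base, dataset, var, slices):
    """Build an ASCII-response request URL such as var[0:11][5][3].
    Each slice is (lo, hi); hi=None requests a single index."""
    index_part = "".join(f"[{lo}:{hi}]" if hi is not None else f"[{lo}]"
                         for lo, hi in slices)
    return f"{base}/{dataset}.ascii?{var}{index_part}"

url = opendap_url("http://apdrc.soest.hawaii.edu/dods",  # hypothetical path
                  "argo_gridded", "temp", [(0, 11), (5, None), (3, None)])
print(url)
# http://apdrc.soest.hawaii.edu/dods/argo_gridded.ascii?temp[0:11][5][3]
```

In practice a climate researcher would hand such a URL (without the `.ascii` suffix) to an OPeNDAP-aware client library, which negotiates the binary transfer transparently.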

  4. Assessment team report on flight-critical systems research at NASA Langley Research Center

    Science.gov (United States)

    Siewiorek, Daniel P. (Compiler); Dunham, Janet R. (Compiler)

    1989-01-01

    The quality, coverage, and distribution of effort of the flight-critical systems research program at NASA Langley Research Center was assessed. Within the scope of the Assessment Team's review, the research program was found to be very sound. All tasks under the current research program were at least partially addressing the industry needs. General recommendations made were to expand the program resources to provide additional coverage of high priority industry needs, including operations and maintenance, and to focus the program on an actual hardware and software system that is under development.

  5. Large space antenna communications systems: Integrated Langley Research Center/Jet Propulsion Laboratory development activities. 2: Langley Research Center activities

    Science.gov (United States)

    Cambell, T. G.; Bailey, M. C.; Cockrell, C. R.; Beck, F. B.

    1983-01-01

    The electromagnetic analysis activities at the Langley Research Center are resulting in efficient and accurate analytical methods for predicting both far- and near-field radiation characteristics of large offset multiple-beam multiple-aperture mesh reflector antennas. The utilization of aperture integration augmented with Geometrical Theory of Diffraction in analyzing the large reflector antenna system is emphasized.

  6. Cooperative research with CHECIR (CHErnobyl Center for International Research)

    International Nuclear Information System (INIS)

    Nagaoka, T.; Saito, K.; Sakamoto, R.; Tsutsumi, M.; Moriuchi, S.

    1994-01-01

The Chernobyl Center for International Research (CHECIR) has been established under an agreement among the IAEA, Russia, Byelorussia and Ukraine in order to implement various studies on the reactor facilities and on the environment near and around the reactor. JAERI started discussions with a view to joining the research project on assessment and analysis of environmental consequences in the contaminated area. In June 1992, JAERI and CHECIR concluded an agreement on the Implementation of Research at the CHECIR. Under the agreement, JAERI has started 'Study on Assessment and Analysis of Environmental Radiological Consequences and Verification of an Assessment System'. This project is scheduled to last until 1996. The study consists of the following subjects. Subject-1: Study on Measurements and Evaluation of Environmental External Exposure after Nuclear Accident. Subject-2: Study on the Validation of Assessment Models in an Environmental Consequence Assessment Methodology for Nuclear Accidents. Subject-3: Study on Migration of Radionuclides Released into Rivers adjacent to the Chernobyl Nuclear Power Plant (planned to start from FY1994). In this workshop, research activity will be introduced with actually measured data. (J.P.N.)

  7. Ab initio molecular dynamics simulations for the role of hydrogen in catalytic reactions of furfural on Pd(111)

    Science.gov (United States)

    Xue, Wenhua; Dang, Hongli; Liu, Yingdi; Jentoft, Friederike; Resasco, Daniel; Wang, Sanwu

    2014-03-01

    In the study of catalytic reactions of biomass, furfural conversion over metal catalysts with the presence of hydrogen has attracted wide attention. We report ab initio molecular dynamics simulations for furfural and hydrogen on the Pd(111) surface at finite temperatures. The simulations demonstrate that the presence of hydrogen is important in promoting furfural conversion. In particular, hydrogen molecules dissociate rapidly on the Pd(111) surface. As a result of such dissociation, atomic hydrogen participates in the reactions with furfural. The simulations also provide detailed information about the possible reactions of hydrogen with furfural. Supported by DOE (DE-SC0004600). This research used the supercomputer resources of the XSEDE, the NERSC Center, and the Tandy Supercomputing Center.

  8. The mobilize center: an NIH big data to knowledge center to advance human movement research and improve mobility.

    Science.gov (United States)

    Ku, Joy P; Hicks, Jennifer L; Hastie, Trevor; Leskovec, Jure; Ré, Christopher; Delp, Scott L

    2015-11-01

    Regular physical activity helps prevent heart disease, stroke, diabetes, and other chronic diseases, yet a broad range of conditions impair mobility at great personal and societal cost. Vast amounts of data characterizing human movement are available from research labs, clinics, and millions of smartphones and wearable sensors, but integration and analysis of this large quantity of mobility data are extremely challenging. The authors have established the Mobilize Center (http://mobilize.stanford.edu) to harness these data to improve human mobility and help lay the foundation for using data science methods in biomedicine. The Center is organized around 4 data science research cores: biomechanical modeling, statistical learning, behavioral and social modeling, and integrative modeling. Important biomedical applications, such as osteoarthritis and weight management, will focus the development of new data science methods. By developing these new approaches, sharing data and validated software tools, and training thousands of researchers, the Mobilize Center will transform human movement research. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association.

  9. San Joaquin Valley Aerosol Health Effects Research Center (SAHERC)

    Data.gov (United States)

    Federal Laboratory Consortium — At the San Joaquin Valley Aerosol Health Effects Center, located at the University of California-Davis, researchers will investigate the properties of particles that...

  10. Meharry-Johns Hopkins Center for Prostate Cancer Research

    Science.gov (United States)

    2015-11-01

formerly at the Institute for Health, Social, and Community Research (IHSCR) Center for Survey Research (CSR) at Shaw University in Raleigh, NC...survey will be conducted at CSR, which is now located at the Johns Hopkins Bloomberg School of Public Health (JHBSPH) located in Raleigh, NC. The Sons...the strategy to contact sons for whom she had no address or phone number. It was hoped that the father would notify the son to contact the study

  11. Double Star Research: A Student-Centered Community of Practice

    Science.gov (United States)

    Johnson, Jolyon

    2016-06-01

Project and team-based pedagogies are increasingly augmenting lecture-style science classrooms. Occasionally, university professors will invite students to tangentially participate in their research. Since 2006, Dr. Russ Genet has led an astronomy research seminar for community college and high school students that allows participants to work closely with a melange of professional and advanced amateur researchers. The vast majority of topics have centered on measuring the position angles and separations of double stars, which can be readily published in the Journal of Double Star Observations. In the intervening years, a collaborative community of practice (Wenger, 1998) formed, with the students as lead researchers on their projects under the guidance of experienced astronomers and educators. The students who join the research seminar are often well prepared for further STEM education in college and career. Today, the research seminar involves multiple schools in multiple states with a volunteer educator acting as an assistant instructor at each location. These assistant instructors interface with remote observatories, ensure progress is made, and recruit students. The key deliverables from each student team include a published research paper and a public presentation online or in-person. Citing a published paper on scholarship and college applications gives students' educational careers a boost. Recently the Journal of Double Star Observations published its first special issue of exclusively student-centered research.

  12. SNU-KAERI Degree and Research Center for Radiation Convergence Sciences

    International Nuclear Information System (INIS)

    Jo, Sungkee; Kim, S. U.; Roh, C. H

    2011-12-01

    In this study, we established and carried out the demonstrative operation of the 'Degree and Research Center for Radiation Convergence Sciences' to raise Korea's technological competitiveness. The project achieved the following results: 1. Operation of the Degree and Research Center for Radiation Convergence Sciences and establishment of an expert researcher training system: presentation of an efficient model for an expert researcher training program through university-institute collaboration courses combining the graduate course and the DRC system; a Radiation Convergence Sciences major is scheduled to be established in 2013 at the SNU Graduate School of Convergence Science and Technology; a large project for research, education, and training in radiation convergence science is being planned. 2. Establishment and conduct of joint research through a radiation convergence research consortium: joint research was conducted in close connection with the research projects of the investigators participating in this DRC project (44 articles published in journals, 6 patents applied for, 88 papers presented at conferences); the resources of the two organizations (SNU and KAERI), including research infrastructure (high-tech equipment, etc.), manpower (professors/researchers), and original technology and know-how, were utilized to conduct the joint research and to establish a collaboration system between the two organizations

  13. Real World Uses For Nagios APIs

    Science.gov (United States)

    Singh, Janice

    2014-01-01

    This presentation describes the Nagios 4 APIs, shows how the NASA Advanced Supercomputing facility at Ames Research Center is employing them to upgrade its graphical status display (the HUD), and explains why they are worth trying yourself.

  14. Fast and Accurate Simulation of the Cray XMT Multithreaded Supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Villa, Oreste; Tumeo, Antonino; Secchi, Simone; Manzano Franco, Joseph B.

    2012-12-31

    Irregular applications, such as data mining and analysis or graph-based computations, show unpredictable memory/network access patterns and control structures. Highly multithreaded architectures with large processor counts, like the Cray MTA-1, MTA-2 and XMT, appear to address their requirements better than commodity clusters. However, research on highly multithreaded systems is currently limited by the lack of adequate architectural simulation infrastructures, due to issues such as the size of the machines, memory footprint, simulation speed, accuracy and customization. At the same time, Shared-memory MultiProcessors (SMPs) with multi-core processors have become an attractive platform to simulate large scale machines. In this paper, we introduce a cycle-level simulator of the highly multithreaded Cray XMT supercomputer. The simulator runs unmodified XMT applications. We discuss how we tackled the challenges posed by its development, detailing the techniques introduced to make the simulation as fast as possible while maintaining high accuracy. By mapping XMT processors (ThreadStorm with 128 hardware threads) to host computing cores, the simulation speed remains constant as the number of simulated processors increases, up to the number of available host cores. The simulator supports zero-overhead switching among different accuracy levels at run-time and includes a network model that takes contention into account. On a modern 48-core SMP host, our infrastructure simulates a large set of irregular applications 500 to 2000 times slower than real time when compared to a 128-processor XMT, while remaining within 10% accuracy. Emulation is only 25 to 200 times slower than real time.

  15. Patient Care Coordinator | Center for Cancer Research

    Science.gov (United States)

    PROGRAM DESCRIPTION Within the Leidos Biomedical Research Inc.’s Clinical Research Directorate, the Clinical Monitoring Research Program (CMRP) provides high-quality, comprehensive, and strategic operational support to the high-profile domestic and international clinical research initiatives of the National Cancer Institute (NCI), National Institute of Allergy and Infectious Diseases (NIAID), Clinical Center (CC), National Heart, Lung, and Blood Institute (NHLBI), National Institute of Arthritis and Musculoskeletal and Skin Diseases (NIAMS), National Center for Advancing Translational Sciences (NCATS), National Institute of Neurological Disorders and Stroke (NINDS), and the National Institute of Mental Health (NIMH). Since its inception in 2001, CMRP’s ability to provide rapid responses and high-quality solutions, and to recruit and retain experts with a variety of backgrounds to meet the growing research portfolios of NCI, NIAID, CC, NHLBI, NIAMS, NCATS, NINDS, and NIMH, has led to the considerable expansion of the program and its repertoire of support services. CMRP’s support services are strategically aligned with the program’s mission to provide comprehensive, dedicated support to assist National Institutes of Health researchers in providing the highest quality of clinical research in compliance with applicable regulations and guidelines, maintaining data integrity, and protecting human subjects. For the scientific advancement of clinical research, CMRP services include comprehensive clinical trials, regulatory, pharmacovigilance, protocol navigation and development, and programmatic and project management support for facilitating the conduct of 400+ Phase I, II, and III domestic and international trials on a yearly basis. These trials investigate the prevention, diagnosis, and treatment of, and therapies for, cancer, influenza, HIV, and other infectious diseases and viruses such as hepatitis C, tuberculosis, malaria, and Ebola virus; heart, lung, and

  16. SERS internship fall 1995 abstracts and research papers

    Energy Technology Data Exchange (ETDEWEB)

    Davis, Beverly

    1996-05-01

    This report is a compilation of twenty abstracts and their corresponding full papers of research projects done under the US Department of Energy Science and Engineering Research Semester (SERS) program. Papers cover a broad range of topics, for example, environmental transport, supercomputers, databases, and biology. Selected papers were indexed separately for inclusion in the Energy Science and Technology Database.

  17. Center for modeling of turbulence and transition: Research briefs, 1995

    Science.gov (United States)

    1995-10-01

    This research brief contains the progress reports of the research staff of the Center for Modeling of Turbulence and Transition (CMOTT) from July 1993 to July 1995. It also constitutes a progress report to the Institute of Computational Mechanics in Propulsion located at the Ohio Aerospace Institute and the Lewis Research Center. CMOTT has been in existence for about four years. In the first three years, its main activities were to develop and validate turbulence and combustion models for propulsion systems, in an effort to remove the deficiencies of existing models. Three workshops on computational turbulence modeling were held at LeRC (1991, 1993, 1994). At present, CMOTT is integrating the CMOTT developed/improved models into CFD tools which can be used by the propulsion systems community. This activity has resulted in an increased collaboration with the Lewis CFD researchers.

  18. Alpha waste management at the Valduc Research Center

    International Nuclear Information System (INIS)

    Jouan, A.; Cartier, R.; Durec, J.P.; Flament, T.

    1995-01-01

    Operation of the reprocessing facilities at the Valduc Research Center of the French Atomic Energy Commission (CEA) generates waste with a variety of characteristics. The waste compatible with surface storage requirements is transferred to the French Radioactive Waste Management Agency (ANDRA); the rest is reprocessed under a program which enables storage in compliance with the requirements of permits issued by the safety authorities. The waste reprocessing program provides for the construction of an incinerator capable of handling nearly all of the combustible waste generated by the Center and a vitrification facility for treating liquid waste generated by the plutonium handling plant. (authors)

  19. New York can be our nation's center for Alzheimer's research.

    Science.gov (United States)

    Vann, Allan S

    2014-09-01

    More than 5 million people in this country have Alzheimer's disease, and more than 300,000 of those with Alzheimer's live in New York. By 2025, it is estimated that there will be 350,000 residents living with Alzheimer's in New York. Congressman Steve Israel and New York Assemblyman Charles Lavine issued a joint proposal in June 2013 suggesting that New York become this country's center for Alzheimer's research. Obviously, they would both like to see increased federal funding, but they also know that we cannot count on that happening. So Israel and Lavine have proposed a $3 billion state bonding initiative to secure sufficient funding to tackle this disease. It would be similar to the bonding initiatives that have made California and Texas this nation's centers for stem cell and cancer research. The bond would provide a dedicated funding stream to support research to find effective means to treat, cure, and eventually prevent Alzheimer's, and fund programs to help people currently dealing with Alzheimer's and their caregivers. New York already has some of the major "ingredients" to make an Alzheimer's bond initiative a success, including 3 of our nation's 29 Alzheimer's Disease Research Centers and some of the finest research facilities in the nation for genetic and neuroscience research. One can only imagine the synergy of having these world class institutions working on cooperative grants and projects with sufficient funding to attract even more world class researchers and scientists to New York to find ways to prevent, treat, and cure Alzheimer's. © The Author(s) 2014.

  20. Aircraft Engine Noise Research and Testing at the NASA Glenn Research Center

    Science.gov (United States)

    Elliott, Dave

    2015-01-01

    The presentation will begin with a brief introduction to the NASA Glenn Research Center as well as an overview of how aircraft engine noise research fits within the organization. Some of the NASA programs and projects with noise content will be covered along with the associated goals of aircraft noise reduction. Topics covered within the noise research being presented will include noise prediction versus experimental results, along with engine fan, jet, and core noise. Details of the acoustic research conducted at NASA Glenn will include the test facilities available, recent test hardware, and data acquisition and analysis methods. Lastly some of the actual noise reduction methods investigated along with their results will be shown.

  1. Karlsruhe Nuclear Research Center, Central Safety Department. Annual report 1993

    International Nuclear Information System (INIS)

    Koelzer, W.

    1994-04-01

    The Central Safety Department is responsible for handling all tasks of radiation protection, safety and security of the institutes and departments of the Karlsruhe Nuclear Research Center, for waste water activity measurements and environmental monitoring of the whole area of the Center, and for research and development work mainly focusing on nuclear safety and radiation protection measures. The research and development work concentrates on the following aspects: behavior of trace elements in the environment and decontamination of soil, behavior of tritium in the air/soil-plant system, and improvements in radiation protection measurements and personnel dosimetry. This report details the Department's various duties, presents the results of the 1993 routine tasks, and reports on the findings of the investigations and developments of the Department's working groups. (orig.) [de

  2. Multi-Vehicle Cooperative Control Research at the NASA Armstrong Flight Research Center, 2000-2014

    Science.gov (United States)

    Hanson, Curt

    2014-01-01

    A brief introductory overview of multi-vehicle cooperative control research conducted at the NASA Armstrong Flight Research Center from 2000 - 2014. Both flight research projects and paper studies are included. Since 2000, AFRC has been almost continuously pursuing research in the areas of formation flight for drag reduction and automated cooperative trajectories. An overview of results is given, including flight experiments done on the FA-18 and with the C-17. Other multi-vehicle cooperative research is discussed, including small UAV swarming projects and automated aerial refueling.

  3. High Performance Computing in Science and Engineering '08 : Transactions of the High Performance Computing Center

    CERN Document Server

    Kröner, Dietmar; Resch, Michael

    2009-01-01

    The discussions and plans on all scientific, advisory, and political levels to realize an even larger “European Supercomputer” in Germany, where the hardware costs alone will be hundreds of millions of Euro – much more than in the past – are getting closer to realization. As part of the strategy, the three national supercomputing centres HLRS (Stuttgart), NIC/JSC (Jülich) and LRZ (Munich) have formed the Gauss Centre for Supercomputing (GCS) as a new virtual organization enabled by an agreement between the Federal Ministry of Education and Research (BMBF) and the state ministries for research of Baden-Württemberg, Bayern, and Nordrhein-Westfalen. Already today, the GCS provides the most powerful high-performance computing infrastructure in Europe. Through GCS, HLRS participates in the European project PRACE (Partnership for Advanced Computing in Europe) and extends its reach to all European member countries. These activities align well with the activities of HLRS in the European HPC infrastructur...

  4. The Creation and Role of the USDA Biomass Research Centers

    Science.gov (United States)

    William F. Anderson; Jeffery Steiner; Randy Raper; Ken Vogel; Terry Coffelt; Brenton Sharratt; Bob Rummer; Robert L. Deal; Alan Rudie

    2011-01-01

    The Five USDA Biomass Research Centers were created to facilitate coordinated research to enhance the establishment of a sustainable feedstock production for bio-based renewable energy in the United States. Scientists and staff of the Agricultural Research Service (ARS) and Forest Service (FS) within USDA collaborate with other federal agencies, universities and...

  5. High Performance Computing in Science and Engineering '99 : Transactions of the High Performance Computing Center

    CERN Document Server

    Jäger, Willi

    2000-01-01

    The book contains reports about the most significant projects from science and engineering of the Federal High Performance Computing Center Stuttgart (HLRS). They were carefully selected in a peer-review process and are showcases of an innovative combination of state-of-the-art modeling, novel algorithms and the use of leading-edge parallel computer technology. The projects of HLRS are using supercomputer systems operated jointly by university and industry and therefore a special emphasis has been put on the industrial relevance of results and methods.

  6. Nuclear safety research collaborations between the US and Russian Federation international nuclear safety centers

    International Nuclear Information System (INIS)

    Hill, D.J; Braun, J.C; Klickman, A.E.; Bugaenko, S.E; Kabanov, L.P; Kraev, A.G.

    2000-01-01

    The Russian Federation Ministry for Atomic Energy (MINATOM) and the U.S. Department of Energy (USDOE) have formed International Nuclear Safety Centers to collaborate on nuclear safety research. USDOE established the U.S. Center at Argonne National Laboratory in October 1995. MINATOM established the Russian Center at the Research and Development Institute of Power Engineering in Moscow in July 1996. In April 1998 the Russian center became an independent, autonomous organization under MINATOM. The goals of the centers are to: cooperate in the development of technologies associated with nuclear safety in nuclear power engineering; serve as international centers for the collection of information important for safety and technical improvements in nuclear power engineering; and maintain a base of fundamental knowledge needed to design nuclear reactors. The strategic approach being used to accomplish these goals is for the two centers to work together, using the resources and the talents of the scientists associated with the US Center and the Russian Center, to do collaborative research to improve the safety of Russian-designed nuclear reactors.

  7. The Austrian Research Centers activities in energy risks

    International Nuclear Information System (INIS)

    Sdouz, Gert

    1998-01-01

    Among the institutions involved in energy analyses in Austria, the risk context is treated by three different entities: the Energy Consumption Agency, internationally known as EVA; the Federal Environmental Protection Agency, or Umweltbundesamt, assessing mainly the environmental risks involved; and the Austrian Research Centers, working on safety and risk evaluation. The Austrian Research Center Seibersdorf draws on its proficiency in reactor safety and fusion research, two fields of experience it has been involved in since its foundation, some 40 years ago. Nuclear energy is not well accepted by the Austrian population; therefore, in our country only energy systems with an advanced safety level might be accepted in the far future. This means that the development of methods to compare risks is an important task. The characteristics of energy systems featuring advanced safety levels are a very low hazard potential and a focus on deterministic safety rather than probabilistic safety, meaning reliance on inherently safe physics concepts, confirmed by probabilistic safety evaluation results. This can be achieved by adequate design of fusion reactors, advanced fission reactors, and all the different renewable sources of energy.

  8. Program budget 1992 of the Karlsruhe Nuclear Research Center. As of November 19, 1991

    International Nuclear Information System (INIS)

    1992-01-01

    In the future, the research program of the Nuclear Research Center in Karlsruhe will concentrate on three areas, which over the medium term will have equal status: environmental research, energy research, and microsystem technology and fundamental research. The central infrastructure, the financial planning, and the assignment of research and development projects of the Nuclear Research Center are presented in tables. (orig./HP) [de

  9. An Optimized Parallel FDTD Topology for Challenging Electromagnetic Simulations on Supercomputers

    Directory of Open Access Journals (Sweden)

    Shugang Jiang

    2015-01-01

    It may not be a challenge to run a Finite-Difference Time-Domain (FDTD) code for electromagnetic simulations on a supercomputer with more than ten thousand CPU cores; making the FDTD code work with the highest efficiency, however, is a challenge. In this paper, the performance of parallel FDTD is optimized through MPI (Message Passing Interface) virtual topology, based on which a communication model is established. The general rules of optimal topology are presented according to the model. The performance of the method is tested and analyzed on three high-performance computing platforms with different architectures in China. Simulations including an airplane with a 700-wavelength wingspan and a complex microstrip antenna array with nearly 2000 elements are performed very efficiently using a maximum of 10240 CPU cores.
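    The virtual-topology idea above — arranging MPI ranks in a Cartesian grid that matches the FDTD domain decomposition, so boundary-field exchange happens between grid neighbors — can be sketched without MPI itself. The helpers below mimic, in plain Python, what `MPI_Dims_create` and `MPI_Cart_create`/`MPI_Cart_shift` compute; the function names and the non-periodic 3-D grid are illustrative assumptions, not the paper's actual code.

    ```python
    def balanced_dims(nprocs, ndims=3):
        """Split nprocs into ndims near-equal factors (the job of MPI_Dims_create)."""
        factors, n, f = [], nprocs, 2
        while f * f <= n:
            while n % f == 0:
                factors.append(f)
                n //= f
            f += 1
        if n > 1:
            factors.append(n)
        dims = [1] * ndims
        for p in sorted(factors, reverse=True):
            dims[dims.index(min(dims))] *= p  # grow the currently smallest axis
        return sorted(dims, reverse=True)

    def cart_coords(rank, dims):
        """Row-major rank -> grid coordinates (MPI_Cart_coords convention)."""
        coords = []
        for d in reversed(dims):
            coords.append(rank % d)
            rank //= d
        return coords[::-1]

    def cart_rank(coords, dims):
        """Grid coordinates -> row-major rank (MPI_Cart_rank convention)."""
        r = 0
        for c, d in zip(coords, dims):
            r = r * d + c
        return r

    def face_neighbors(rank, dims):
        """Ranks exchanging FDTD boundary fields with `rank` in a non-periodic grid."""
        coords = cart_coords(rank, dims)
        nbrs = {}
        for axis in range(len(dims)):
            for step in (-1, 1):
                c = list(coords)
                c[axis] += step
                nbrs[(axis, step)] = cart_rank(c, dims) if 0 <= c[axis] < dims[axis] else None
        return nbrs
    ```

    For instance, `balanced_dims(10240)` yields the grid `[32, 20, 16]`, and each rank then exchanges boundary data only with its up-to-six face neighbors, which is what keeps per-step communication volume bounded as the core count grows.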

  10. The NIH-NIAID Filariasis Research Reagent Resource Center.

    Directory of Open Access Journals (Sweden)

    Michelle L Michalski

    2011-11-01

    Filarial worms cause a variety of tropical diseases in humans; however, they are difficult to study because they have complex life cycles that require arthropod intermediate hosts and mammalian definitive hosts. Research efforts in industrialized countries are further complicated by the fact that some filarial nematodes that cause disease in humans are restricted in host specificity to humans alone. This potentially makes the commitment to research difficult, expensive, and restrictive. Over 40 years ago, the United States National Institutes of Health-National Institute of Allergy and Infectious Diseases (NIH-NIAID) established a resource from which investigators could obtain various filarial parasite species and life cycle stages without having to expend the effort and funds necessary to maintain the entire life cycles in their own laboratories. This centralized resource (the Filariasis Research Reagent Resource Center, or FR3) translated into cost savings to both NIH-NIAID and to principal investigators by freeing up personnel costs on grants and allowing investigators to divert more funds to targeted research goals. Many investigators, especially those new to the field of tropical medicine, are unaware of the scope of materials and support provided by the FR3. This review is intended to provide a short history of the contract, brief descriptions of the filarial species and molecular resources provided, and an estimate of the impact the resource has had on the research community, and describes some new additions and potential benefits the resource center might have for the ever-changing research interests of investigators.

  11. Applied analytical combustion/emissions research at the NASA Lewis Research Center

    Science.gov (United States)

    Deur, J. M.; Kundu, K. P.; Nguyen, H. L.

    1992-01-01

    Emissions of pollutants from future commercial transports are a significant concern. As a result, the Lewis Research Center (LeRC) is investigating various low emission combustor technologies. As part of this effort, a combustor analysis code development program was pursued to guide the combustor design process, to identify concepts having the greatest promise, and to optimize them at the lowest cost in the minimum time.

  12. Reducing Losses from Wind-Related Natural Perils: Research at the IBHS Research Center

    OpenAIRE

    Standohar-Alfano, Christine D.; Estes, Heather; Johnston, Tim; Morrison, Murray J.; Brown-Giammanco, Tanya M.

    2017-01-01

    The capabilities of the Insurance Institute for Business & Home Safety (IBHS) Research Center full-scale test chamber are described in detail. This research facility allows complete full-scale structures to be tested. Testing at full-scale allows vulnerabilities of structures to be evaluated with fewer assumptions than was previously possible. Testing buildings under realistic elevated wind speeds has the potential to isolate important factors that influence the performance of components, pot...

  13. 34 CFR 350.31 - What collaboration must a Rehabilitation Engineering Research Center engage in?

    Science.gov (United States)

    2010-07-01

    ... 34 Education 2 2010-07-01 2010-07-01 false What collaboration must a Rehabilitation Engineering... DISABILITY AND REHABILITATION RESEARCH PROJECTS AND CENTERS PROGRAM What Rehabilitation Engineering Research Centers Does the Secretary Assist? § 350.31 What collaboration must a Rehabilitation Engineering Research...

  14. The Begun Center for Violence Prevention Research and Education at Case Western Reserve University

    Science.gov (United States)

    Flannery, Daniel J.; Singer, Mark I.

    2015-01-01

    Established in the year 2000, the Begun Center for Violence Prevention Research and Education is a multidisciplinary center located at a school of social work that engages in collaborative, community-based research and evaluation that spans multiple systems and disciplines. The Center currently occupies 4,200 sq. ft. with multiple offices and…

  15. Training and technical assistance to enhance capacity building between prevention research centers and their partners.

    Science.gov (United States)

    Spadaro, Antonia J; Grunbaum, Jo Anne; Dawkins, Nicola U; Wright, Demia S; Rubel, Stephanie K; Green, Diane C; Simoes, Eduardo J

    2011-05-01

    The Centers for Disease Control and Prevention has administered the Prevention Research Centers Program since 1986. We quantified the number and reach of training programs across all centers, determined whether the centers' outcomes varied by characteristics of the academic institution, and explored potential benefits of training and technical assistance for academic researchers and community partners. We characterized how these activities enhanced capacity building within Prevention Research Centers and the community. The program office collected quantitative information on training across all 33 centers via its Internet-based system from April through December 2007. Qualitative data were collected from April through May 2007. We selected 9 centers each for 2 separate, semistructured, telephone interviews, 1 on training and 1 on technical assistance. Across 24 centers, 4,777 people were trained in 99 training programs in fiscal year 2007 (October 1, 2006-September 30, 2007). Nearly 30% of people trained were community members or agency representatives. Training and technical assistance activities provided opportunities to enhance community partners' capacity in areas such as conducting needs assessments and writing grants and to improve the centers' capacity for cultural competency. Both qualitative and quantitative data demonstrated that training and technical assistance activities can foster capacity building and provide a reciprocal venue to support researchers' and the community's research interests. Future evaluation could assess community and public health partners' perception of centers' training programs and technical assistance.

  16. 34 CFR 350.33 - What cooperation requirements must a Rehabilitation Engineering Research Center meet?

    Science.gov (United States)

    2010-07-01

    ... Rehabilitation Engineering Research Center meet? A Rehabilitation Engineering Research Center— (a) Shall... 34 Education 2 2010-07-01 2010-07-01 false What cooperation requirements must a Rehabilitation Engineering Research Center meet? 350.33 Section 350.33 Education Regulations of the Offices of the Department...

  17. Climate@Home: Crowdsourcing Climate Change Research

    Science.gov (United States)

    Xu, C.; Yang, C.; Li, J.; Sun, M.; Bambacus, M.

    2011-12-01

    Climate change deeply impacts human wellbeing. Significant amounts of resources have been invested in building supercomputers that are capable of running advanced climate models, which help scientists understand climate change mechanisms and predict its trend. Although climate change influences all human beings, the general public is largely excluded from the research. On the other hand, scientists are eagerly seeking communication mediums for effectively enlightening the public on climate change and its consequences. The Climate@Home project is devoted to connecting the two ends with an innovative solution: crowdsourcing climate computing to the general public by harvesting volunteered computing resources from the participants. A distributed web-based computing platform will be built to support climate computing, and the general public can 'plug in' their personal computers to participate in the research. People contribute the spare computing power of their computers to run a computer model, which is used by scientists to predict climate change. Traditionally, only supercomputers could handle such a large processing load. By orchestrating massive numbers of personal computers to perform atomized data processing tasks, investments in new supercomputers, the energy consumed by supercomputers, and the carbon released by supercomputers are reduced. Meanwhile, the platform forms a social network of climate researchers and the general public, which may be leveraged to raise climate awareness among the participants. A portal is to be built as the gateway to the Climate@Home project. Three types of roles and the corresponding functionalities are designed and supported. The end users include citizen participants, climate scientists, and project managers. Citizen participants connect their computing resources to the platform by downloading and installing a computing engine on their personal computers. Computer climate models are defined at the server side.
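    The server-side workflow the abstract describes — atomizing a model run into small tasks and handing them to volunteer machines — can be sketched in a few lines. Everything below (the function names, the content-hash task IDs, the replicate-each-unit-twice policy borrowed from BOINC-style volunteer computing) is an illustrative assumption, not the project's actual protocol.

    ```python
    import hashlib
    import json
    import queue

    def make_work_units(run_id, parameter_grid, chunk):
        """Atomize one model run into small, self-describing tasks."""
        units = []
        for i in range(0, len(parameter_grid), chunk):
            payload = {"run": run_id, "params": parameter_grid[i:i + chunk]}
            # a content-derived id lets the server deduplicate tasks
            # and match results returned later by volunteers
            uid = hashlib.sha256(
                json.dumps(payload, sort_keys=True).encode()
            ).hexdigest()[:12]
            units.append({"id": uid, **payload})
        return units

    def dispatch(units, replication=2):
        """Queue each unit `replication` times so results from
        independent volunteers can be cross-checked for validity."""
        q = queue.Queue()
        for u in units:
            for _ in range(replication):
                q.put(u)
        return q
    ```

    A sweep of ten parameter sets chunked four at a time yields three work units; with the default replication, six downloads are served, and the server accepts a result only when two volunteers agree.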

  18. Development of a high performance eigensolver on the peta-scale next generation supercomputer system

    International Nuclear Information System (INIS)

    Imamura, Toshiyuki; Yamada, Susumu; Machida, Masahiko

    2010-01-01

    For present supercomputer systems, multicore and multisocket processors are necessary to build a system, and the choice of interconnection is essential. In addition, for effective development of new codes, high-performance, scalable, and reliable numerical software is one of the key items. ScaLAPACK and PETSc are well-known software packages for distributed-memory parallel computer systems. It is needless to say that software highly tuned for new architectures, such as many-core processors, must be chosen for real computation. In this study, we present a high-performance, highly scalable eigenvalue solver for the next-generation supercomputer system, the so-called 'K computer'. We have developed two versions, the standard version (eigen_s) and the enhanced performance version (eigen_sx), on the T2K cluster system housed at the University of Tokyo. Eigen_s employs the conventional algorithms: Householder tridiagonalization, the divide and conquer (DC) algorithm, and Householder back-transformation. They are carefully implemented with a blocking technique and flexible two-dimensional data distribution to reduce the overhead of memory traffic and data transfer, respectively. Eigen_s performs excellently on the T2K system with 4096 cores (theoretical peak 37.6 TFLOPS), achieving 3.0 TFLOPS on a matrix of dimension two hundred thousand. The enhanced version, eigen_sx, uses more advanced algorithms: the narrow-band reduction algorithm, DC for band matrices, and block Householder back-transformation with the WY representation. Even though this version is still at a test stage, it reaches 4.7 TFLOPS on the same matrix, compared with eigen_s. (author)
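    The first of the conventional algorithms mentioned above, Householder tridiagonalization, can be sketched serially in NumPy. This unblocked version shows only the arithmetic — a sequence of reflections P = I - 2vvᵀ applied from both sides, preserving the eigenvalues while zeroing everything below the subdiagonal; the blocked, two-dimensionally distributed implementation the abstract describes is far more involved.

    ```python
    import numpy as np

    def householder_tridiagonalize(A):
        """Reduce a symmetric matrix to tridiagonal form via Householder reflections."""
        A = np.array(A, dtype=float)
        n = A.shape[0]
        for k in range(n - 2):
            x = A[k + 1:, k]
            # choose the sign that avoids cancellation in v[0]
            alpha = -np.sign(x[0] if x[0] != 0 else 1.0) * np.linalg.norm(x)
            v = x.copy()
            v[0] -= alpha
            norm_v = np.linalg.norm(v)
            if norm_v == 0.0:
                continue  # column already reduced
            v /= norm_v
            # two-sided update with P = I - 2 v v^T on the trailing submatrix
            A[k + 1:, k:] -= 2.0 * np.outer(v, v @ A[k + 1:, k:])
            A[k:, k + 1:] -= 2.0 * np.outer(A[k:, k + 1:] @ v, v)
        return A
    ```

    After the loop the matrix is tridiagonal (up to roundoff) and has the same spectrum as the input, which is what lets a divide-and-conquer stage like the one in eigen_s take over on a much cheaper form.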

  19. Successes of Small Business Innovation Research at NASA Glenn Research Center

    Science.gov (United States)

    Kim, Walter S.; Bitler, Dean W.; Prok, George M.; Metzger, Marie E.; Dreibelbis, Cindy L.; Ganss, Meghan

    2002-01-01

    This booklet of success stories highlights the NASA Glenn Research Center's accomplishments and successes by the Small Business Innovation Research (SBIR) and Small Business Technology Transfer (STTR) Programs. These success stories are the results of selecting projects that support NASA missions and also have high commercialization potential. Each success story describes the innovation accomplished, commercialization of the technology, and further applications and usages. This booklet emphasizes the integration and incorporation of technologies into NASA missions and other government projects. The company name and the NASA contact person are identified to encourage further usage and application of the SBIR developed technologies and also to promote further commercialization of these products.

  20. Overview of Stirling Technology Research at NASA Glenn Research Center

    Science.gov (United States)

    Wilson, Scott D.; Schifer, Nicholas A.; Williams, Zachary D.; Metscher, Jonathan F.

    2016-01-01

    Stirling Radioisotope Power Systems (RPSs) are under development to provide power on future space science missions where robotic spacecraft will orbit, fly by, land, or rove using less than a quarter of the plutonium the currently available RPS uses to produce about the same power. NASA Glenn Research Center's newly formulated Stirling Cycle Technology Development Project (SCTDP) continues development of Stirling-based systems and subsystems, which include a flight-like generator and related housing assembly, controller, and convertors. The project also develops less mature technologies under Stirling Technology Research, with a focus on demonstration in representative environments to increase the technology readiness level (TRL). Matured technologies are evaluated for selection in future generator designs. Stirling Technology Research tasks focus on a wide variety of objectives, including increasing temperature capability to enable new environments, reducing generator mass and/or size, improving reliability and system fault tolerance, and developing alternative designs. The task objectives and status are summarized.

  1. Nuclear research center looks for 4000 pressure-cookers

    International Nuclear Information System (INIS)

    Anon.

    2013-01-01

    The CEA/Valduc research center has recently issued an unusual call for tenders for the purchase of 4000 stainless steel pressure-cookers. Pressure-cookers are in fact economical containers perfectly suited to storing radioactive materials. About 10,000 pressure-cookers have been bought over the last 50 years by CEA/Valduc. (A.C.)

  2. Demonstration-informative center based on research reactor IR-50 in heat regime

    International Nuclear Information System (INIS)

    Krupenina, Ph.

    2000-01-01

    Many problems exist in the nuclear field, but the most significant one is the public's mistrust of nuclear energy. A sharp decline in radiological culture affects public perception, the main paradox being the situation after Chernobyl. The task of creating a Demonstration-Informative Center (Minatom RF) based on the IR-50 research reactor is being carried out by the Research and Development Institute of Power Engineering (ENTEK). The IR-50 is situated on the grounds of the institute. It will be unique in that a functioning reactor is situated in the center of a city. The purposes of the Demonstration-Informative Center are discussed. (authors)

  3. Bibliometric analysis of poison center-related research published in peer-review journals.

    Science.gov (United States)

    Forrester, M B

    2016-07-01

    Poison centers advance knowledge in the field of toxicology through publication in peer-review journals. This investigation describes the pattern of poison center-related publications. Cases were poison center-related research published in peer-review journals during 1995-2014. These were identified by searching the PubMed database, reviewing the tables of contents of selected toxicology journals, and reviewing abstracts of various national and international meetings. The following variables were identified for each publication: year of publication, journal, type of publication (meeting abstract vs. other, i.e. full article or letter to the editor), and the country(ies) of the poison center(s) included in the research. Of the 3147 total publications, 62.1% were meeting abstracts. There were 263 publications in 1995-1999, 536 in 2000-2004, 999 in 2005-2009, and 1349 in 2010-2014. The publications appeared in 234 different journals. The journals that published the most research were Clinical Toxicology (69.7%), Journal of Medical Toxicology (2.2%), and Veterinary and Human Toxicology (2.1%). The research was reported from 62 different countries. The countries with the highest number of publications were the United States (67.9%), United Kingdom (6.5%), Germany (3.9%), France (2.5%), and Italy (2.4%). The number of publications increased greatly over the 20 years. Although the publications appeared in a large number of journals, a high proportion were in a single journal. While the research came from a large number of countries, the preponderance came from the United States.

  4. A Survey of Knowledge Management Research & Development at NASA Ames Research Center

    Science.gov (United States)

    Keller, Richard M.; Clancy, Daniel (Technical Monitor)

    2002-01-01

    This chapter catalogs knowledge management research and development activities at NASA Ames Research Center as of April 2002. A general categorization scheme for knowledge management systems is first introduced. This categorization scheme divides knowledge management capabilities into five broad categories: knowledge capture, knowledge preservation, knowledge augmentation, knowledge dissemination, and knowledge infrastructure. Each of nearly 30 knowledge management systems developed at Ames is then classified according to this system. Finally, a capsule description of each system is presented along with information on deployment status, funding sources, contact information, and both published and internet-based references.

  5. Watson will see you now: a supercomputer to help clinicians make informed treatment decisions.

    Science.gov (United States)

    Doyle-Lindrud, Susan

    2015-02-01

    IBM has collaborated with several cancer care providers to develop and train the IBM supercomputer Watson to help clinicians make informed treatment decisions. When a patient is seen in clinic, the oncologist can input all of the clinical information into the computer system. Watson will then review all of the data and recommend treatment options based on the latest evidence and guidelines. Once the oncologist makes the treatment decision, this information can be sent directly to the insurance company for approval. Watson has the ability to standardize care and accelerate the approval process, a benefit to the healthcare provider and the patient.

  6. Affordable and accurate large-scale hybrid-functional calculations on GPU-accelerated supercomputers

    Science.gov (United States)

    Ratcliff, Laura E.; Degomme, A.; Flores-Livas, José A.; Goedecker, Stefan; Genovese, Luigi

    2018-03-01

    Performing high accuracy hybrid functional calculations for condensed matter systems containing a large number of atoms is at present computationally very demanding or even out of reach if high quality basis sets are used. We present a highly optimized multiple graphics processing unit implementation of the exact exchange operator which allows one to perform fast hybrid functional density-functional theory (DFT) calculations with systematic basis sets without additional approximations for up to a thousand atoms. With this method hybrid DFT calculations of high quality become accessible on state-of-the-art supercomputers within a time-to-solution that is of the same order of magnitude as traditional semilocal-GGA functionals. The method is implemented in a portable open-source library.

  7. Continuing training program in radiation protection in biological research centers

    International Nuclear Information System (INIS)

    Escudero, R.; Hidalgo, R.M.; Usera, F.; Macias, M.T.; Mirpuri, E.; Perez, J.; Sanchez, A.

    2008-01-01

    The use of ionizing radiation in biological research has many specific characteristics. A great variety of radioisotopic techniques involve unsealed radioactive sources, and their use not only carries a risk of irradiation, but also a significant risk of contamination. Moreover, a high proportion of researchers are in training and the labor mobility rate is therefore high. Furthermore, most newly incorporated personnel have little or no previous training in radiological protection, since most academic qualifications do not include training in this discipline. In a biological research center, in addition to personnel whose work is directly associated with the radioactive facility (scientific-technical personnel, operators, supervisors), there are also groups of support personnel (maintenance and instrumentation workers, cleaners, administrative personnel, etc.) who are associated with the radioactive facility indirectly. These workers are affected by the work in the radioactive facility to varying degrees, and they therefore also require information and training in radiological protection tailored to their level of interaction with the installation. The aim of this study was to design a

  8. Center of Excellence for Geospatial Information Science research plan 2013-18

    Science.gov (United States)

    Usery, E. Lynn

    2013-01-01

    The U.S. Geological Survey Center of Excellence for Geospatial Information Science (CEGIS) was created in 2006 and since that time has provided research primarily in support of The National Map. The presentations and publications of the CEGIS researchers document the research accomplishments that include advances in electronic topographic map design, generalization, data integration, map projections, sea level rise modeling, geospatial semantics, ontology, user-centered design, volunteer geographic information, and parallel and grid computing for geospatial data from The National Map. A research plan spanning 2013–18 has been developed extending the accomplishments of the CEGIS researchers and documenting new research areas that are anticipated to support The National Map of the future. In addition to extending the 2006–12 research areas, the CEGIS research plan for 2013–18 includes new research areas in data models, geospatial semantics, high-performance computing, volunteered geographic information, crowdsourcing, social media, data integration, and multiscale representations to support the Three-Dimensional Elevation Program (3DEP) and The National Map of the future of the U.S. Geological Survey.

  9. Implementing multidisciplinary research center infrastructure - A trendsetting example: SUNUM

    OpenAIRE

    Birkan, Burak; Özgüz, Volkan Hüsnü; Ozguz, Volkan Husnu

    2014-01-01

    Sabanci University Nanotechnology Research and Application Center (SUNUM) became operational in January 2012. SUNUM is a trendsetting example of a green and flexible research facility that is a test bed for the cost-effective operation of a Centralized Demand-Controlled Ventilation (CDCV) system, a state-of-the-art cleanroom, and world-class high technology equipment. The total investment in the facility was US$35 million.

  10. Nuclear safety research collaborations between the U.S. and Russian Federation International Nuclear Safety Centers

    International Nuclear Information System (INIS)

    Hill, D. J.; Braun, J. C.; Klickman, A. E.; Bougaenko, S. E.; Kabonov, L. P.; Kraev, A. G.

    2000-01-01

    The Russian Federation Ministry for Atomic Energy (MINATOM) and the US Department of Energy (USDOE) have formed International Nuclear Safety Centers to collaborate on nuclear safety research. USDOE established the US Center (ISINSC) at Argonne National Laboratory (ANL) in October 1995. MINATOM established the Russian Center (RINSC) at the Research and Development Institute of Power Engineering (RDIPE) in Moscow in July 1996. In April 1998 the Russian center became a semi-independent, autonomous organization under MINATOM. The goals of the centers are to: Cooperate in the development of technologies associated with nuclear safety in nuclear power engineering; Be international centers for the collection of information important for safety and technical improvements in nuclear power engineering; and Maintain a base of fundamental knowledge needed to design nuclear reactors. The strategic approach being used to accomplish these goals is for the two centers to work together, using the resources and the talents of the scientists associated with the US Center and the Russian Center to do collaborative research to improve the safety of Russian-designed nuclear reactors. The two centers started conducting joint research and development projects in January 1997. Since that time the following ten joint projects have been initiated: INSC databases (web server and computing center); coupled neutronic and thermal-hydraulic codes; severe accident management for Soviet-designed reactors; transient management and advanced control; survey of relevant nuclear safety research facilities in the Russian Federation; computer code validation for transient analysis of VVER and RBMK reactors; advanced structural analysis; development of a nuclear safety research and development plan for MINATOM; properties and applications of heavy liquid metal coolants; and material properties measurement and assessment. Currently, there is activity in eight of these projects. Details on each of these

  11. NDE research at NASA Langley Research Center

    International Nuclear Information System (INIS)

    Heyman, J.S.

    1989-01-01

    The Nondestructive Measurement Science Branch at NASA Langley is the Agency's lead Center for NDE research. The focus of the laboratory is to improve the science base for NDE, evolve a more quantitative, interpretable technology to ensure safety and reliability, and transfer that technology to the commercial sector. To address the broad needs of the Agency, the program has developed expertise in many areas, including ultrasonics, nonlinear acoustics, nano- and microstructure characterization, thermal NDE, x-ray tomography, optical fiber sensors, magnetic probing, process monitoring sensors, and image/signal processing. The authors' laboratory has recently dedicated its new 20,000 square foot research facility, bringing the lab space to 30,000 square feet. The new facility includes a high bay for the x-ray CAT scanner, a revolutionary new concept in materials measurement. The CAT scanner is called QUEST, for Quantitative Experimental Stress Tomography. This system combines for the first time a microfocus x-ray source and detector with a fatigue load frame. Three-dimensional imaging of the density/geometry of the tested sample is thus possible during tension/compression loading. This system provides the first 3-D view of crack initiation, crack growth, phase transformation, bonded surface failure, and creep, all with a density sensitivity of 0.1% and a resolution of about 25 microns (detectability of about 1 micron)

  12. Armstrong Flight Research Center Research Technology and Engineering 2017

    Science.gov (United States)

    Voracek, David F. (Editor)

    2018-01-01

    I am delighted to present this report of accomplishments at NASA's Armstrong Flight Research Center. Our dedicated innovators possess a wealth of performance, safety, and technical capabilities spanning a wide variety of research areas involving aircraft, electronic sensors, instrumentation, environmental and earth science, celestial observations, and much more. They not only perform tasks necessary to safely and successfully accomplish Armstrong's flight research and test missions but also support NASA missions across the entire Agency. Armstrong's project teams have successfully accomplished many of the nation's most complex flight research projects by crafting creative solutions that advance emerging technologies from concept development and experimental formulation to final testing. We are developing and refining technologies for ultra-efficient aircraft, electric propulsion vehicles, a low boom flight demonstrator, air launch systems, and experimental x-planes, to name a few. Additionally, with our unique location and airborne research laboratories, we are testing and validating new research concepts. Summaries of each project highlighting key results and benefits of the effort are provided in the following pages. Technology areas for the projects include electric propulsion, vehicle efficiency, supersonics, space and hypersonics, autonomous systems, flight and ground experimental test technologies, and much more. Additional technical information is available in the appendix, as well as contact information for the Principal Investigator of each project. I am proud of the work we do here at Armstrong and am pleased to share these details with you. We welcome opportunities for partnership and collaboration, so please contact us to learn more about these cutting-edge innovations and how they might align with your needs.

  13. Refractory Research Group - U.S. DOE, Albany Research Center [Institution Profile

    Energy Technology Data Exchange (ETDEWEB)

    Bennett, James P.

    2004-09-01

    The refractory research group at the Albany Research Center (ARC) has a long history of conducting materials research within the U.S. Bureau of Mines, and more recently, within the U.S. Dept. of Energy. When under the U.S. Bureau of Mines, research was driven by national needs to develop substitute materials and to conserve raw materials. This mission was accomplished by improving refractory material properties and/or by recycling refractories using critical and strategic materials. Currently, as a U.S. Dept of Energy Fossil Energy field site, research is driven primarily by the need to assist DOE in meeting its vision to develop economically and environmentally viable technologies for the production of electricity from fossil fuels. Research at ARC impacts this vision by:
    • Providing information on the performance characteristics of materials being specified for the current generation of power systems;
    • Developing cost-effective, high performance materials for inclusion in the next generation of fossil power systems; and
    • Solving environmental emission and waste problems related to fossil energy systems.
    A brief history of past refractory research within the U.S. Bureau of Mines, the current refractory research at ARC, and the equipment and capabilities used to conduct refractory research at ARC will be discussed.

  14. The Design of HVAC System in the Conventional Facility of Proton Accelerator Research Center

    International Nuclear Information System (INIS)

    Jeon, G. P.; Kim, J. Y.; Choi, B. H.

    2007-01-01

    The HVAC systems for the conventional facility of the Proton Accelerator Research Center consist of three systems: the accelerator building HVAC system, the beam application building HVAC system, and miscellaneous HVAC systems. We designed the accelerator building HVAC system and the beam application research area HVAC system in the conventional facilities of the Proton Accelerator Research Center. The accelerator building HVAC system is divided into the accelerator tunnel area, klystron area, klystron gallery area, and accelerator assembly area. Likewise, the beam application research area HVAC system is divided into those of the beam experimental hall, accelerator control area, beam application research area, and ion beam application building. In this paper, we describe the design requirements and explain the configuration of each system. We also present the operation scenario of the HVAC system in the conventional facility of the Proton Accelerator Research Center

  15. Center for Space Transportation and Applied Research Fifth Annual Technical Symposium Proceedings

    Science.gov (United States)

    1993-01-01

    This Fifth Annual Technical Symposium, sponsored by the UT-Calspan Center for Space Transportation and Applied Research (CSTAR), is organized to provide an overview of the technical accomplishments of the Center's five Research and Technology focus areas during the past year. These areas include chemical propulsion, electric propulsion, commercial space transportation, computational methods, and laser materials processing. Papers in the area of artificial intelligence/expert systems are also presented.

  16. Center for Cancer Research plays key role in first FDA-approved drug for treatment of Merkel cell carcinoma

    Science.gov (United States)

    The Center for Cancer Research’s ability to rapidly deploy integrated basic and clinical research teams at a single site facilitated the swift FDA approval of the immunotherapy drug avelumab for metastatic Merkel cell carcinoma, a rare, aggressive form of skin cancer.

  17. Research and Technology at the John F. Kennedy Space Center 1993

    Science.gov (United States)

    1993-01-01

    As the NASA Center responsible for assembly, checkout, servicing, launch, recovery, and operational support of Space Transportation System elements and payloads, the John F. Kennedy Space Center is placing increasing emphasis on its advanced technology development program. This program encompasses the efforts of the Engineering Development Directorate laboratories, most of the KSC operations contractors, academia, and selected commercial industries - all working in a team effort within their own areas of expertise. This edition of the Kennedy Space Center Research and Technology 1993 Annual Report covers efforts of all these contributors to the KSC advanced technology development program, as well as our technology transfer activities. Major areas of research include material science, advanced software, industrial engineering, nondestructive evaluation, life sciences, atmospheric sciences, environmental technology, robotics, and electronics and instrumentation.

  18. Earth and environmental science in the 1980's: Part 1: Environmental data systems, supercomputer facilities and networks

    Science.gov (United States)

    1986-01-01

    Overview descriptions of on-line environmental data systems, supercomputer facilities, and networks are presented. Each description addresses the concepts of content, capability, and user access relevant to the point of view of potential utilization by the Earth and environmental science community. The information on similar systems or facilities is presented in parallel fashion to encourage and facilitate intercomparison. In addition, summary sheets are given for each description, and a summary table precedes each section.

  19. Climate research in the former Soviet Union. FASAC: Foreign Applied Sciences Assessment Center technical assessment report

    Energy Technology Data Exchange (ETDEWEB)

    Ellingson, R.G.; Baer, F.; Ellsaesser, H.W.; Harshvardhan; Hoffert, M.I.; Randall, D.A.

    1993-09-01

    This report assesses the state of the art in several areas of climate research in the former Soviet Union. This assessment was performed by a group of six internationally recognized US experts in related fields. The areas chosen for review are: large-scale circulation processes in the atmosphere and oceans; atmospheric radiative processes; cloud formation processes; climate effects of natural atmospheric disturbances; and the carbon cycle, paleoclimates, and general circulation model validation. The study found an active research community in each of the above areas. Overall, the quality of climate research in the former Soviet Union is mixed, although the best Soviet work is as good as the best corresponding work in the West. The best Soviet efforts have principally been in theoretical studies or data analysis. However, an apparent lack of access to modern computing facilities has severely hampered the Soviet research. Most of the issues considered in the Soviet literature are known, and have been discussed in the Western literature, although some extraordinary research in paleoclimatology was noted. Little unusual and exceptionally creative material was found in the other areas during the study period (1985 through 1992). Scientists in the former Soviet Union have closely followed the Western literature and technology. Given their strengths in theoretical and analytical methods, as well as their possession of simplified versions of detailed computer models being used in the West, researchers in the former Soviet Union have the potential to make significant contributions if supercomputers, workstations, and software become available. However, given the current state of the economy in the former Soviet Union, it is not clear that the computer gap will be bridged in the foreseeable future.

  20. Environmental monitoring and research at the John F. Kennedy Space Center

    Science.gov (United States)

    Hall, C. R.; Hinkle, C. R.; Knott, W. M.; Summerfield, B. R.

    1992-01-01

    The Biomedical Operations and Research Office at the NASA John F. Kennedy Space Center has been supporting environmental monitoring and research since the mid-1970s. Program elements include monitoring of baseline conditions to document natural variability in the ecosystem, assessments of operations and construction of new facilities, and ecological research focusing on wildlife habitat associations. Information management is centered around development of a computerized geographic information system that incorporates remote sensing and digital image processing technologies along with traditional relational data base management capabilities. The proactive program is one in which the initiative is to anticipate potential environmental concerns before they occur and, by utilizing in-house expertise, develop impact minimization or mitigation strategies to reduce environmental risk.

  1. Physical Measurement Profile at Gilgel Gibe Field Research Center ...

    African Journals Online (AJOL)

    Physical Measurement Profile at Gilgel Gibe Field Research Center, ... hip circumference in under 35 years and body mass index in under 45 year age groups were ... Comparison with findings in other parts of the world showed that Ethiopians ...

  2. Annual report of Tokyo Metropolitan Isotope Research Center, fiscal year 1994

    International Nuclear Information System (INIS)

    1995-01-01

    This Research Center was founded in 1959, and has carried out research, testing, and guidance in industry, agriculture and fishery, medical treatment, environmental preservation, and radiation protection, utilizing the merits of radiation according to the needs of the age, and has obtained good results. Recently, with the adoption of an ion accelerator, applications to advanced technology such as ion implantation have been attempted, and advanced measurement technology using PIXE and RBS has been developed. In fiscal year 1994, results were obtained in the research project 'Development of the method of evaluating the nobility of noble metal products by nondestructive inspection'. Research on the development of technology for utilizing the ion accelerator was also started. Research on making radiation breeding of Chinese cabbage efficient and the business of providing technical support for the activation of medium and small enterprises were advanced. The history, organization and budget of this Research Center, reports of research and investigation, safety control, publication of research, events and others are reported. (K.I.)

  3. Proposed Development of NASA Glenn Research Center's Aeronautical Network Research Simulator

    Science.gov (United States)

    Nguyen, Thanh C.; Kerczewski, Robert J.; Wargo, Chris A.; Kocin, Michael J.; Garcia, Manuel L.

    2004-01-01

    Accurate knowledge and understanding of the data link traffic loads that will have an impact on the underlying communications infrastructure within the National Airspace System (NAS) is of paramount importance for the planning, development, and fielding of future airborne and ground-based communications systems. Attempting to better understand this impact, NASA Glenn Research Center (GRC), through its contractor Computer Networks & Software, Inc. (CNS, Inc.), has developed an emulation and test facility known as the Virtual Aircraft and Controller (VAC) to study data link interactions and the capacity of the NAS to support Controller Pilot Data Link Communications (CPDLC) traffic. The drawback of the current VAC test bed is that it does not allow test personnel and researchers to present a real-world RF environment to a complex airborne or ground system. Fortunately, the United States Air Force and Navy Avionics Test Commands, through their contractor ViaSat, Inc., have developed the Joint Communications Simulator (JCS) to provide communications-band test and simulation capability for the RF spectrum through 18 GHz, including Communications, Navigation, Identification, and Surveillance functions. In this paper, we propose the development of a new and robust test bed that will leverage the capabilities and functionality of NASA GRC's existing VAC and the Air Force and Navy Commands' JCS. The proposed NASA Glenn Research Center Aeronautical Networks Research Simulator (ANRS) will combine current Air Traffic Control applications and physical RF stimulation into an integrated system capable of emulating data transmission behaviors including propagation delay, physical protocol delay, transmission failure, and channel interference. The ANRS will provide a simulation/stimulation tool and test bed environment that allows researchers to predict the performance of various aeronautical network protocol standards and their associated waveforms under varying

  4. University of Washington Center for Child Environmental Health Risks Research

    Data.gov (United States)

    Federal Laboratory Consortium — The theme of the University of Washington based Center for Child Environmental Health Risks Research (CHC) is understanding the biochemical, molecular and exposure...

  5. Energy Efficient Industrialized Housing Research Program, Center for Housing Innovation, University of Oregon and the Florida Solar Energy Center

    Energy Technology Data Exchange (ETDEWEB)

    Brown, G.Z.

    1990-01-01

    This research program addresses the need to increase the energy efficiency of industrialized housing. Two research centers have responsibility for the program: the Center for Housing Innovation at the University of Oregon and the Florida Solar Energy Center, a research institute of the University of Central Florida. The two organizations provide complementary architectural, systems engineering, and industrial engineering capabilities. In 1989 we worked on these tasks: (1) the formation of a steering committee, (2) the development of a multiyear research plan, (3) analysis of the US industrialized housing industry, (4) assessment of foreign technology, (5) assessment of industrial applications, (6) analysis of computerized design and evaluation tools, and (7) assessment of energy performance of baseline and advanced industrialized housing concepts. The current research program, under the guidance of a steering committee composed of industry and government representatives, focuses on three interdependent concerns -- (1) energy, (2) industrial process, and (3) housing design. Building homes in a factory offers the opportunity to increase energy efficiency through the use of new materials and processes, and to increase the value of these homes by improving the quality of their construction. Housing design strives to ensure that these technically advanced homes are marketable and will meet the needs of the people who will live in them.

  6. Critical Appraisal of Translational Research Models for Suitability in Performance Assessment of Cancer Centers

    NARCIS (Netherlands)

    Rajan, Abinaya; Sullivan, Richard; Bakker, Suzanne; van Harten, Willem H.

    2012-01-01

    Background. Translational research is a complex cumulative process that takes time. However, the operating environment for cancer centers engaged in translational research is now financially insecure. Centers are challenged to improve results and reduce time from discovery to practice innovations.

  7. Pinon-juniper management research at Corona Range and Livestock Research Center in Central New Mexico

    Science.gov (United States)

    Andres Cibils; Mark Petersen; Shad Cox; Michael Rubio

    2008-01-01

    Description: New Mexico State University's Corona Range and Livestock Research Center (CRLRC) is located in a pinon-juniper (PJ)/grassland ecotone in the southern Basin and Range Province in south central New Mexico. A number of research projects conducted at this facility revolve around soil, plant, livestock, and wildlife responses to PJ woodland management. The...

  8. Nuclear Research Center Karlsruhe, Central Safety Department. Annual report 1992

    International Nuclear Information System (INIS)

    Koelzer, W.

    1993-05-01

    The Central Safety Department is responsible for handling all problems of radiation protection, safety and security of the institutes and departments of the Karlsruhe Nuclear Research Center, for waste water activity measurements and environmental monitoring of the whole area of the Center, and for research and development work mainly focusing on nuclear safety and radiation protection measures. The research and development work concentrates on the following aspects: physical and chemical behavior of trace elements in the environment, biophysics of multicellular systems, behavior of tritium in the air/soil-plant system, and improvement in radiation protection measurement and personnel dosimetry. This report gives details of the different duties, indicates the results of routine tasks in 1992, and reports on the results of investigations and developments by the working groups of the Department. The reader is referred to the English translation of Chapter 1, which describes the duties and organization of the Central Safety Department. (orig.) [de

  9. Collaborative Aerospace Research and Fellowship Program at NASA Glenn Research Center

    Science.gov (United States)

    Heyward, Ann O.; Kankam, Mark D.

    2004-01-01

    During the summer of 2004, a 10-week activity for university faculty entitled the NASA-OAI Collaborative Aerospace Research and Fellowship Program (CFP) was conducted at the NASA Glenn Research Center in collaboration with the Ohio Aerospace Institute (OAI). This is a companion program to the highly successful NASA Faculty Fellowship Program and its predecessor, the NASA-ASEE Summer Faculty Fellowship Program, which operated for 38 years at Glenn. The objectives of CFP parallel those of its companion, viz., (1) to further the professional knowledge of qualified engineering and science faculty, (2) to stimulate an exchange of ideas between teaching participants and employees of NASA, (3) to enrich and refresh the research and teaching activities of participants' institutions, and (4) to contribute to the research objectives of Glenn. However, CFP, unlike the NASA program, permits faculty to be in residence for more than two summers and does not limit participation to United States citizens. Selected fellows spend 10 weeks at Glenn working on research problems in collaboration with NASA colleagues and participating in related activities of the NASA-ASEE program. This year's program began officially on June 1, 2004 and continued through August 7, 2004. Several fellows had program dates that differed from the official dates because university schedules vary and because some of the summer research projects warranted a time extension beyond the 10 weeks for satisfactory completion of the work. The stipend paid to the fellows was $1200 per week, and a relocation allowance of $1000 was paid to those living outside a 50-mile radius of the Center. In post-program surveys from this and previous years, the faculty cited numerous instances where participation in the program has led to new courses, new research projects, new laboratory experiments, and grants from NASA to continue the work initiated during the summer. Many of the fellows mentioned amplifying material, both in

  10. Renata Adler Memorial Research Center for Child Welfare and Protection, Tel-Aviv University

    Science.gov (United States)

    Ronen, Tammie

    2011-01-01

    The Renata Adler Memorial Research Center for Child Welfare and Protection operates within the Bob Shapell School of Social Work at Tel-Aviv University in Israel. The main aims of this research center are to facilitate study and knowledge about the welfare of children experiencing abuse or neglect or children at risk and to link such knowledge to…

  11. Virtual laboratory for fusion research in Japan

    International Nuclear Information System (INIS)

    Tsuda, K.; Nagayama, Y.; Yamamoto, T.; Horiuchi, R.; Ishiguro, S.; Takami, S.

    2008-01-01

    A virtual laboratory system for nuclear fusion research in Japan has been developed using SuperSINET, a super high-speed network operated by the National Institute of Informatics. Sixteen sites, including major Japanese universities, the Japan Atomic Energy Agency and the National Institute for Fusion Science (NIFS), were mutually connected to SuperSINET at a speed of 1 Gbps by the end of fiscal year 2006. Collaboration categories in this virtual laboratory are as follows: large helical device (LHD) remote participation; remote use of the supercomputer system; and the all-Japan ST (Spherical Tokamak) research program. This virtual laboratory is a closed network system, connected to the Internet through the NIFS firewall in order to maintain a high level of security. Collaborators at a remote station can control their diagnostic devices at LHD and analyze the LHD data as if they were in the LHD control room. Researchers at a remote station can use the NIFS supercomputer in the same environment as at NIFS. In this paper, we describe the technologies in detail and the present status of the virtual laboratory. Furthermore, items that should be developed in the near future are also described

  12. Efficient development of memory bounded geo-applications to scale on modern supercomputers

    Science.gov (United States)

    Räss, Ludovic; Omlin, Samuel; Licul, Aleksandar; Podladchikov, Yuri; Herman, Frédéric

    2016-04-01

    Numerical modeling is a key tool in the geosciences. The current challenge is to solve problems that are multi-physics and for which the length scale and the place of occurrence might not be known in advance. Moreover, the spatial extent of the investigated domain can vary strongly in size, ranging from millimeters for reactive transport to kilometers for glacier erosion dynamics. An efficient way to proceed is to develop simple but robust algorithms that perform well and scale on modern supercomputers, thereby permitting very high-resolution simulations. We propose an efficient approach to solve memory-bounded real-world applications on modern supercomputer architectures. We optimize the software to run on our newly acquired state-of-the-art GPU cluster "octopus". Our approach shows promising preliminary results on important geodynamical and geomechanical problems: we have developed a Stokes solver for glacier flow and a poromechanical solver including complex rheologies for nonlinear waves in stressed porous rocks. We solve the system of partial differential equations on a regular Cartesian grid and use an iterative finite difference scheme with preconditioning of the residuals. The MPI communication happens only locally (point-to-point); this method is known to scale linearly by construction. The "octopus" GPU cluster, which we use for the computations, has been designed to achieve maximal data transfer throughput at minimal hardware cost. It is composed of twenty compute nodes, each hosting four Nvidia Titan X GPU accelerators. These high-density nodes are interconnected with a parallel (dual-rail) FDR InfiniBand network. Our efforts show promising preliminary results for the different physics investigated: the glacier flow solver achieves good accuracy in the relevant benchmarks, and the coupled poromechanical solver explains previously unresolvable focused fluid flow as a natural outcome of the porosity setup. In both cases
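    The kind of damped iterative finite-difference scheme this record describes can be sketched minimally (a hypothetical 1-D diffusion example, not the authors' GPU code; in their 3-D solvers the boundary values would be exchanged between processes with point-to-point MPI messages, indicated here only by a comment):

```python
import numpy as np

def pseudo_transient_diffusion(n=101, tol=1e-6, max_iter=200_000):
    """Iterate d u/d tau = d2u/dx2 to its steady state,
    i.e. solve d2u/dx2 = 0 with u(0)=0, u(1)=1 by damped relaxation."""
    dx = 1.0 / (n - 1)
    u = np.zeros(n)
    u[-1] = 1.0                 # Dirichlet boundary values
    dtau = 0.4 * dx**2          # pseudo-time step, stability-limited
    for it in range(max_iter):
        # An MPI version would exchange halo cells with neighbours here.
        res = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2  # interior residual
        u[1:-1] += dtau * res   # relax toward the steady state
        if np.max(np.abs(res)) < tol:
            break
    return u, it

u, iters = pseudo_transient_diffusion()
# The discrete steady state is the linear profile u(x) = x.
```

    With the pseudo-time step held below the usual stability bound, the residual is driven to zero and the linear steady-state profile is recovered; only local (nearest-neighbour) data is ever needed, which is what makes such schemes scale.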

  13. Parallel supercomputing: Advanced methods, algorithms, and software for large-scale linear and nonlinear problems

    Energy Technology Data Exchange (ETDEWEB)

    Carey, G.F.; Young, D.M.

    1993-12-31

    The program outlined here is directed to research on methods, algorithms, and software for distributed parallel supercomputers. Of particular interest are finite element methods and finite difference methods together with sparse iterative solution schemes for scientific and engineering computations of very large-scale systems. Both linear and nonlinear problems will be investigated. In the nonlinear case, applications with bifurcation to multiple solutions will be considered using continuation strategies. The parallelizable numerical methods of particular interest are a family of partitioning schemes embracing domain decomposition, element-by-element strategies, and multi-level techniques. The methods will be further developed incorporating parallel iterative solution algorithms with associated preconditioners in parallel computer software. The schemes will be implemented on distributed memory parallel architectures such as the CRAY MPP, Intel Paragon, the NCUBE3, and the Connection Machine. We will also consider other new architectures such as the Kendall-Square (KSQ) and proposed machines such as the TERA. The applications will focus on large-scale three-dimensional nonlinear flow and reservoir problems with strong convective transport contributions. These are legitimate grand challenge class computational fluid dynamics (CFD) problems of significant practical interest to DOE. The methods and algorithms developed will, however, be of wider interest.
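    The preconditioned sparse iterative schemes this record refers to can be sketched under simplifying assumptions (dense storage, a Jacobi preconditioner, a single process; the program described would use sparse, distributed storage via domain decomposition):

```python
import numpy as np

def pcg(A, b, tol=1e-10, max_iter=500):
    """Conjugate gradients with a Jacobi (diagonal) preconditioner,
    for symmetric positive-definite A."""
    M_inv = 1.0 / np.diag(A)      # preconditioner: inverse of the diagonal
    x = np.zeros_like(b)
    r = b - A @ x                 # initial residual
    z = M_inv * r                 # preconditioned residual
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Example system: the SPD tridiagonal matrix of a 1-D Poisson problem,
# the kind of matrix finite element/difference discretizations produce.
n = 50
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = pcg(A, b)
```

    In the parallel setting described above, the matrix-vector product `A @ p` is the only step that needs inter-process communication, which is why partitioning schemes concentrate on it.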

  14. Managing environmental enhancement plans for individual research projects at a national primate research center.

    Science.gov (United States)

    Thom, Jinhee P; Crockett, Carolyn M

    2008-05-01

    We describe a method for managing environmental enhancement plans for individual research projects at a national primate research center where most monkeys are assigned to active research projects. The Psychological Well-being Program (PWB) at the University of Washington National Primate Research Center developed an Environmental Enhancement Plan form (EEPL) that allows PWB to quantify and track changes in enrichment allowances over time while ensuring that each animal is provided with as much enrichment as possible without compromising research. Very few projects involve restrictions on toys or perches. Some projects have restrictions on food treats and foraging, primarily involving the provision of these enrichments by research staff instead of husbandry staff. Restrictions are not considered exemptions unless they entirely prohibit an element of the University of Washington Environmental Enhancement Plan (UW EE Plan). All exemptions must be formally reviewed and approved by the institutional animal care and use committee. Most exemptions from elements of the UW EE Plan involve social housing. Between 2004 and 2006, the percentage of projects with no social contact restrictions increased by 1%, but those prohibiting any tactile social contact declined by 7%, and projects permitting tactile social contact during part of the study increased by 9%. The EEPL form has facilitated informing investigators about the enrichment their monkeys will receive if no restrictions or exemptions are requested and approved. The EEPL form also greatly enhances PWB's ability to coordinate the specific enrichment requirements of a project.

  15. MEGADOCK 4.0: an ultra-high-performance protein-protein docking software for heterogeneous supercomputers.

    Science.gov (United States)

    Ohue, Masahito; Shimoda, Takehiro; Suzuki, Shuji; Matsuzaki, Yuri; Ishida, Takashi; Akiyama, Yutaka

    2014-11-15

    The application of protein-protein docking in large-scale interactome analysis is a major challenge in structural bioinformatics and requires huge computing resources. In this work, we present MEGADOCK 4.0, an FFT-based docking software that makes extensive use of recent heterogeneous supercomputers and shows powerful, scalable performance of >97% strong scaling. MEGADOCK 4.0 is written in C++ with OpenMPI and NVIDIA CUDA 5.0 (or later) and is freely available to all academic and non-profit users at: http://www.bi.cs.titech.ac.jp/megadock. Contact: akiyama@cs.titech.ac.jp. Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press.
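    The core FFT trick behind grid-based docking of this kind (originating with the Katchalski-Katzir correlation method) can be sketched on hypothetical toy grids; the actual MEGADOCK score combines shape complementarity with electrostatics and desolvation terms, which this sketch omits:

```python
import numpy as np

def fft_correlation_scores(receptor, ligand):
    """Score every translational placement of one 3-D grid against
    another at once: score[t] = sum_r receptor[r] * ligand[r - t]
    (circular cross-correlation, computed in O(N log N) via FFT)."""
    R = np.fft.fftn(receptor)
    L = np.fft.fftn(ligand)
    return np.real(np.fft.ifftn(R * np.conj(L)))

# Hypothetical toy voxel grids standing in for discretized proteins.
rng = np.random.default_rng(0)
rec = rng.random((8, 8, 8))
lig = rng.random((8, 8, 8))
scores = fft_correlation_scores(rec, lig)
best_shift = np.unravel_index(np.argmax(scores), scores.shape)
```

    One FFT pair replaces an exhaustive loop over all N^3 translations, which is what makes the search cheap enough to distribute across the rotational samples on a heterogeneous cluster.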

  16. Research Priority Setting for Social Determinants of Health Research Center of Shahid Beheshti University of Medical Sciences in 2013

    Directory of Open Access Journals (Sweden)

    Mohammad-Reza Sohrabi

    2015-02-01

    Background and objective: It is obvious that, because of the lack of resources, we should devote our limited resources to priorities in order to reach an acceptable level of health. The objective of this study was to set research priorities for the Pediatric Surgery Research Center, with the participation of all stakeholders. Material and Methods: This is a Health System Research (HSR) project intended to apply governance and leadership principles with the participation of 41 people, including faculty members of the Pediatric Surgery Research Center of Shahid Beheshti Medical University, other pediatric specialists and health system stakeholders, as well as people associated with the health system inside and outside the university. It was performed in 2010 using the Council on Health Research for Development (COHRED) model with little change. Based on the model, the stakeholders were first identified and the situation of the pediatric surgery field was analyzed. Then, research areas and titles were specified and research priorities were set by scoring according to the criteria. Results: The seven resulting research areas, in priority order, are pediatric trauma, pediatric cancers, pediatric urology diseases, undescended testicles in children, developmental genetics and congenital defects, emergencies in children, and application of laparoscopic surgery in children. Because each research area is composed of multiple subareas, we ultimately specified 43 research subareas as research priorities. These subareas included epidemiology, risk factors, prevention, screening, diagnosis and treatment, as well as follow-up, complications, knowledge and attitudes of parents, quality of life, economic aspects, and a data bank for further research. Conclusion: In this project, research priorities were set for the Pediatric Surgery Research Center of Shahid Beheshti University of Medical Sciences, with the participation of all the stakeholders

  17. 34 CFR 350.1 - What is the Disability and Rehabilitation Research Projects and Centers Program?

    Science.gov (United States)

    2010-07-01

    ...) Rehabilitation Engineering Research Centers. (Authority: Sec. 204; 29 U.S.C. 762) ... 34 Education 2 2010-07-01 2010-07-01 false What is the Disability and Rehabilitation Research... DISABILITY AND REHABILITATION RESEARCH PROJECTS AND CENTERS PROGRAM General § 350.1 What is the Disability...

  18. Space Weather Forecasting and Research at the Community Coordinated Modeling Center

    Science.gov (United States)

    Aronne, M.

    2015-12-01

    The Space Weather Research Center (SWRC), within the Community Coordinated Modeling Center (CCMC), provides experimental research forecasts and analysis for NASA's robotic mission operators. Space weather conditions are monitored to provide advance warning and forecasts based on observations and modeling using the integrated Space Weather Analysis Network (iSWA). Space weather forecasters come from a variety of backgrounds, ranging from modelers to astrophysicists to undergraduate students. This presentation will discuss space weather operations and research from an undergraduate perspective. The Space Weather Research, Education, and Development Initiative (SW REDI) is the starting point for many undergraduate opportunities in space weather forecasting and research. Space weather analyst interns play an active role year-round as entry-level space weather analysts. Students develop the technical and professional skills to forecast space weather through a summer internship that includes a two-week space weather boot camp, mentorship, a poster session, and research opportunities. My research projects have included studying high-speed stream events as well as 20 historic, high-impact solar energetic particle events. This unique opportunity to combine daily real-time analysis with related research prepares students for future careers in Heliophysics.

  19. Molecular Science Research Center 1992 annual report

    Energy Technology Data Exchange (ETDEWEB)

    Knotek, M.L.

    1994-01-01

    The Molecular Science Research Center is a designated national user facility, available to scientists from universities, industry, and other national laboratories. After an opening section, which includes conferences hosted, appointments, and projects, this document presents progress in the following fields: chemical structure and dynamics; environmental dynamics and simulation; macromolecular structure and dynamics; materials and interfaces; theory, modeling, and simulation; and computing and information sciences. Appendices are included: MSRC staff and associates, 1992 publications and presentations, activities, and acronyms and abbreviations.

  20. Data Mining Supercomputing with SAS JMP® Genomics

    Directory of Open Access Journals (Sweden)

    Richard S. Segall

    2011-02-01

    JMP® Genomics is statistical discovery software that can uncover meaningful patterns in high-throughput genomics and proteomics data. JMP® Genomics is designed for biologists, biostatisticians, statistical geneticists, and those engaged in analyzing the vast stores of data that are common in genomic research (SAS, 2009). Data mining was performed using JMP® Genomics on two collections of microarray databases available from the National Center for Biotechnology Information (NCBI) for lung cancer and breast cancer. The Gene Expression Omnibus (GEO) of NCBI serves as a public repository for a wide range of high-throughput experimental data, including the two collections of lung cancer and breast cancer data that were used for this research. The results of applying data mining using JMP® Genomics are shown in this paper with numerous screen shots.
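    A first step in mining microarray collections like these is ranking genes by differential expression between groups. The sketch below illustrates the idea on synthetic data with a plain Welch t-statistic; it is not JMP® Genomics itself, and the matrices and group labels are hypothetical:

```python
import numpy as np

def welch_t(group_a, group_b):
    """Per-gene Welch t-statistics for two expression matrices
    (rows = genes, columns = samples)."""
    ma, mb = group_a.mean(axis=1), group_b.mean(axis=1)
    va = group_a.var(axis=1, ddof=1) / group_a.shape[1]
    vb = group_b.var(axis=1, ddof=1) / group_b.shape[1]
    return (ma - mb) / np.sqrt(va + vb)

# Synthetic stand-in for a GEO-style expression matrix: 100 genes,
# 10 tumor and 10 normal samples, with gene 0 truly up-regulated.
rng = np.random.default_rng(1)
tumor = rng.normal(0.0, 1.0, size=(100, 10))
normal = rng.normal(0.0, 1.0, size=(100, 10))
tumor[0] += 5.0
t = welch_t(tumor, normal)
top_gene = int(np.argmax(np.abs(t)))
```

    Real analyses would follow this with multiple-testing correction (e.g. false discovery rate control) before declaring any gene significant.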

  1. R&D Characteristics and Organizational Structure: Case Studies of University-Industry Research Centers

    Science.gov (United States)

    Hart, Maureen McArthur

    2013-01-01

    Within the past few decades, university-industry research centers have been developed in large numbers and emphasized as a valuable policy tool for innovation. Yet little is known about the heterogeneity of organizational structure within these centers, which has implications regarding policy for and management of these centers. This dissertation…

  2. NASA Glenn Research Center Experience with "LENR Phenomenon"

    Science.gov (United States)

    Wrbanek, Susan Y.; Fralick, Gustave C.; Wrbanek, John D.; Niedra, Janis M.

    2012-01-01

    Since 1989 NASA Glenn Research Center (GRC) has performed some small-scale limited experiments that show evidence of effects claimed by some to be evidence of Low Energy Nuclear Reactions (LENR). The research at GRC has involved observations and work on measurement techniques for observing the temperature effects in reactions of isotopes of hydrogen with palladium hydrides. The various experiments performed involved loading Pd with gaseous H2 and D2, and exposing Pd thin films to multi-bubble sonoluminescence in regular and deuterated water. An overview of these experiments and their results will be presented.

  4. Researchers studying alternative to bladder removal for bladder cancer patients | Center for Cancer Research

    Science.gov (United States)

    A new phase I clinical trial conducted by researchers at the Center for Cancer Research (CCR) is evaluating the safety and tolerability, or the degree to which any side effects can be tolerated by patients, of a two-drug combination as a potential alternative to bladder removal for bladder cancer patients. The trial targets patients with non-muscle invasive bladder cancer (NMIBC) whose cancers have stopped responding to traditional therapies.

  5. Twenty-fifth anniversary of the Juelich Nuclear Research Center

    International Nuclear Information System (INIS)

    Haefele, W.

    1982-01-01

    On December 10, 1981, KFA Juelich celebrated its 25th year of existence; on December 11, 1956, the land parliament of North Rhine Westphalia had decided in favour of the erection of a joint nuclear research facility of the land of North Rhine Westphalia. In contrast to other nuclear research centers, the Juelich center was to develop and operate large-scale research equipment and infrastructure for joint use by the universities of the land. This cooperation has remained an important characteristic in spite of the independent scientific work of KFA institutes, Federal government majorities, and changes in research fields and tasks. KFA does fundamental research in nuclear and plasma physics, solid state research, medicine, life sciences, and environmental research; other activities are R&D tasks for the HTR reactor and its specific applications as well as energy research in general. (orig.) [de

  6. Re:Centering Adult Education Research: Whose World Is First?

    Science.gov (United States)

    Hall, Budd L.

    1993-01-01

    The discourse of adult education research needs to be reframed to place at the center the issues and concerns of the majority of the world's people who live in poverty, ill health, and insecurity and at the margins the concerns of the rich and powerful. (SK)

  7. Effluent Monitoring System Design for the Proton Accelerator Research Center of PEFP

    International Nuclear Information System (INIS)

    Kim, Jun Yeon; Mun, Kyeong Jun; Cho, Jang Hyung; Jo, Jeong Hee

    2010-01-01

    Since Gyeong-ju city was selected as the host site in January 2006, the design of the Proton Accelerator Research Center needed to be revised to reflect the host site characteristics and several other conditions. The IAC also recommended maximizing space utilization and saving construction costs. After the General Arrangement (GA) was decided, it was necessary to evaluate the radiation conditions of every controlled area in the proton accelerator research center, such as the accelerator tunnel, klystron gallery, beam experimental hall, target rooms and ion beam application building, to keep dose rates below the ALARA (As Low As Reasonably Achievable) objective. Our staff reviewed these areas and produced a shielding design for them. In this paper, according to accelerator operation modes and access conditions based on the radiation analysis and shielding design, we present the exhaust system configuration for the controlled areas in the proton accelerator research center. We also installed radiation monitors and set their alarm values for each radiation area

  8. A 5-year scientometric analysis of research centers affiliated to Tehran University of Medical Sciences

    Science.gov (United States)

    Yazdani, Kamran; Rahimi-Movaghar, Afarin; Nedjat, Saharnaz; Ghalichi, Leila; Khalili, Malahat

    2015-01-01

    Background: Since Tehran University of Medical Sciences (TUMS) has the oldest and the highest number of research centers among all Iranian medical universities, this study was conducted to evaluate the scientific output of research centers affiliated with TUMS using scientometric indices and to identify the affecting factors. Moreover, a number of scientometric indicators were introduced. Methods: This cross-sectional study was performed to evaluate the 5-year scientific performance of the research centers of TUMS. Data were collected through questionnaires, annual evaluation reports of the Ministry of Health, and the Scopus database. We used appropriate measures of central tendency and variation for descriptive analyses. Moreover, uni- and multi-variable linear regression were used to evaluate the effect of independent factors on the scientific output of the centers. Results: The medians of the numbers of papers and books during the 5-year period were 150.5 and 2.5, respectively. The median of "articles per researcher" was 19.1. Based on multiple linear regression, younger center age (p=0.001), having a separate budget line (p=0.016), and the number of research personnel (p<0.001) had a direct significant correlation with the number of articles, while real properties had a reverse significant correlation with it (p=0.004). Conclusion: The results can help policy makers and research managers allocate sufficient resources to improve the current situation of the centers. Newly adopted and effective scientometric indices are suggested for evaluating the scientific outputs and functions of these centers. PMID:26157724

  9. The development of a clinical outcomes survey research application: Assessment Center.

    Science.gov (United States)

    Gershon, Richard; Rothrock, Nan E; Hanrahan, Rachel T; Jansky, Liz J; Harniss, Mark; Riley, William

    2010-06-01

    The National Institutes of Health sponsored Patient-Reported Outcome Measurement Information System (PROMIS) aimed to create item banks and computerized adaptive tests (CATs) across multiple domains for individuals with a range of chronic diseases. Web-based software was created to enable a researcher to create study-specific Websites that could administer PROMIS CATs and other instruments to research participants or clinical samples. This paper outlines the process used to develop a user-friendly, free, Web-based resource (Assessment Center) for storage, retrieval, organization, sharing, and administration of patient-reported outcomes (PRO) instruments. Joint Application Design (JAD) sessions were conducted with representatives from numerous institutions in order to supply a general wish list of features. Use Cases were then written to ensure that end user expectations matched programmer specifications. Program development included daily programmer "scrum" sessions, weekly Usability Acceptability Testing (UAT) and continuous Quality Assurance (QA) activities pre- and post-release. Assessment Center includes features that promote instrument development including item histories, data management, and storage of statistical analysis results. This case study of software development highlights the collection and incorporation of user input throughout the development process. Potential future applications of Assessment Center in clinical research are discussed.
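    The computerized adaptive tests (CATs) mentioned in this record select each next item from a calibrated bank based on the respondent's running trait estimate. A minimal illustrative sketch under a simple Rasch model follows (hypothetical item bank and simulated respondent; PROMIS CATs actually use graded-response IRT models and more sophisticated estimation):

```python
import math
import random

def rasch_p(theta, b):
    """Probability of endorsing an item of difficulty b under the Rasch model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def cat_session(item_bank, respond, n_items=5):
    """Minimal adaptive loop: administer the most informative remaining
    item, record the response, re-estimate theta by a coarse grid search."""
    theta, administered, responses = 0.0, [], []
    remaining = list(item_bank)
    for _ in range(n_items):
        # Rasch item information p(1-p) peaks where b ~ theta,
        # so pick the item whose difficulty is closest to the estimate.
        item = min(remaining, key=lambda b: abs(b - theta))
        remaining.remove(item)
        administered.append(item)
        responses.append(respond(item))
        # Maximum-likelihood estimate of theta on a fixed grid.
        grid = [g / 10.0 for g in range(-40, 41)]
        def loglik(t):
            return sum(
                math.log(rasch_p(t, b) if x else 1.0 - rasch_p(t, b))
                for b, x in zip(administered, responses)
            )
        theta = max(grid, key=loglik)
    return theta, administered

# Simulated respondent with true theta = 1.5 answering probabilistically.
rng = random.Random(42)
bank = [i / 4.0 for i in range(-8, 9)]   # difficulties -2.0 ... 2.0
theta_hat, used = cat_session(bank, lambda b: rng.random() < rasch_p(1.5, b))
```

    Because each item is chosen near the current estimate, a CAT can reach a target measurement precision with far fewer items than a fixed-length form.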

  10. Bibliography of Lewis Research Center Technical Publications announced in 1991

    Science.gov (United States)

    1992-01-01

    This compilation of abstracts describes and indexes the technical reporting that resulted from the scientific and engineering work performed and managed by the Lewis Research Center in 1991. All the publications were announced in the 1991 issues of STAR (Scientific and Technical Aerospace Reports) and/or IAA (International Aerospace Abstracts). Included are research reports, journal articles, conference presentations, patents and patent applications, and theses.

  11. The Amistad Research Center: Documenting the African American Experience.

    Science.gov (United States)

    Chepesiuk, Ron

    1993-01-01

    Describes the Amistad Research Center housed at Tulane University which is a repository of primary documents on African-American history. Topics addressed include the development and growth of the collection; inclusion of the American Missionary Association archives; sources of support; civil rights; and collecting for the future. (LRW)

  12. Does Every Research Library Need a Digital Humanities Center?

    Science.gov (United States)

    Schaffner, Jennifer; Erway, Ricky

    2014-01-01

    The digital humanities (DH) are attracting considerable attention and funding at the same time that this nascent field is striving for an identity. Some research libraries are making significant investments by creating digital humanities centers. However, questions about whether such investments are warranted persist across the cultural heritage…

  13. Computing at the leading edge: Research in the energy sciences

    Energy Technology Data Exchange (ETDEWEB)

    Mirin, A.A.; Van Dyke, P.T. [eds.

    1994-02-01

    The purpose of this publication is to highlight selected scientific challenges that have been undertaken by the DOE Energy Research community. The high quality of the research reflected in these contributions underscores the growing importance both to the Grand Challenge scientific efforts sponsored by DOE and of the related supporting technologies that the National Energy Research Supercomputer Center (NERSC) and other facilities are able to provide. The continued improvement of the computing resources available to DOE scientists is prerequisite to ensuring their future progress in solving the Grand Challenges. Titles of articles included in this publication include: the numerical tokamak project; static and animated molecular views of a tumorigenic chemical bound to DNA; toward a high-performance climate systems model; modeling molecular processes in the environment; lattice Boltzmann models for flow in porous media; parallel algorithms for modeling superconductors; parallel computing at the Superconducting Super Collider Laboratory; the advanced combustion modeling environment; adaptive methodologies for computational fluid dynamics; lattice simulations of quantum chromodynamics; simulating high-intensity charged-particle beams for the design of high-power accelerators; electronic structure and phase stability of random alloys.

  14. Computing at the leading edge: Research in the energy sciences

    International Nuclear Information System (INIS)

    Mirin, A.A.; Van Dyke, P.T.

    1994-01-01

    The purpose of this publication is to highlight selected scientific challenges that have been undertaken by the DOE Energy Research community. The high quality of the research reflected in these contributions underscores the growing importance both to the Grand Challenge scientific efforts sponsored by DOE and of the related supporting technologies that the National Energy Research Supercomputer Center (NERSC) and other facilities are able to provide. The continued improvement of the computing resources available to DOE scientists is prerequisite to ensuring their future progress in solving the Grand Challenges. Titles of articles included in this publication include: the numerical tokamak project; static and animated molecular views of a tumorigenic chemical bound to DNA; toward a high-performance climate systems model; modeling molecular processes in the environment; lattice Boltzmann models for flow in porous media; parallel algorithms for modeling superconductors; parallel computing at the Superconducting Super Collider Laboratory; the advanced combustion modeling environment; adaptive methodologies for computational fluid dynamics; lattice simulations of quantum chromodynamics; simulating high-intensity charged-particle beams for the design of high-power accelerators; electronic structure and phase stability of random alloys

  15. Radiation protection at the Cadarache research center

    International Nuclear Information System (INIS)

    Anon.

    2015-01-01

    This article recalls the French law on radiation protection and its evolution due to the implementation of the 2013/59-EURATOM directive, which separates the advisory missions from the more operational missions of the person appointed as 'competent in radiation protection'. The organisation of radiation protection at the Cadarache research center is presented. The issue of sub-contracting and the maintenance of an adequate standard of radiation protection is detailed, since two facilities operated by AREVA are being dismantled on the site. (A.C.)

  16. Atomic, Nuclear and Molecular Research Center CICANUM

    International Nuclear Information System (INIS)

    Loria Meneses, Luis Guillermo

    2011-01-01

    CICANUM has a Gamma Spectroscopy Laboratory, which has been the official laboratory appointed by the Ministerio de Agricultura in Costa Rica to analyze export products (for human and animal consumption) and to determine radioactive contamination. The laboratory has four systems using germanium detectors and Canberra technology, including the Genie 2000 software, to establish the activity of cesium, iodine, and natural gamma emitters in solid or liquid samples of food products, sediments, and rocks. This laboratory belongs to the Universidad de Costa Rica, which has different institutes and research centers

  17. Michael F. Crowley | NREL

    Science.gov (United States)

    Research at the Pittsburgh Supercomputing Center, and at The Scripps Research Institute with David Case and Charles Brooks; NIH-funded collaboration with Professor Charles L. Brooks III at the University of Michigan; modeling to determine the source of twisting in hydrogen bonding patterns. This modeling will help to understand

  18. Argonne Leadership Computing Facility 2011 annual report : Shaping future supercomputing.

    Energy Technology Data Exchange (ETDEWEB)

    Papka, M.; Messina, P.; Coffey, R.; Drugan, C. (LCF)

    2012-08-16

    The ALCF's Early Science Program aims to prepare key applications for the architecture and scale of Mira and to solidify libraries and infrastructure that will pave the way for other future production applications. Two billion core-hours have been allocated to 16 Early Science projects on Mira. The projects, in addition to promising delivery of exciting new science, are all based on state-of-the-art, petascale, parallel applications. The project teams, in collaboration with ALCF staff and IBM, have undertaken intensive efforts to adapt their software to take advantage of Mira's Blue Gene/Q architecture, which, in a number of ways, is a precursor to future high-performance-computing architecture. The Argonne Leadership Computing Facility (ALCF) enables transformative science that solves some of the most difficult challenges in biology, chemistry, energy, climate, materials, physics, and other scientific realms. Users partnering with ALCF staff have reached research milestones previously unattainable, due to the ALCF's world-class supercomputing resources and expertise in computational science. In 2011, the ALCF's commitment to providing outstanding science and leadership-class resources was honored with several prestigious awards. Research on multiscale brain blood flow simulations was named a Gordon Bell Prize finalist. Intrepid, the ALCF's BG/P system, ranked No. 1 on the Graph 500 list for the second consecutive year. The next-generation BG/Q prototype again topped the Green500 list. Skilled experts at the ALCF enable researchers to conduct breakthrough science on the Blue Gene system in key ways. The Catalyst Team matches project PIs with experienced computational scientists to maximize and accelerate research in their specific scientific domains. The Performance Engineering Team facilitates the effective use of applications on the Blue Gene system by assessing and improving the algorithms used by applications and the techniques used to

  19. High Performance Computing in Science and Engineering '98 : Transactions of the High Performance Computing Center

    CERN Document Server

    Jäger, Willi

    1999-01-01

    The book contains reports about the most significant projects from science and industry that are using the supercomputers of the Federal High Performance Computing Center Stuttgart (HLRS). These projects are from different scientific disciplines, with a focus on engineering, physics and chemistry. They were carefully selected in a peer-review process and are showcases for an innovative combination of state-of-the-art physical modeling, novel algorithms and the use of leading-edge parallel computer technology. As HLRS is in close cooperation with industrial companies, special emphasis has been put on the industrial relevance of results and methods.

  20. Small UAS Test Area at NASA's Dryden Flight Research Center

    Science.gov (United States)

    Bauer, Jeffrey T.

    2008-01-01

    This viewgraph presentation reviews the areas that Dryden Flight Research Center has set up for testing small Unmanned Aerial Systems (UAS). It also reviews the requirements and the process for using an area for UAS tests.

  1. The status of shielding research at Tajoura research center

    International Nuclear Information System (INIS)

    El-Bakkoush, F.A.

    2005-01-01

    This paper describes the shielding research activities that have been carried out by the radiation shielding group at the Tajoura Research Center. These include the design of different types of concrete shields made from local aggregates that have suitable radiation attenuation properties: ordinary concrete (with density ρ = 2.3 ton/m3), heavyweight concrete (with density ρ = 3.6 ton/m3), and heat-resistant concrete with aggregates containing bound-in water. Investigations have been carried out by measuring the neutron and gamma-ray spectra transmitted through barriers of different thicknesses. These measurements were performed using a collimated beam of reactor neutrons and gamma rays transmitted from horizontal channel no. 1 of the Tajoura research reactor, which has a maximum operating power of 10 MW. The transmitted fast-neutron and gamma spectra were measured by a neutron-gamma spectrometer employing an NE-213 liquid organic scintillator. Discrimination against undesired neutron or gamma-ray pulses was achieved by a pulse shape discrimination method based on differences in the shape of the decay part of the emitted pulses. The obtained results are presented in the form of neutron and gamma spectra measured behind different thicknesses of the investigated concrete shields. These spectra were used to derive the macroscopic cross sections at different energies for the materials under investigation
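    The charge-comparison flavor of pulse shape discrimination described above can be sketched in a few lines: neutron pulses in NE-213 carry a larger slow scintillation component, so their tail-to-total charge ratio sits above that of gamma pulses, and a single threshold separates the two populations. The pulse model, time constants, and integration window below are illustrative assumptions, not parameters of the Tajoura spectrometer.

```python
"""Toy charge-comparison pulse shape discrimination (PSD) for NE-213."""
import numpy as np

def psd_ratio(pulse, tail_start):
    """Tail-to-total charge ratio of a sampled pulse."""
    return pulse[tail_start:].sum() / pulse.sum()

def make_pulse(t, fast_frac, tau_fast=3.0, tau_slow=30.0):
    """Toy two-exponential scintillation pulse (arbitrary units, times in ns)."""
    return fast_frac * np.exp(-t / tau_fast) + (1 - fast_frac) * np.exp(-t / tau_slow)

t = np.arange(0.0, 200.0, 1.0)          # sample times, ns
gamma = make_pulse(t, fast_frac=0.95)   # gammas: mostly fast component
neutron = make_pulse(t, fast_frac=0.75) # neutrons: larger slow component

r_gamma = psd_ratio(gamma, tail_start=20)
r_neutron = psd_ratio(neutron, tail_start=20)
# Neutron pulses carry more charge in the tail, so r_neutron > r_gamma;
# a threshold placed between the two populations classifies each pulse.
```

    In a real measurement the two ratios form two bands versus pulse height, and the discrimination threshold is drawn in the valley between them.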

  2. High Power MPD Thruster Development at the NASA Glenn Research Center

    Science.gov (United States)

    LaPointe, Michael R.; Mikellides, Pavlos G.; Reddy, Dhanireddy (Technical Monitor)

    2001-01-01

    Propulsion requirements for large platform orbit raising, cargo and piloted planetary missions, and robotic deep space exploration have rekindled interest in the development and deployment of high power electromagnetic thrusters. Magnetoplasmadynamic (MPD) thrusters can effectively process megawatts of power over a broad range of specific impulse values to meet these diverse in-space propulsion requirements. As NASA's lead center for electric propulsion, the Glenn Research Center has established an MW-class pulsed thruster test facility and is refurbishing a high-power steady-state facility to design, build, and test efficient gas-fed MPD thrusters. A complementary numerical modeling effort based on the robust MACH2 code provides a well-balanced program of numerical analysis and experimental validation leading to improved high power MPD thruster performance. This paper reviews the current and planned experimental facilities and numerical modeling capabilities at the Glenn Research Center and outlines program plans for the development of new, efficient high power MPD thrusters.

  3. Energy Frontier Research Center, Center for Materials Science of Nuclear Fuels

    Energy Technology Data Exchange (ETDEWEB)

    Todd R. Allen, Director

    2011-04-01

    The Office of Science, Basic Energy Sciences, has funded the INL as one of the Energy Frontier Research Centers in the area of material science of nuclear fuels. This document is the required annual report to the Office of Science that outlines the accomplishments for the period of May 2010 through April 2011. The aim of the Center for Material Science of Nuclear Fuels (CMSNF) is to establish the foundation for predictive understanding of the effects of irradiation-induced defects on thermal transport in oxide nuclear fuels. The science driver of the center’s investigation is to understand how complex defects and microstructures affect phonon-mediated thermal transport in UO2, and to achieve this understanding for the particular case of irradiation-induced defects and microstructures. The center’s research thus includes modeling and measurement of thermal transport in oxide fuels with different levels of impurities, lattice disorder, and irradiation-induced microstructure, as well as theoretical and experimental investigation of the evolution of disorder, stoichiometry, and microstructure in nuclear fuel under irradiation. With the premise that thermal transport in irradiated UO2 is a phonon-mediated energy transport process in a crystalline material with defects and microstructure, a step-by-step approach will be utilized to understand the effects of different types of defects and microstructures on the collective phonon dynamics in irradiated UO2. Our efforts under the thermal transport thrust involved both measurement of diffusive phonon transport (an approach that integrates over the entire phonon spectrum) and spectroscopic measurements of phonon attenuation/lifetime and phonon dispersion. Our distinct experimental efforts dovetail with our modeling effort involving atomistic simulation of phonon transport and prediction of lattice thermal conductivity using the Boltzmann transport framework.
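    In its simplest gray (single-mode) form, the Boltzmann transport framework mentioned above reduces to the phonon-gas kinetic formula k = (1/3)·C_v·v·Λ, with irradiation-induced defects entering through Matthiessen's rule for the phonon mean free path Λ. A minimal sketch, with all numerical values invented for illustration rather than taken from UO2 measurements:

```python
def lattice_thermal_conductivity(c_v, v_g, mfp):
    """Gray-approximation phonon-gas estimate: k = (1/3) * C_v * v_g * mfp.

    c_v : volumetric heat capacity, J/(m^3 K)
    v_g : average phonon group velocity, m/s
    mfp : phonon mean free path, m
    Returns k in W/(m K).
    """
    return c_v * v_g * mfp / 3.0

def defect_limited_mfp(mfp_intrinsic, mfp_defect):
    """Matthiessen's rule: scattering rates (inverse mean free paths) add."""
    return 1.0 / (1.0 / mfp_intrinsic + 1.0 / mfp_defect)

# Illustrative numbers only (not measured UO2 values):
c_v = 2.0e6            # volumetric heat capacity, J/(m^3 K)
v_g = 3000.0           # average group velocity, m/s
mfp_pristine = 10e-9   # 10 nm intrinsic mean free path
mfp_irradiated = defect_limited_mfp(mfp_pristine, 5e-9)  # extra defect scattering

k_pristine = lattice_thermal_conductivity(c_v, v_g, mfp_pristine)
k_irradiated = lattice_thermal_conductivity(c_v, v_g, mfp_irradiated)
# Defect scattering shortens the mean free path and lowers k, which is the
# qualitative effect of irradiation-induced defects the center studies.
```

    A full Boltzmann-transport calculation sums this kinetic expression mode by mode over the phonon spectrum rather than using a single averaged mode.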

  4. Combining density functional theory calculations, supercomputing, and data-driven methods to design new materials (Conference Presentation)

    Science.gov (United States)

    Jain, Anubhav

    2017-04-01

    Density functional theory (DFT) simulations solve for the electronic structure of materials starting from the Schrödinger equation. Many case studies have now demonstrated that researchers can often use DFT to design new compounds in the computer (e.g., for batteries, catalysts, and hydrogen storage) before synthesis and characterization in the lab. In this talk, I will focus on how DFT calculations can be executed on large supercomputing resources in order to generate very large data sets on new materials for functional applications. First, I will briefly describe the Materials Project, an effort at LBNL that has virtually characterized over 60,000 materials using DFT and has shared the results with over 17,000 registered users. Next, I will talk about how such data can help discover new materials, describing how preliminary computational screening led to the identification and confirmation of a new family of bulk AMX2 thermoelectric compounds with measured zT reaching 0.8. I will outline future plans for how such data-driven methods can be used to better understand the factors that control thermoelectric behavior, e.g., for the rational design of electronic band structures, in ways that are different from conventional approaches.
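    The preliminary computational screening described above amounts to filtering a large table of DFT-computed descriptors against property windows. A toy sketch, assuming hypothetical AMX2 candidates with invented property values and thresholds (a real study would query DFT-computed data such as the Materials Project instead):

```python
# Toy materials-screening filter: rank hypothetical AMX2 candidates by
# computed descriptors.  Formulas, values, and cutoffs are invented.
candidates = [
    {"formula": "AMX2-a", "band_gap_eV": 0.1, "e_above_hull_meV_atom": 5},
    {"formula": "AMX2-b", "band_gap_eV": 0.6, "e_above_hull_meV_atom": 12},
    {"formula": "AMX2-c", "band_gap_eV": 0.4, "e_above_hull_meV_atom": 80},
    {"formula": "AMX2-d", "band_gap_eV": 1.8, "e_above_hull_meV_atom": 3},
]

def screen(materials, gap_range=(0.3, 1.0), max_hull=30):
    """Keep candidates with a moderate band gap (favorable for
    thermoelectrics) that also sit near the convex hull (likely
    synthesizable)."""
    return [
        m for m in materials
        if gap_range[0] <= m["band_gap_eV"] <= gap_range[1]
        and m["e_above_hull_meV_atom"] <= max_hull
    ]

hits = screen(candidates)
# Only AMX2-b passes both filters in this toy data set; in practice such a
# shortlist is then handed off for synthesis and characterization.
```

    The value of the approach is that the same filter runs unchanged over tens of thousands of computed entries, which is what turns a supercomputing-scale DFT data set into a short experimental to-do list.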

  5. Progress report of Cekmece Nuclear Research and Training Center for 1980

    International Nuclear Information System (INIS)

    1982-01-01

    Presented are the research works carried out in 1980 in Physics, Chemistry, Nuclear engineering, Radiobiology, Reactor operation and reactor enlargement, Health physics, Radioisotope production, Electronics, Industrial applications of radioisotopes, Nuclear fuel technology, Technical services, Construction control, Publication and documentation, and the Training division of the Cekmece Nuclear Research and Training Center

  6. Large scale computing in the Energy Research Programs

    International Nuclear Information System (INIS)

    1991-05-01

    The Energy Research Supercomputer Users Group (ERSUG) comprises all investigators using resources of the Department of Energy Office of Energy Research supercomputers. At the December 1989 meeting held at Florida State University (FSU), the ERSUG executive committee determined that the continuing rapid advances in computational sciences and computer technology demanded a reassessment of the role computational science should play in meeting DOE's commitments. Initial studies were to be performed for four subdivisions: (1) Basic Energy Sciences (BES) and Applied Mathematical Sciences (AMS), (2) Fusion Energy, (3) High Energy and Nuclear Physics, and (4) Health and Environmental Research. The first two subgroups produced formal subreports that provided a basis for several sections of this report. Additional information provided in the AMS/BES subreport is included as Appendix C in an abridged form that eliminates most duplication. Additionally, each member of the executive committee was asked to contribute area-specific assessments; these assessments are included in the next section. In the following sections, brief assessments are given for specific areas, a conceptual model is proposed in which the entire computational effort for energy research is best viewed as one giant nationwide computer, and then specific recommendations are made for the appropriate evolution of the system

  7. Bibliography of Lewis Research Center technical publications announced in 1990

    Science.gov (United States)

    1991-01-01

    This compilation of abstracts describes and indexes the technical reporting that resulted from the scientific and engineering work performed and managed by the Lewis Research Center in 1990. All the publications were announced in the 1990 issues of STAR (Scientific and Technical Aerospace Reports) and/or IAA (International Aerospace Abstracts). Included are research reports, journal articles, conference presentations, patents and patent applications, and theses.

  8. Bibliography of Lewis Research Center technical publications announced in 1992

    Science.gov (United States)

    1993-01-01

    This compilation of abstracts describes and indexes the technical reporting that resulted from the scientific and engineering work performed and managed by the Lewis Research Center in 1992. All the publications were announced in the 1992 issues of STAR (Scientific and Technical Aerospace Reports) and/or IAA (International Aerospace Abstracts). Included are research reports, journal articles, conference presentations, patents and patent applications, and theses.

  9. Bibliography of Lewis Research Center technical publications announced in 1993

    Science.gov (United States)

    1994-01-01

    This compilation of abstracts describes and indexes the technical reporting that resulted from the scientific and engineering work performed and managed by the Lewis Research Center in 1993. All the publications were announced in the 1993 issues of STAR (Scientific and Technical Aerospace Reports) and/or IAA (International Aerospace Abstracts). Included are research reports, journal articles, conference presentations, patents and patent applications, and theses.

  10. Bibliography of Lewis Research Center technical publications announced in 1989

    Science.gov (United States)

    1990-01-01

    This compilation of abstracts describes and indexes the technical reporting that resulted from the scientific and engineering work performed and managed by the Lewis Research Center in 1989. All the publications were announced in the 1989 issues of STAR (Scientific and Technical Aerospace Reports) and/or IAA (International Aerospace Abstracts). Included are research reports, journal articles, conference presentations, patents and patent applications, and theses.

  11. NASA Langley Research Center tethered balloon systems

    Science.gov (United States)

    Owens, Thomas L.; Storey, Richard W.; Youngbluth, Otto

    1987-01-01

    The NASA Langley Research Center tethered balloon system operations are covered in this report for the period of 1979 through 1983. Meteorological data, ozone concentrations, and other data were obtained from in situ measurements. The large tethered balloon had a lifting capability of 30 kilograms to 2500 meters. The report includes descriptions of the various components of the balloon systems such as the balloons, the sensors, the electronics, and the hardware. Several photographs of the system are included as well as a list of projects including the types of data gathered.

  12. Ethics and Regulatory Challenges and Opportunities in Patient-Centered Comparative Effectiveness Research.

    Science.gov (United States)

    Sugarman, Jeremy

    2016-04-01

    The Affordable Care Act includes provisions for the conduct of large-scale, patient-centered comparative effectiveness research. Such efforts aim toward the laudable moral goal of having evidence to improve health care decision making. Nevertheless, these pragmatic clinical research efforts that typically pose minimal incremental risk and are enmeshed in routine care settings perhaps surprisingly encounter an array of ethics and regulatory challenges and opportunities for academic health centers. An emphasis on patient-centeredness forces an examination of the appropriateness of traditional methods used to protect the rights, interests, and welfare of participants. At the same time, meaningful collaboration with patients throughout the research process also necessitates ensuring that novel approaches to research (including recruitment and consent) entail necessary protections regarding such issues as privacy. As the scientific and logistical aspects of this research are being developed, substantial attention is being focused on the accompanying ethics and regulatory issues that have emerged, which should help to facilitate ethically appropriate research in a variety of contexts.

  13. [The Engineering and Technical Services Directorate at the Glenn Research Center

    Science.gov (United States)

    Moon, James

    2004-01-01

    My name is James Moon and I am a senior at Tennessee State University, where my major is Aeronautical and Industrial Technology with a concentration in industrial electronics. I am currently serving my internship in the Engineering and Technical Services Directorate at the Glenn Research Center (GRC). The Engineering and Technical Services Directorate provides the services and infrastructure for the Glenn Research Center to take research concepts to reality. It provides a full range of integrated services including engineering, advanced prototyping and testing, facility management, and information technology for NASA, industry, and academia. Engineering and Technical Services contains the core knowledge in Information Technology (IT). This includes data systems and analysis, internet- and intranet-based systems design, and data security, as well as the design and development of embedded real-time software applications for flight and supporting ground systems. Engineering and Technical Services provides a wide range of IT services and products specific to the Glenn Research Center research and engineering community. In the 7000 Directorate I work directly in the 7611 organization. This organization is known as the Aviation Environments Technical Branch. My mentor is Vincent Satterwhite, who is also the Branch Chief of the Aviation Environments Technical Branch. In this branch, I serve as the assistant program manager of the Engineering Technology Program. The Engineering Technology Program (ETP) is one of three components of the High School L.E.R.C.I.P. This is an Agency-sponsored, eight-week, research-based apprenticeship program designed to attract traditionally underrepresented high school students who demonstrate an aptitude for and interest in mathematics, science, engineering, and technology.

  14. Magnetic fusion energy and computers. The role of computing in magnetic fusion energy research and development (second edition)

    International Nuclear Information System (INIS)

    1983-01-01

    This report documents the structure and uses of the MFE Network and presents a compilation of future computing requirements. Its primary emphasis is on the role of supercomputers in fusion research. One of its key findings is that with the introduction of each successive class of supercomputer, qualitatively improved understanding of fusion processes has been gained. At the same time, even the current Class VI machines severely limit the attainable realism of computer models. Many important problems will require the introduction of Class VII or even larger machines before they can be successfully attacked

  15. UC Merced Center for Computational Biology Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Colvin, Michael; Watanabe, Masakatsu

    2010-11-30

    made possible by the CCB from its inception until August, 2010, at the end of the final extension. Although DOE support for the center ended in August 2010, the CCB will continue to exist and support its original objectives. The research and academic programs fostered by the CCB have led to additional extramural funding from other agencies, and we anticipate that the CCB will continue to provide support for the quantitative and computational biology program at UC Merced for many years to come. Since its inception in fall 2004, CCB research projects have involved continuous multi-institutional collaboration with Lawrence Livermore National Laboratory (LLNL) and the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign, as well as individual collaborators at other sites. CCB-affiliated faculty cover a broad range of computational and mathematical research, including molecular modeling, cell biology, applied math, evolutionary biology, bioinformatics, etc. The CCB sponsored the first distinguished speaker series at UC Merced, which had an important role in spreading the word about the computational biology emphasis at this new campus. One of the CCB's original goals is to help train a new generation of biologists who bridge the gap between the computational and life sciences. To achieve this goal, by summer 2006 a new summer undergraduate internship program had been established under the CCB to train researchers in highly mathematical and computationally intensive biological science. By the end of summer 2010, 44 undergraduate students had gone through this program. Of those participants, 11 students have been admitted to graduate schools and 10 more students are interested in pursuing graduate studies in the sciences. The center is also continuing to facilitate the development and dissemination of undergraduate and graduate course materials based on the latest research in computational biology.

  16. [Standardization of the terminology of the academic medical centers and biomedical research centers, in the English language, for journal article sending].

    Science.gov (United States)

    Hochman, Bernardo; Locali, Rafael Fagionato; Oliveira Filho, Renato Santos de; Oliveira, Ricardo Leão de; Goldenberg, Saul; Ferreira, Lydia Masako

    2006-01-01

    To suggest a standardization, in the English language, of the formatting of citations of research centers. From the three most recent publications of each of the first 20 journals available in the Brazilian Portal of Scientific Information - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) - with the highest impact factor during 2004, according to information in the ISI Web of Knowledge Journal Citation Reports database for the 2004-2005 biennium, the formats of citations of the research centers were extracted. An analogy to the institutional hierarchy of the Federal University of Sao Paulo (UNIFESP) was carried out, and the most frequent formats in the English language were adopted as the standard to be suggested for citing research centers when submitting articles. For the citation "Departamento", the standard "Department of ..." was adopted (where "..." is the name of the department in English); for "Programa de Pós-Graduação", "... Program"; for "Disciplina", "Division of ..."; for "Órgãos, Grupos e Associações", "... Group"; for "Setor", "Section of ..."; for "Centro", "Center for ..."; for "Unidade", "... Unit"; for "Instituto", "Institute of ..."; for "Laboratório", "Laboratory of ..."; and for "Grupo", "Group of ...".

  17. The Wetland and Aquatic Research Center strategic science plan

    Science.gov (United States)

    ,

    2017-02-02

    IntroductionThe U.S. Geological Survey (USGS) Wetland and Aquatic Research Center (WARC) has two primary locations (Gainesville, Florida, and Lafayette, Louisiana) and field stations throughout the southeastern United States and Caribbean. WARC’s roots are in U.S. Fish and Wildlife Service (USFWS) and National Park Service research units that were brought into the USGS as the Biological Research Division in 1996. Founded in 2015, WARC was created from the merger of two long-standing USGS biology science Centers—the Southeast Ecological Science Center and the National Wetlands Research Center—to bring together expertise in biology, ecology, landscape science, geospatial applications, and decision support in order to address issues nationally and internationally. WARC scientists apply their expertise to a variety of wetland and aquatic research and monitoring issues that require coordinated, integrated efforts to better understand natural environments. By increasing basic understanding of the biology of important species and broader ecological and physiological processes, this research provides information to policymakers and aids managers in their stewardship of natural resources and in regulatory functions.This strategic science plan (SSP) was developed to guide WARC research during the next 5–10 years in support of Department of the Interior (DOI) partnering bureaus such as the USFWS, the National Park Service, and the Bureau of Ocean Energy Management, as well as other Federal, State, and local natural resource management agencies. The SSP demonstrates the alignment of the WARC goals with the USGS mission areas, associated programs, and other DOI initiatives. The SSP is necessary for workforce planning and, as such, will be used as a guide for future needs for personnel. The SSP also will be instrumental in developing internal funding priorities and in promoting WARC’s capabilities to both external cooperators and other groups within the USGS.

  18. The Brain Takes Center Stage at 2014 NIH Research Festival | Poster

    Science.gov (United States)

    By Andrea Frydl, Contributing Writer The 2014 NIH Research Festival, Sept. 22–24, focused on the human brain for two very specific reasons: to coincide with the White House BRAIN Initiative and to highlight the John Edward Porter Neuroscience Research Center, which opened earlier this year on the NIH campus.

  19. NASA's Climate in a Box: Desktop Supercomputing for Open Scientific Model Development

    Science.gov (United States)

    Wojcik, G. S.; Seablom, M. S.; Lee, T. J.; McConaughy, G. R.; Syed, R.; Oloso, A.; Kemp, E. M.; Greenseid, J.; Smith, R.

    2009-12-01

    NASA's High Performance Computing Portfolio, in cooperation with its Modeling, Analysis, and Prediction program, intends to make its climate and earth science models more accessible to a larger community. A key goal of this effort is to open the model development and validation process to the scientific community at large such that a natural selection process is enabled and results in a more efficient scientific process. One obstacle to others using NASA models is the complexity of the models and the difficulty in learning how to use them. This situation applies not only to scientists who regularly use these models but also to non-typical users who may want to use the models, such as scientists from different domains, policy makers, and teachers. Another obstacle to the use of these models is that access to the high performance computing (HPC) accounts on which the models are run can be restrictive, with long wait times in job queues and delays caused by an arduous process of obtaining an account, especially for foreign nationals. This project explores the utility of desktop supercomputers in providing a complete ready-to-use toolkit of climate research products to investigators and on-demand access to an HPC system. One objective of this work is to pre-package NASA and NOAA models so that new users will not have to spend significant time porting the models. In addition, the prepackaged toolkit will include tools, such as workflow, visualization, social networking web sites, and analysis tools, to assist users in running the models and analyzing the data. The system architecture to be developed will allow for automatic code updates for each user and an effective means of dealing with the data that are generated. We plan to investigate several desktop systems, but our work to date has focused on a Cray CX1. Currently, we are investigating the potential capabilities of several non-traditional development environments. While most NASA and NOAA models are

  20. Applied wind energy research at the National Wind Technology Center

    International Nuclear Information System (INIS)

    Robinson, M.C.; Tu, P.

    1997-01-01

    Applied research activities currently being undertaken at the National Wind Technology Center, part of the National Renewable Energy Laboratory, in the United States, are divided into several technical disciplines. An integrated multi-disciplinary approach is urged for the future in order to evaluate advanced turbine designs. The risk associated with any new turbine development program can thus be mitigated through the provision of the advanced technology, analysis tools and innovative designs available at the Center, and wind power can be promoted as a viable renewable energy alternative. (UK)

  1. Center for Ecotoxicological Research of Montenegro

    International Nuclear Information System (INIS)

    Vucinic, Z.

    2006-01-01

    PI Center for Ecotoxicological Research of Montenegro (CETI) was founded in 1996, in accordance with Government policy, for the purposes of: uniting the problems of protecting the environment in one institution; organizing the monitoring of all segments of the environment (air, water, soil, waste, ionizing and non-ionizing radiation, noise measurements, etc.); organizing the control of human and animal food and toxicological analyses of all kinds of samples, forensic analyses, etc.; and concentrating the expensive instrumental equipment and human resources in one institution. December 1996: CETI founded by decision of the Montenegrin government. 1997: CETI started with the acquisition of equipment and the education of staff. March 1998: officially started work and the realization of its programs. September 2004: obtained the ISO 9001:2000 certificate, followed by accreditation under ISO/IEC 17025 in November 2004. Organisation scheme of CETI: Laboratory for Ecotoxicological Research and Radiation Protection (I. Department for Laboratory Diagnostics and Monitoring; II. Department for Radiation Protection and Monitoring); Sector for Administration (Department for Economy; Department for Administration). The total number of permanent staff is 63

  2. Translational Partnership Development Lead | Center for Cancer Research

    Science.gov (United States)

    PROGRAM DESCRIPTION The Frederick National Laboratory for Cancer Research (FNLCR) is a Federally Funded Research and Development Center operated by Leidos Biomedical Research, Inc on behalf of the National Cancer Institute (NCI). The staff of FNLCR support the NCI’s mission in the fight against cancer and HIV/AIDS. Currently we are seeking a Translational Partnership Development Lead (TPDL) who will work closely with the Office of Translational Resources (OTR) within the Office of the Director (OD) of NCI’s Center for Cancer Research (CCR) to facilitate the successful translation of CCR’s basic and preclinical research advances into new therapeutics and diagnostics. The TPDL will be strategically aligned within FNLCR’s Partnership Development Office (PDO), to maximally leverage the critical mass of expertise available within the PDO. CCR comprises the basic and clinical components of the NCI’s Intramural Research Program (IRP) and consists of ~230 basic and clinical Investigators located at either the NIH main campus in Bethesda or the NCI-Frederick campus. CCR Investigators are focused primarily on cancer and HIV/AIDS, with special emphasis on the most challenging and important high-risk/high-reward problems driving the fields. (See https://ccr.cancer.gov for a full delineation of CCR Investigators and their research activities.) The process of developing research findings into new clinical applications is high risk, complex, variable, and requires multiple areas of expertise seldom available within the confines of a single Investigator’s laboratory. To accelerate this process, OTR serves as a unifying force within CCR for all aspects of translational activities required to achieve success and maintain timely progress. A key aspect of OTR’s function is to develop and strengthen essential communications and collaborations within NIH, with extramural partners and with industry to bring together experts in chemistry, human subjects research

  3. Climate Change and Vector Borne Diseases on NASA Langley Research Center

    Science.gov (United States)

    Cole, Stuart K.; DeYoung, Russell J.; Shepanek, Marc A.; Kamel, Ahmed

    2014-01-01

    Increasing global temperatures, weather patterns with above-average storm intensities, and higher sea levels have been identified as phenomena associated with global climate change. As a causal system, climate change could contribute to vector-borne diseases in humans. Vectors of concern originating in the vicinity of Langley Research Center include mosquitoes and ticks that transmit diseases originating regionally, nationwide, or from outside the US. Recognizing that vector-borne diseases propagate under changing climatic conditions, and understanding the conditions in which they may exist or spread, presents opportunities for monitoring their progress and mitigating their potential impacts through communication, continued monitoring, and adaptation. Because personnel provide direct and fundamental support to NASA mission success, a continuous and improved understanding of climatic conditions, and of the diseases that result from them, helps to reduce risk in terrestrial space technologies, ground operations, and space research. This research addresses climatic conditions that promote environments conducive to the increase of disease vectors. The investigation includes evaluation of local mosquito population counts and rainfall data for statistical correlation, and identification of planning recommendations unique to LaRC and other NASA Centers to assess adaptation approaches and Center-level planning strategies.

  4. Flow Cytometry Technician | Center for Cancer Research

    Science.gov (United States)

    PROGRAM DESCRIPTION The Basic Science Program (BSP) pursues independent, multidisciplinary research in basic and applied molecular biology, immunology, retrovirology, cancer biology, and human genetics. Research efforts and support are an integral part of the Center for Cancer Research (CCR) at the Frederick National Laboratory for Cancer Research (FNLCR). KEY ROLES/RESPONSIBILITIES The Flow Cytometry Core (Flow Core) of the Cancer and Inflammation Program (CIP) is a service core which supports the research efforts of the CCR by providing expertise in the field of flow cytometry (using analyzers and sorters) with the goal of gaining a more thorough understanding of the biology of cancer and cancer cells. The Flow Core provides service to 12-15 CIP laboratories and more than 22 non-CIP laboratories. Flow Core staff provide technical advice on the experimental design of applications, which include immunological phenotyping, cell function assays, and cell cycle analysis. Work is performed per customer requirements, and no independent research is involved. The Flow Cytometry Technician will be responsible for: monitoring the performance of and maintaining high-dimensional flow cytometer analyzers and cell sorters; operating high-dimensional flow cytometer analyzers and cell sorters; monitoring lab supply levels, ordering lab supplies, and performing various record-keeping responsibilities; and assisting in the training of scientific end users on the use of flow cytometry in their research, as well as on how to operate and troubleshoot the bench-top analyzer instruments. Experience with sterile technique and tissue culture is required.

  5. Scholarly Citadel in Chicago: The Center for Research Libraries.

    Science.gov (United States)

    Boylan, Ray

    1979-01-01

    The Center provides access to infrequently used research materials in three interrelated ways: (1) it provides a deposit library for such materials from the collections of member libraries; (2) it acquires such materials at members' shared expense and for their common use; and (3) it provides rapid access to its collection materials. (Author/JD)

  6. Primary Care Research in the Patient-Centered Outcomes Research Institute's Portfolio.

    Science.gov (United States)

    Selby, Joe V; Slutsky, Jean R

    2016-04-01

    In their article in this issue, Mazur and colleagues analyze the characteristics of early recipients of funding from the Patient-Centered Outcomes Research Institute (PCORI). Mazur and colleagues note correctly that PCORI has a unique purpose and mission and suggest that it should therefore have a distinct portfolio of researchers and departments when compared with other funders such as the National Institutes of Health (NIH). Responding on behalf of PCORI, the authors of this Commentary agree with the characterization of PCORI's mission as distinct from that of NIH and others. They agree too that data found on PCORI's Web site demonstrate that PCORI's portfolio of researchers and departments is more diverse and more heavily populated with clinician researchers, as would be expected. The authors take issue with Mazur and colleagues' suggestion that because half of clinical visits occur within primary care settings, half of PCORI's funded research should be based in primary care departments. PCORI's portfolio reflects what patients and others tell PCORI are the critical questions. Many of these do, in fact, occur with more complex conditions in specialty care. The authors question whether the research of primary care departments is too narrowly focused and whether it sufficiently considers study of these complex conditions. Research on more complex conditions including heart failure, coronary artery disease, and multiple comorbid conditions could be highly valuable when approached from the primary care perspective, where many of the comparative effectiveness questions first arise.

  7. Recipients of Regional Centers of Research Excellence (RCREs) P20 Grant Awards Announced

    Science.gov (United States)

    NCI's Center for Global Health (CGH) announced awards for applications representing novel global collaborations charged with planning and designing sustainable Regional Centers of Research Excellence (RCREs) for non-communicable diseases, including cancer, in low- and middle-income countries (LMICs) or regions.

  8. ADVANCED COMPOSITES TECHNOLOGY CASE STUDY AT NASA LANGLEY RESEARCH CENTER

    Science.gov (United States)

    This report summarizes work conducted at the National Aeronautics and Space Administration's Langley Research Center (NASA-LaRC) in Hampton, VA, under the U.S. Environmental Protection Agency’s (EPA) Waste Reduction Evaluations at Federal Sites (WREAFS) Program. Support for...

  9. The Rise of Federally Funded Research and Development Centers

    Energy Technology Data Exchange (ETDEWEB)

    DALE,BRUCE C.; MOY,TIMOTHY D.

    2000-09-01

    Federally funded research and development centers (FFRDCs) are a unique class of research and development (R and D) facilities that share aspects of private and public ownership. Some FFRDCs have been praised as national treasures, but FFRDCs have also been the focus of much criticism through the years. This paper traces the history of FFRDCs through four periods: (1) the World War II era, which saw the birth of federal R and D centers that would eventually become FFRDCs; (2) the early Cold War period, which exhibited a proliferation of FFRDCs despite their unclear legislative status and growing tension with an increasingly capable and assertive defense industry; (3) the re-evaluation and retrenchment of FFRDCs in the 1960s and early 1970s, which resulted in a dramatic decline in the number of FFRDCs; and (4) the definition and codification of the FFRDC entity in the late 1970s and 1980s, when Congress and the executive branch worked together to formalize regulations to control FFRDCs. The paper concludes with observations on the status of FFRDCs at the end of the twentieth century.

  10. Sixth NASA Glenn Research Center Propulsion Control and Diagnostics (PCD) Workshop

    Science.gov (United States)

    Litt, Jonathan S. (Compiler)

    2018-01-01

    The Intelligent Control and Autonomy Branch at NASA Glenn Research Center hosted the Sixth Propulsion Control and Diagnostics Workshop on August 22-24, 2017. The objectives of this workshop were to disseminate information about research being performed in support of NASA Aeronautics programs; get feedback from peers on the research; and identify opportunities for collaboration. There were presentations and posters by NASA researchers, Department of Defense representatives, and engine manufacturers on aspects of turbine engine modeling, control, and diagnostics.

  11. Academic Centers and/as Industrial Consortia: US Microelectronics Research 1976-2016

    NARCIS (Netherlands)

    Mody, Cyrus C.M.

    2017-01-01

    In the U.S., in the late 1970s and early 1980s, academic research centers that were tightly linked to the semiconductor industry began to proliferate – at exactly the same time as the first academic start-up companies in biotech, and slightly before the first U.S. industrial semiconductor research

  12. Energy Frontier Research Centers: A View from Senior EFRC Representatives (2011 EFRC Summit, panel session)

    International Nuclear Information System (INIS)

    Drell, Persis; Armstrong, Neal; Carter, Emily; DePaolo, Don; Gunnoe, Brent

    2011-01-01

    A distinguished panel of scientists from the EFRC community provide their perspective on the importance of EFRCs for addressing critical energy needs at the 2011 EFRC Summit. Persis Drell, Director at SLAC, served as moderator. Panel members are Neal Armstrong (Director of the Center for Interface Science: Solar Electric Materials, led by the University of Arizona), Emily Carter (Co-Director of the Combustion EFRC, led by Princeton University. She is also Team Leader of the Heterogeneous Functional Materials Center, led by the University of South Carolina), Don DePaolo (Director of the Center for Nanoscale Control of Geologic CO2, led by LBNL), and Brent Gunnoe (Director of the Center for Catalytic Hydrocarbon Functionalization, led by the University of Virginia). The 2011 EFRC Summit and Forum brought together the EFRC community and science and policy leaders from universities, national laboratories, industry and government to discuss 'Science for our Nation's Energy Future.' In August 2009, the Office of Science established 46 Energy Frontier Research Centers. The EFRCs are collaborative research efforts intended to accelerate high-risk, high-reward fundamental research, the scientific basis for transformative energy technologies of the future. These Centers involve universities, national laboratories, nonprofit organizations, and for-profit firms, singly or in partnerships, selected by scientific peer review. They are funded at $2 to $5 million per year for a total planned DOE commitment of $777 million over the initial five-year award period, pending Congressional appropriations. These integrated, multi-investigator Centers are conducting fundamental research focusing on one or more of several 'grand challenges' and use-inspired 'basic research needs' recently identified in major strategic planning efforts by the scientific community. The purpose of the EFRCs is to integrate the talents and expertise of leading scientists in a setting designed to accelerate

  13. Innovation in Flight: Research of the NASA Langley Research Center on Revolutionary Advanced Concepts for Aeronautics

    Science.gov (United States)

    Chambers, Joseph R.

    2005-01-01

    The goal of this publication is to provide an overview of the topic of revolutionary research in aeronautics at Langley, including many examples of research efforts that offer significant potential benefits, but have not yet been applied. The discussion also includes an overview of how innovation and creativity is stimulated within the Center, and a perspective on the future of innovation. The documentation of this topic, especially the scope and experiences of the example research activities covered, is intended to provide background information for future researchers.

  14. Implementation of a virtual link between power system testbeds at Marshall Spaceflight Center and Lewis Research Center

    Science.gov (United States)

    Doreswamy, Rajiv

    1990-01-01

    The Marshall Space Flight Center (MSFC) owns and operates a space station module power management and distribution (SSM-PMAD) testbed. This system, managed by expert systems, is used to analyze and develop power system automation techniques for Space Station Freedom. The Lewis Research Center (LeRC), Cleveland, Ohio, has developed and implemented a space station electrical power system (EPS) testbed. This system and its power management controller are representative of the overall Space Station Freedom power system. A virtual link is being implemented between the testbeds at MSFC and LeRC. This link would enable configuration of SSM-PMAD as a load center for the EPS testbed at LeRC. This connection will add to the versatility of both systems, and provide an environment of enhanced realism for operation of both testbeds.

  15. Computational fluid dynamics: complex flows requiring supercomputers. January 1975-July 1988 (Citations from the INSPEC: Information Services for the Physics and Engineering Communities data base). Report for January 1975-July 1988

    International Nuclear Information System (INIS)

    1988-08-01

    This bibliography contains citations concerning computational fluid dynamics (CFD), a new method in computational science to perform complex flow simulations in three dimensions. Applications include aerodynamic design and analysis for aircraft, rockets, missiles, and automobiles; heat-transfer studies; and combustion processes. Included are references to supercomputers, array processors, and parallel processors where needed for complete, integrated design. Also included are software packages and grid-generation techniques required to apply CFD numerical solutions. Numerical methods for fluid dynamics not requiring supercomputers are found in a separate published search. (Contains 83 citations fully indexed and including a title list.)

  16. Performance Characteristics of Hybrid MPI/OpenMP Scientific Applications on a Large-Scale Multithreaded BlueGene/Q Supercomputer

    KAUST Repository

    Wu, Xingfu; Taylor, Valerie

    2013-01-01

    In this paper, we investigate the performance characteristics of five hybrid MPI/OpenMP scientific applications (two NAS Parallel Benchmarks Multi-Zone codes, SP-MZ and BT-MZ; an earthquake simulation, PEQdyna; an aerospace application, PMLB; and a 3D particle-in-cell application, GTC) on a large-scale multithreaded Blue Gene/Q supercomputer at Argonne National Laboratory, and quantify the performance gap resulting from using different numbers of threads per node. We use performance tools and MPI profile and trace libraries available on the supercomputer to analyze and compare the performance of these hybrid scientific applications as the number of OpenMP threads per node increases, and find that increasing the number of threads beyond some point saturates or worsens performance of these hybrid applications. For the strong-scaling hybrid scientific applications such as SP-MZ, BT-MZ, PEQdyna and PMLB, using 32 threads per node results in much better application efficiency than using 64 threads per node; as the number of threads per node increases, the FPU (Floating Point Unit) percentage decreases, while the MPI percentage (except for PMLB) and IPC (instructions per cycle) per core (except for BT-MZ) increase. For the weak-scaling hybrid scientific application GTC, the performance trend (relative speedup) is very similar with increasing numbers of threads per node regardless of how many nodes (32, 128, 512) are used. © 2013 IEEE.
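    The saturation the abstract describes, where doubling threads per node from 32 to 64 gains little for strong-scaling codes, can be sketched with a simple Amdahl's-law calculation. This is a generic illustration under an assumed parallel fraction of 0.95, not a model or measurement from the paper:

    ```c
    #include <assert.h>
    #include <stdio.h>

    /* Amdahl's-law speedup for a code with parallel fraction p on n threads.
     * The parallel fraction used below is a hypothetical value chosen for
     * illustration, not one measured from the paper's applications. */
    static double amdahl_speedup(double p, int n) {
        return 1.0 / ((1.0 - p) + p / (double)n);
    }

    int main(void) {
        double p = 0.95;                     /* assumed parallel fraction */
        double s32 = amdahl_speedup(p, 32);
        double s64 = amdahl_speedup(p, 64);

        /* Doubling threads per node yields far less than a 2x gain. */
        printf("32 threads: %.1fx, 64 threads: %.1fx\n", s32, s64);
        assert(s64 / s32 < 1.3); /* diminishing returns from 32 to 64 */
        return 0;
    }
    ```

    Under this assumption the 32-to-64-thread step improves speedup by well under 30 percent, consistent in spirit with the paper's observation that 32 threads per node can be the more efficient configuration.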

  17. Performance Characteristics of Hybrid MPI/OpenMP Scientific Applications on a Large-Scale Multithreaded BlueGene/Q Supercomputer

    KAUST Repository

    Wu, Xingfu

    2013-07-01

    In this paper, we investigate the performance characteristics of five hybrid MPI/OpenMP scientific applications (two NAS Parallel Benchmarks Multi-Zone codes, SP-MZ and BT-MZ; an earthquake simulation, PEQdyna; an aerospace application, PMLB; and a 3D particle-in-cell application, GTC) on a large-scale multithreaded Blue Gene/Q supercomputer at Argonne National Laboratory, and quantify the performance gap resulting from using different numbers of threads per node. We use performance tools and MPI profile and trace libraries available on the supercomputer to analyze and compare the performance of these hybrid scientific applications as the number of OpenMP threads per node increases, and find that increasing the number of threads beyond some point saturates or worsens performance of these hybrid applications. For the strong-scaling hybrid scientific applications such as SP-MZ, BT-MZ, PEQdyna and PMLB, using 32 threads per node results in much better application efficiency than using 64 threads per node; as the number of threads per node increases, the FPU (Floating Point Unit) percentage decreases, while the MPI percentage (except for PMLB) and IPC (instructions per cycle) per core (except for BT-MZ) increase. For the weak-scaling hybrid scientific application GTC, the performance trend (relative speedup) is very similar with increasing numbers of threads per node regardless of how many nodes (32, 128, 512) are used. © 2013 IEEE.

  18. NASA University Research Centers Technical Advances in Education, Aeronautics, Space, Autonomy, Earth and Environment

    Science.gov (United States)

    Jamshidi, M. (Editor); Lumia, R. (Editor); Tunstel, E., Jr. (Editor); White, B. (Editor); Malone, J. (Editor); Sakimoto, P. (Editor)

    1997-01-01

    This first volume of the Autonomous Control Engineering (ACE) Center Press Series on NASA University Research Centers' (URCs') Advanced Technologies on Space Exploration and National Service constitutes a report on the research papers and presentations delivered by NASA installations, industry, and NASA's fourteen URCs at the First National Conference in Albuquerque, New Mexico, February 16-19, 1997.

  19. PROCEEDINGS OF RIKEN BNL RESEARCH CENTER WORKSHOP, VOLUME 66

    International Nuclear Information System (INIS)

    OGAWA, A.

    2005-01-01

    The RIKEN BNL Research Center (RBRC) was established in April 1997 at Brookhaven National Laboratory. It is funded by Rikagaku Kenkyusho (RIKEN, The Institute of Physical and Chemical Research) of Japan. The Center is dedicated to the study of strong interactions, including spin physics, lattice QCD, and RHIC physics, through the nurturing of a new generation of young physicists. The RBRC has both a theory and an experimental component. At present the theoretical group has 4 Fellows and 3 Research Associates as well as 11 RHIC Physics/University Fellows (academic year 2003-2004). To date there are approximately 30 graduates from the program, of which 13 have attained tenured positions at major institutions worldwide. The experimental group is smaller and has 2 Fellows, 3 RHIC Physics/University Fellows, and 3 Research Associates, and historically 6 individuals have attained permanent positions. Beginning in 2001 a new RIKEN Spin Program (RSP) category was implemented at RBRC. These appointments are joint positions of RBRC and RIKEN and include the following positions in theory and experiment: RSP Researchers, RSP Research Associates, and Young Researchers, who are mentored by senior RBRC scientists. A number of RIKEN Jr. Research Associates and Visiting Scientists also contribute to the physics program at the Center. RBRC has an active workshop program on strong interaction physics, with each workshop focused on a specific physics problem. Each workshop speaker is encouraged to select a few of the most important transparencies from his or her presentation, accompanied by a page of explanation. This material is collected at the end of the workshop by the organizer to form proceedings, which can therefore be available within a short time. To date there are sixty-nine proceedings volumes available. The construction of a 0.6 teraflops parallel processor, dedicated to lattice QCD, begun at the Center on February 19, 1998, was completed on August 28, 1998 and is still

  20. Massachusetts Institute of Technology, Plasma Fusion Center, Technical Research Programs

    International Nuclear Information System (INIS)

    1980-08-01

    A review is given of the technical programs carried out by the Plasma Fusion Center. The major divisions of work areas are applied plasma research, confinement experiments, fusion technology and engineering, and fusion systems. Some objectives and results of each program are described.