WorldWideScience

Sample records for future supercomputer environments

  1. Supercomputing - Use Cases, Advances, The Future (2/2)

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    Supercomputing has become a staple of science and the poster child for aggressive developments in silicon technology, energy efficiency and programming. In this series we examine the key components of supercomputing setups and the various advances – recent and past – that made headlines and delivered bigger and bigger machines. We also take a closer look at the future prospects of supercomputing, and the extent of its overlap with high throughput computing, in the context of main use cases ranging from oil exploration to market simulation. On the second day, we will focus on software and software paradigms driving supercomputers, workloads that need supercomputing treatment, advances in technology and possible future developments. Lecturer's short bio: Andrzej Nowak has 10 years of experience in computing technologies, primarily from CERN openlab and Intel. At CERN, he managed a research lab collaborating with Intel and was part of the openlab Chief Technology Office. Andrzej also worked closely and i...

  2. Cooperative visualization and simulation in a supercomputer environment

    International Nuclear Information System (INIS)

    Ruehle, R.; Lang, U.; Wierse, A.

    1993-01-01

    The article takes a closer look at the requirements imposed by the goal of integrating all the components into a homogeneous software environment. To this end, several methods for distributing applications according to problem type are discussed. The methods currently available at the University of Stuttgart Computer Center for the distribution of applications are then explained. Finally, the aims and characteristics of a European-sponsored project called PAGEIN are described, which fits squarely into the line of developments at RUS. The aim of the project is to experiment with future cooperative working modes of aerospace scientists in a high-speed distributed supercomputing environment. Project results will have an impact on the development of real future scientific application environments. (orig./DG)

  3. Supercomputing - Use Cases, Advances, The Future (1/2)

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    Supercomputing has become a staple of science and the poster child for aggressive developments in silicon technology, energy efficiency and programming. In this series we examine the key components of supercomputing setups and the various advances – recent and past – that made headlines and delivered bigger and bigger machines. We also take a closer look at the future prospects of supercomputing, and the extent of its overlap with high throughput computing, in the context of main use cases ranging from oil exploration to market simulation. On the first day, we will focus on the history and theory of supercomputing, the top500 list and the hardware that makes supercomputers tick. Lecturer's short bio: Andrzej Nowak has 10 years of experience in computing technologies, primarily from CERN openlab and Intel. At CERN, he managed a research lab collaborating with Intel and was part of the openlab Chief Technology Office. Andrzej also worked closely and initiated projects with the private sector (e.g. HP an...

  4. Advanced parallel processing with supercomputer architectures

    International Nuclear Information System (INIS)

    Hwang, K.

    1987-01-01

    This paper investigates advanced parallel processing techniques and innovative hardware/software architectures that can be applied to boost the performance of supercomputers. Critical issues in architectural choices, parallel languages, compiling techniques, resource management, concurrency control, programming environments, parallel algorithms, and performance enhancement methods are examined and the best answers are presented. The authors cover advanced processing techniques suitable for supercomputers, high-end mainframes, minisupers, and array processors. The coverage emphasizes vectorization, multitasking, multiprocessing, and distributed computing. In order to achieve these operation modes, parallel languages, smart compilers, synchronization mechanisms, load-balancing methods, the mapping of parallel algorithms, operating system functions, application libraries, and multidiscipline interactions are investigated to ensure high performance. At the end, they assess the potential of optical and neural technologies for developing future supercomputers.

  5. The ETA10 supercomputer system

    International Nuclear Information System (INIS)

    Swanson, C.D.

    1987-01-01

    The ETA Systems, Inc. ETA 10 is a next-generation supercomputer featuring multiprocessing, a large hierarchical memory system, high performance input/output, and network support for both batch and interactive processing. Advanced technology used in the ETA 10 includes liquid nitrogen cooled CMOS logic with 20,000 gates per chip, a single printed circuit board for each CPU, and high density static and dynamic MOS memory chips. Software for the ETA 10 includes an underlying kernel that supports multiple user environments, a new ETA FORTRAN compiler with an advanced automatic vectorizer, a multitasking library, and debugging tools. Possible developments for future supercomputers from ETA Systems are discussed. (orig.)

  6. Supercomputers and the future of computational atomic scattering physics

    International Nuclear Information System (INIS)

    Younger, S.M.

    1989-01-01

    The advent of the supercomputer has opened new vistas for the computational atomic physicist. Problems of hitherto unparalleled complexity are now being examined using these new machines, and important connections with other fields of physics are being established. This talk briefly reviews some of the most important trends in computational scattering physics and suggests some exciting possibilities for the future. 7 refs., 2 figs

  7. KAUST Supercomputing Laboratory

    KAUST Repository

    Bailey, April Renee

    2011-11-15

    KAUST has partnered with IBM to establish a Supercomputing Research Center. KAUST is hosting the Shaheen supercomputer, named after the Arabian falcon famed for its swiftness of flight. This 16-rack IBM Blue Gene/P system is equipped with 4 gigabytes of memory per node and is capable of 222 teraflops, making the KAUST campus the site of one of the world's fastest supercomputers in an academic environment. KAUST is targeting petaflop capability within 3 years.
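
    Assuming the standard Blue Gene/P configuration of 1,024 compute nodes per rack, four 850 MHz PowerPC 450 cores per node, and four floating-point operations per core per cycle (details not stated in the record above), the quoted peak figure can be checked with a quick back-of-the-envelope calculation:

    ```latex
    R_{\mathrm{peak}} = 16~\text{racks} \times 1024~\tfrac{\text{nodes}}{\text{rack}} \times 4~\tfrac{\text{cores}}{\text{node}} \times 4~\tfrac{\text{flops}}{\text{cycle}} \times 0.85\times 10^{9}~\tfrac{\text{cycles}}{\text{s}} \approx 2.23\times 10^{14}~\tfrac{\text{flop}}{\text{s}} \approx 222.8~\text{Tflop/s}
    ```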

  8. KAUST Supercomputing Laboratory

    KAUST Repository

    Bailey, April Renee; Kaushik, Dinesh; Winfer, Andrew

    2011-01-01

    KAUST has partnered with IBM to establish a Supercomputing Research Center. KAUST is hosting the Shaheen supercomputer, named after the Arabian falcon famed for its swiftness of flight. This 16-rack IBM Blue Gene/P system is equipped with 4 gigabytes of memory per node and is capable of 222 teraflops, making the KAUST campus the site of one of the world's fastest supercomputers in an academic environment. KAUST is targeting petaflop capability within 3 years.

  9. The ETA systems plans for supercomputers

    International Nuclear Information System (INIS)

    Swanson, C.D.

    1987-01-01

    The ETA10, from ETA Systems, is a Class VII supercomputer featuring multiprocessing, a large hierarchical memory system, high performance input/output, and network support for both batch and interactive processing. Advanced technology used in the ETA 10 includes liquid nitrogen cooled CMOS logic with 20,000 gates per chip, a single printed circuit board for each CPU, and high density static and dynamic MOS memory chips. Software for the ETA 10 includes an underlying kernel that supports multiple user environments, a new ETA FORTRAN compiler with an advanced automatic vectorizer, a multitasking library, and debugging tools. Possible developments for future supercomputers from ETA Systems are discussed.

  10. Supercomputers Of The Future

    Science.gov (United States)

    Peterson, Victor L.; Kim, John; Holst, Terry L.; Deiwert, George S.; Cooper, David M.; Watson, Andrew B.; Bailey, F. Ron

    1992-01-01

    Report evaluates supercomputer needs of five key disciplines: turbulence physics, aerodynamics, aerothermodynamics, chemistry, and mathematical modeling of human vision. Predicts these fields will require computer speeds greater than 10^18 floating-point operations per second (FLOPS) and memory capacities greater than 10^15 words. Also, new parallel computer architectures and new structured numerical methods will make the necessary speed and capacity available.
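
    For scale, and assuming 64-bit (8-byte) words (the record above does not state a word size), those figures translate to:

    ```latex
    10^{18}~\tfrac{\text{flop}}{\text{s}} = 1~\text{exaflop/s}, \qquad 10^{15}~\text{words} \times 8~\tfrac{\text{bytes}}{\text{word}} = 8~\text{petabytes of memory}
    ```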

  11. Supercomputational science

    CERN Document Server

    Wilson, S

    1990-01-01

    In contemporary research, the supercomputer now ranks, along with radio telescopes, particle accelerators and the other apparatus of "big science", as an expensive resource which is nevertheless essential for state-of-the-art research. Supercomputers are usually provided as shared central facilities. However, unlike telescopes and accelerators, they find a wide range of applications extending across a broad spectrum of research activity. The difference in performance between a "good" and a "bad" computer program on a traditional serial computer may be a factor of two or three, but on a contemporary supercomputer it can easily be a factor of one hundred or even more! Furthermore, this factor is likely to increase with future generations of machines. In keeping with the large capital and recurrent costs of these machines, it is appropriate to devote effort to training and familiarization so that supercomputers are employed to best effect. This volume records the lectures delivered at a Summer School ...

  12. Enabling department-scale supercomputing

    Energy Technology Data Exchange (ETDEWEB)

    Greenberg, D.S.; Hart, W.E.; Phillips, C.A.

    1997-11-01

    The Department of Energy (DOE) national laboratories have one of the longest and most consistent histories of supercomputer use. The authors summarize the architecture of DOE's new supercomputers that are being built for the Accelerated Strategic Computing Initiative (ASCI). The authors then argue that in the near future scaled-down versions of these supercomputers with petaflop-per-weekend capabilities could become widely available to hundreds of research and engineering departments. The availability of such computational resources will allow simulation of physical phenomena to become a full-fledged third branch of scientific exploration, along with theory and experimentation. They describe the ASCI and other supercomputer applications at Sandia National Laboratories, and discuss which lessons learned from Sandia's long history of supercomputing can be applied in this new setting.

  13. Computational Dimensionalities of Global Supercomputing

    Directory of Open Access Journals (Sweden)

    Richard S. Segall

    2013-12-01

    This invited paper pertains to the subject of my plenary keynote speech at the 17th World Multi-Conference on Systemics, Cybernetics and Informatics (WMSCI 2013), held in Orlando, Florida on July 9-12, 2013. The title of the speech was "Dimensionalities of Computation: from Global Supercomputing to Data, Text and Web Mining", but this invited paper focuses only on the "Computational Dimensionalities of Global Supercomputing" and is based on a summary of the contents of several articles previously written with myself as lead author and published in [75], [76], [77], [78], [79], [80] and [11]. The topics of the plenary speech included an overview of current research in global supercomputing [75], open-source software tools for data mining analysis of genomic and spatial images using high performance computing [76], data mining supercomputing with SAS™ JMP® Genomics [77], [79], [80], and visualization by supercomputing data mining [81]. ______________________ [11] Committee on the Future of Supercomputing, National Research Council (2003), The Future of Supercomputing: An Interim Report, ISBN-13: 978-0-309-09016-2, http://www.nap.edu/catalog/10784.html [75] Segall, Richard S.; Zhang, Qingyu and Cook, Jeffrey S. (2013), "Overview of Current Research in Global Supercomputing", Proceedings of the Forty-Fourth Meeting of the Southwest Decision Sciences Institute (SWDSI), Albuquerque, NM, March 12-16, 2013. [76] Segall, Richard S. and Zhang, Qingyu (2010), "Open-Source Software Tools for Data Mining Analysis of Genomic and Spatial Images using High Performance Computing", Proceedings of the 5th INFORMS Workshop on Data Mining and Health Informatics, Austin, TX, November 6, 2010. [77] Segall, Richard S.; Zhang, Qingyu and Pierce, Ryan M. (2010), "Data Mining Supercomputing with SAS™ JMP® Genomics: Research-in-Progress", Proceedings of the 2010 Conference on Applied Research in Information Technology, sponsored by

  14. OpenMP Performance on the Columbia Supercomputer

    Science.gov (United States)

    Haoqiang, Jin; Hood, Robert

    2005-01-01

    This presentation discusses the Columbia supercomputer, one of the world's fastest supercomputers, providing 61 TFLOPs (as of 10/20/04). It was conceived, designed, built, and deployed in just 120 days: a 20-node supercomputer built on proven 512-processor nodes. It is the largest SGI system in the world, with over 10,000 Intel Itanium 2 processors; it provides the largest node size incorporating commodity parts (512) and the largest shared-memory environment (2048), and with 88% efficiency it tops the scalar systems on the Top500 list.
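
    The record above does not include code, but the shared-memory programming style it benchmarks is the familiar OpenMP loop-parallel pattern. A minimal, self-contained C sketch of that pattern follows; it is illustrative only and is not drawn from the NAS benchmarks or the Columbia runs.

    ```c
    /* Minimal OpenMP example: a parallel dot product on a shared-memory node.
     * Compile with an OpenMP-enabled compiler, e.g. "cc -fopenmp dot.c -o dot"
     * (the exact flag varies by compiler). */
    #include <stdio.h>
    #include <stdlib.h>
    #include <omp.h>

    int main(void)
    {
        const int n = 1 << 24;
        double *a = malloc(n * sizeof *a);
        double *b = malloc(n * sizeof *b);
        if (!a || !b) return 1;

        for (int i = 0; i < n; i++) { a[i] = 1.0; b[i] = 2.0; }

        double sum = 0.0;
        /* All threads share a[] and b[]; the reduction clause gives each
         * thread a private partial sum and combines them after the loop. */
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < n; i++)
            sum += a[i] * b[i];

        printf("max threads = %d, dot = %g\n", omp_get_max_threads(), sum);
        free(a);
        free(b);
        return 0;
    }
    ```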

  15. What is supercomputing ?

    International Nuclear Information System (INIS)

    Asai, Kiyoshi

    1992-01-01

    Supercomputing means high-speed computation using a supercomputer. Supercomputers and the technical term "supercomputing" have come into wide use over the past ten years. The performance of the main computers installed so far at the Japan Atomic Energy Research Institute is compared. There are two ways to increase computing speed using existing circuit elements: parallel processor systems and vector processor systems. The CRAY-1 was the first successful vector computer. Supercomputing technology was first applied to meteorological organizations in foreign countries, and to aviation and atomic energy research institutes in Japan. Supercomputing for atomic energy depends on the trend of technical development in atomic energy, and its content divides into speeding up existing simulation calculations and accelerating the development of new atomic energy technology. Examples of supercomputing at the Japan Atomic Energy Research Institute are reported. (K.I.)

  16. Desktop supercomputer: what can it do?

    Science.gov (United States)

    Bogdanov, A.; Degtyarev, A.; Korkhov, V.

    2017-12-01

    The paper addresses the issues of solving complex problems that require supercomputers or multiprocessor clusters, which are available to most researchers nowadays. Efficient distribution of high performance computing resources according to actual application needs has been a major research topic since high-performance computing (HPC) technologies became widely introduced. At the same time, comfortable and transparent access to these resources has been a key user requirement. In this paper we discuss approaches to building a virtual private supercomputer available at the user's desktop: a virtual computing environment tailored specifically for a target user with a particular target application. We describe and evaluate possibilities to create the virtual supercomputer based on light-weight virtualization technologies, and analyze the efficiency of our approach compared to traditional methods of HPC resource management.

  17. Visualization environment of the large-scale data of JAEA's supercomputer system

    Energy Technology Data Exchange (ETDEWEB)

    Sakamoto, Kensaku [Japan Atomic Energy Agency, Center for Computational Science and e-Systems, Tokai, Ibaraki (Japan)]; Hoshi, Yoshiyuki [Research Organization for Information Science and Technology (RIST), Tokai, Ibaraki (Japan)]

    2013-11-15

    In research and development across various fields of nuclear energy, visualization of calculated data is especially useful for understanding simulation results in an intuitive way. Many researchers who run simulations on the supercomputer at the Japan Atomic Energy Agency (JAEA) are used to transferring calculated data files from the supercomputer to their local PCs for visualization. In recent years, as the size of calculated data has grown with improvements in supercomputer performance, reduction of visualization processing time as well as efficient use of the JAEA network have become required. As a solution, we introduced a remote visualization system which is able to utilize parallel processors on the supercomputer and to reduce the usage of network resources by transferring only intermediate visualization data. This paper reports a study on the performance of image processing with the remote visualization system. The visualization processing time is measured and the influence of network speed is evaluated by varying the drawing mode, the size of the visualization data and the number of processors. Based on this study, a guideline for using the remote visualization system is provided to show how the system can be used effectively. An upgrade policy for the next system is also shown. (author)

  18. Desktop supercomputer: what can it do?

    International Nuclear Information System (INIS)

    Bogdanov, A.; Degtyarev, A.; Korkhov, V.

    2017-01-01

    The paper addresses the issues of solving complex problems that require supercomputers or multiprocessor clusters, which are available to most researchers nowadays. Efficient distribution of high performance computing resources according to actual application needs has been a major research topic since high-performance computing (HPC) technologies became widely introduced. At the same time, comfortable and transparent access to these resources has been a key user requirement. In this paper we discuss approaches to building a virtual private supercomputer available at the user's desktop: a virtual computing environment tailored specifically for a target user with a particular target application. We describe and evaluate possibilities to create the virtual supercomputer based on light-weight virtualization technologies, and analyze the efficiency of our approach compared to traditional methods of HPC resource management.

  19. Adaptability of supercomputers to nuclear computations

    International Nuclear Information System (INIS)

    Asai, Kiyoshi; Ishiguro, Misako; Matsuura, Toshihiko.

    1983-01-01

    Recently, in the field of scientific and technical calculation, the usefulness of supercomputers represented by the CRAY-1 has been recognized, and they are utilized in various countries. The high speed of supercomputers is based on vector computation. Over the past six years the authors investigated the adaptability of about 40 typical atomic energy codes to vector computation. Based on the results of this investigation, the adaptability of atomic energy codes to the vector computation capability of supercomputers, the problems regarding its utilization, and the future prospects are explained. The adaptability of individual calculation codes to vector computation depends largely on the algorithm and program structure used in the codes. The speedup achieved with pipelined vector systems, the investigation at the Japan Atomic Energy Research Institute and its results, and examples of vectorizing codes for atomic energy, environmental safety and nuclear fusion are reported. The speedup factor for the 40 examples ranged from 1.5 to 9.0. It can be said that the adaptability of supercomputers to atomic energy codes is fairly good. (Kako, I.)
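
    As the record above notes, whether a code adapts well to vector computation depends on its algorithm and program structure. The hedged C fragment below (not drawn from the JAERI codes) contrasts a loop that a vectorizing compiler can pipeline with one whose loop-carried dependence blocks vectorization.

    ```c
    /* Illustration of loop structures relevant to vectorization.  The first
     * loop has fully independent iterations and maps directly onto a
     * pipelined vector unit; the second carries a dependence between
     * iterations and must run (largely) sequentially. */
    #include <stdio.h>

    #define N 1000

    int main(void)
    {
        double a[N], b[N], c[N];
        for (int i = 0; i < N; i++) { b[i] = i; c[i] = 2.0 * i; }

        /* Vectorizable: a[i] depends only on b[i] and c[i]. */
        for (int i = 0; i < N; i++)
            a[i] = b[i] + 3.0 * c[i];

        /* Not vectorizable as written: each a[i] depends on a[i-1],
         * a loop-carried dependence (a recurrence). */
        a[0] = b[0];
        for (int i = 1; i < N; i++)
            a[i] = a[i - 1] + b[i];

        printf("a[N-1] = %g\n", a[N - 1]);
        return 0;
    }
    ```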

  20. ATLAS Software Installation on Supercomputers

    CERN Document Server

    Undrus, Alexander; The ATLAS collaboration

    2018-01-01

    PowerPC and high performance computers (HPC) are important resources for computing in the ATLAS experiment. The future LHC data processing will require more resources than Grid computing, currently using approximately 100,000 cores at well over 100 sites, can provide. Supercomputers are extremely powerful as they use the resources of hundreds of thousands of CPUs joined together. However, their architectures have different instruction sets. ATLAS binary software distributions for x86 chipsets do not fit these architectures, as emulation of these chipsets results in a huge performance loss. This presentation describes the methodology of ATLAS software installation from source code on supercomputers. The installation procedure includes downloading the ATLAS code base as well as the source of about 50 external packages, such as ROOT and Geant4, followed by compilation, and rigorous unit and integration testing. The presentation reports the application of this procedure at Titan HPC and Summit PowerPC at Oak Ridge Computin...

  1. Introduction to Reconfigurable Supercomputing

    CERN Document Server

    Lanzagorta, Marco; Rosenberg, Robert

    2010-01-01

    This book covers technologies, applications, tools, languages, procedures, advantages, and disadvantages of reconfigurable supercomputing using Field Programmable Gate Arrays (FPGAs). The target audience is the community of users of High Performance Computers (HPC) who may benefit from porting their applications into a reconfigurable environment. As such, this book is intended to guide the HPC user through the many algorithmic considerations, hardware alternatives, usability issues, programming languages, and design tools that need to be understood before embarking on the creation of reconfigur

  2. NASA Advanced Supercomputing Facility Expansion

    Science.gov (United States)

    Thigpen, William W.

    2017-01-01

    The NASA Advanced Supercomputing (NAS) Division enables advances in high-end computing technologies and in modeling and simulation methods to tackle some of the toughest science and engineering challenges facing NASA today. The name "NAS" has long been associated with leadership and innovation throughout the high-end computing (HEC) community. We play a significant role in shaping HEC standards and paradigms, and provide leadership in the areas of large-scale InfiniBand fabrics, Lustre open-source filesystems, and hyperwall technologies. We provide an integrated high-end computing environment to accelerate NASA missions and make revolutionary advances in science. Pleiades, a petaflop-scale supercomputer, is used by scientists throughout the U.S. to support NASA missions, and is ranked among the most powerful systems in the world. One of our key focus areas is in modeling and simulation to support NASA's real-world engineering applications and make fundamental advances in modeling and simulation methods.

  3. Japanese supercomputer technology

    International Nuclear Information System (INIS)

    Buzbee, B.L.; Ewald, R.H.; Worlton, W.J.

    1982-01-01

    In February 1982, computer scientists from the Los Alamos National Laboratory and Lawrence Livermore National Laboratory visited several Japanese computer manufacturers. The purpose of these visits was to assess the state of the art of Japanese supercomputer technology and to advise Japanese computer vendors of the needs of the US Department of Energy (DOE) for more powerful supercomputers. The Japanese foresee a domestic need for large-scale computing capabilities for nuclear fusion, image analysis for the Earth Resources Satellite, meteorological forecasting, electrical power system analysis (power flow, stability, optimization), structural and thermal analysis of satellites, and very large scale integrated circuit design and simulation. To meet this need, Japan has launched an ambitious program to advance supercomputer technology. This program is described.

  4. An assessment of worldwide supercomputer usage

    Energy Technology Data Exchange (ETDEWEB)

    Wasserman, H.J.; Simmons, M.L.; Hayes, A.H.

    1995-01-01

    This report provides a comparative study of advanced supercomputing usage in Japan and the United States as of Spring 1994. It is based on the findings of a group of US scientists whose careers have centered on programming, evaluating, and designing high-performance supercomputers for over ten years. The report is a follow-on to an assessment of supercomputing technology in Europe and Japan that was published in 1993. Whereas the previous study focused on supercomputer manufacturing capabilities, the primary focus of the current work was to compare where and how supercomputers are used. Research for this report was conducted through both literature studies and field research in Japan.

  5. Toward a Proof of Concept Cloud Framework for Physics Applications on Blue Gene Supercomputers

    International Nuclear Information System (INIS)

    Dreher, Patrick; Scullin, William; Vouk, Mladen

    2015-01-01

    Traditional high performance supercomputers are capable of delivering large sustained state-of-the-art computational resources to physics applications over extended periods of time using batch processing mode operating environments. However, today there is an increasing demand for more complex workflows that involve large fluctuations in the levels of HPC physics computational requirements during the simulations. Some of the workflow components may also require a richer set of operating system features and schedulers than normally found in a batch oriented HPC environment. This paper reports on progress toward a proof of concept design that implements a cloud framework onto BG/P and BG/Q platforms at the Argonne Leadership Computing Facility. The BG/P implementation utilizes the Kittyhawk utility and the BG/Q platform uses an experimental heterogeneous FusedOS operating system environment. Both platforms use the Virtual Computing Laboratory as the cloud computing system embedded within the supercomputer. This proof of concept design allows a cloud to be configured so that it can capitalize on the specialized infrastructure capabilities of a supercomputer and the flexible cloud configurations without resorting to virtualization. Initial testing of the proof of concept system is done using the lattice QCD MILC code. These types of user reconfigurable environments have the potential to deliver experimental schedulers and operating systems within a working HPC environment for physics computations that may be different from the native OS and schedulers on production HPC supercomputers. (paper)

  6. Status of supercomputers in the US

    International Nuclear Information System (INIS)

    Fernbach, S.

    1985-01-01

    Current supercomputers, that is, the Class VI machines which first became available in 1976, are being delivered in greater quantity than ever before. In addition, manufacturers are busily working on Class VII machines to be ready for delivery in CY 1987. Mainframes are being modified or designed to take on some features of the supercomputers, and new companies are springing up everywhere with the intent of either competing directly in the supercomputer arena or providing entry-level systems from which to graduate to supercomputers. Even well-established organizations like IBM and CDC are adding machines with vector instructions to their repertoires. Japanese-manufactured supercomputers are also being introduced into the U.S. Will these begin to compete with those of U.S. manufacture? Are they truly competitive? It turns out that, from both the hardware and software points of view, they may be superior. We may be facing the same problems in supercomputers that we faced in video systems.

  7. Supercomputing and related national projects in Japan

    International Nuclear Information System (INIS)

    Miura, Kenichi

    1985-01-01

    Japanese supercomputer development activities in the industry and research projects are outlined. Architecture, technology, software, and applications of Fujitsu's Vector Processor Systems are described as an example of Japanese supercomputers. Applications of supercomputers to high energy physics are also discussed. (orig.)

  8. TOP500 Supercomputers for June 2004

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2004-06-23

    23rd Edition of TOP500 List of World's Fastest Supercomputers Released: Japan's Earth Simulator Enters Third Year in Top Position. MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a closely watched event in the world of high-performance computing, the 23rd edition of the TOP500 list of the world's fastest supercomputers was released today (June 23, 2004) at the International Supercomputer Conference in Heidelberg, Germany.

  9. Federal Market Information Technology in the Post Flash Crash Era: Roles for Supercomputing

    Energy Technology Data Exchange (ETDEWEB)

    Bethel, E. Wes; Leinweber, David; Ruebel, Oliver; Wu, Kesheng

    2011-09-16

    This paper describes collaborative work between active traders, regulators, economists, and supercomputing researchers to replicate and extend investigations of the Flash Crash and other market anomalies in a National Laboratory HPC environment. Our work suggests that supercomputing tools and methods will be valuable to market regulators in achieving the goal of market safety, stability, and security. Research results using high frequency data and analytics are described, and directions for future development are discussed. Currently the key mechanism for preventing catastrophic market action is the “circuit breaker.” We believe a more graduated approach, similar to the “yellow light” used in motorsports to slow down traffic, might be a better way to achieve the same goal. To enable this objective, we study a number of indicators that could foresee hazards in market conditions and explore options to confirm such predictions. Our tests confirm that the Volume Synchronized Probability of Informed Trading (VPIN) and a version of the volume Herfindahl-Hirschman Index (HHI) for measuring market fragmentation can indeed give strong signals ahead of the Flash Crash event on May 6, 2010. This is a preliminary step toward a full-fledged early-warning system for unusual market conditions.
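
    VPIN requires volume-bucketed trade classification and is beyond a short sketch, but the volume Herfindahl-Hirschman Index mentioned in the record above is simply the sum of squared per-venue volume shares. The hedged C sketch below illustrates that calculation; the venue volumes are invented for the example and are not taken from the paper.

    ```c
    /* Volume Herfindahl-Hirschman Index (HHI) across trading venues:
     * HHI = sum over venues of (venue volume / total volume)^2.
     * Values near 1 indicate volume concentrated on one venue; values
     * near 1/n indicate volume fragmented evenly across n venues. */
    #include <stdio.h>

    static double volume_hhi(const double *volume, int n)
    {
        double total = 0.0, hhi = 0.0;
        for (int i = 0; i < n; i++)
            total += volume[i];
        if (total <= 0.0)
            return 0.0;
        for (int i = 0; i < n; i++) {
            double share = volume[i] / total;
            hhi += share * share;
        }
        return hhi;
    }

    int main(void)
    {
        /* Hypothetical per-venue traded volumes for one time window. */
        double volume[] = { 1.2e6, 0.9e6, 0.4e6, 0.1e6 };
        int n = sizeof volume / sizeof volume[0];
        printf("volume HHI = %.3f\n", volume_hhi(volume, n));
        return 0;
    }
    ```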

  10. TOP500 Supercomputers for June 2005

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2005-06-22

    25th Edition of TOP500 List of World's Fastest Supercomputers Released: DOE/LLNL BlueGene/L and IBM gain Top Positions. MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a closely watched event in the world of high-performance computing, the 25th edition of the TOP500 list of the world's fastest supercomputers was released today (June 22, 2005) at the 20th International Supercomputing Conference (ISC2005) in Heidelberg, Germany.

  11. Graphics supercomputer for computational fluid dynamics research

    Science.gov (United States)

    Liaw, Goang S.

    1994-11-01

    The objective of this project is to purchase a state-of-the-art graphics supercomputer to improve the Computational Fluid Dynamics (CFD) research capability at Alabama A & M University (AAMU) and to support Air Force research projects. A cutting-edge graphics supercomputer system, Onyx VTX, from Silicon Graphics Computer Systems (SGI), was purchased and installed. Other equipment, including a desktop personal computer, a PC-486 DX2 with a built-in 10-BaseT Ethernet card, a 10-BaseT hub, an Apple Laser Printer Select 360, and a notebook computer from Zenith, was also purchased. A reading room has been converted into a research computer lab by adding furniture and an air conditioning unit in order to provide an appropriate working environment for researchers and the purchased equipment. All the purchased equipment was successfully installed and is fully functional. Several research projects, including two existing Air Force projects, are being performed using these facilities.

  12. TOP500 Supercomputers for November 2003

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2003-11-16

    22nd Edition of TOP500 List of World's Fastest Supercomputers Released. MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a much-anticipated event in the world of high-performance computing, the 22nd edition of the TOP500 list of the world's fastest supercomputers was released today (November 16, 2003). The Earth Simulator supercomputer retains the number one position with its Linpack benchmark performance of 35.86 Tflop/s ("teraflops" or trillions of calculations per second). It was built by NEC and installed last year at the Earth Simulator Center in Yokohama, Japan.

  13. A training program for scientific supercomputing users

    Energy Technology Data Exchange (ETDEWEB)

    Hanson, F.; Moher, T.; Sabelli, N.; Solem, A.

    1988-01-01

    There is a need for a mechanism to transfer supercomputing technology into the hands of scientists and engineers in such a way that they will acquire a foundation of knowledge that will permit integration of supercomputing as a tool in their research. Most computing center training emphasizes computer-specific information about how to use a particular computer system; most academic programs teach concepts to computer scientists. Only a few brief courses and new programs are designed for computational scientists. This paper describes an eleven-week training program aimed principally at graduate and postdoctoral students in computationally-intensive fields. The program is designed to balance the specificity of computing center courses, the abstractness of computer science courses, and the personal contact of traditional apprentice approaches. It is based on the experience of computer scientists and computational scientists, and consists of seminars and clinics given by many visiting and local faculty. It covers a variety of supercomputing concepts, issues, and practices related to architecture, operating systems, software design, numerical considerations, code optimization, graphics, communications, and networks. Its research component encourages understanding of scientific computing and supercomputer hardware issues. Flexibility in thinking about computing needs is emphasized by the use of several different supercomputer architectures, such as the Cray X/MP48 at the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign, the IBM 3090 600E/VF at the Cornell National Supercomputer Facility, and the Alliant FX/8 at the Advanced Computing Research Facility at Argonne National Laboratory. 11 refs., 6 tabs.

  14. Extending ATLAS Computing to Commercial Clouds and Supercomputers

    CERN Document Server

    Nilsson, P; The ATLAS collaboration; Filipcic, A; Klimentov, A; Maeno, T; Oleynik, D; Panitkin, S; Wenaus, T; Wu, W

    2014-01-01

    The Large Hadron Collider will resume data collection in 2015 with substantially increased computing requirements relative to its first 2009-2013 run. A near doubling of the energy and the data rate, a high level of event pile-up, and detector upgrades will mean the number and complexity of events to be analyzed will increase dramatically. A naive extrapolation of the Run 1 experience would suggest that a 5-6 fold increase in computing resources is needed - impossible within the anticipated flat computing budgets in the near future. Consequently ATLAS is engaged in an ambitious program to expand its computing to all available resources, notably including opportunistic use of commercial clouds and supercomputers. Such resources present new challenges in managing heterogeneity, supporting data flows, parallelizing workflows, provisioning software, and other aspects of distributed computing, all while minimizing operational load. We will present the ATLAS experience to date with clouds and supercomputers, and des...

  15. INTEL: Intel based systems move up in supercomputing ranks

    CERN Multimedia

    2002-01-01

    "The TOP500 supercomputer rankings released today at the Supercomputing 2002 conference show a dramatic increase in the number of Intel-based systems being deployed in high-performance computing (HPC) or supercomputing areas" (1/2 page).

  16. World's fastest supercomputer opens up to users

    Science.gov (United States)

    Xin, Ling

    2016-08-01

    China's latest supercomputer - Sunway TaihuLight - has claimed the crown as the world's fastest computer according to the latest TOP500 list, released at the International Supercomputer Conference in Frankfurt in late June.

  17. Centralized supercomputer support for magnetic fusion energy research

    International Nuclear Information System (INIS)

    Fuss, D.; Tull, G.G.

    1984-01-01

    High-speed computers with large memories are vital to magnetic fusion energy research. Magnetohydrodynamic (MHD), transport, equilibrium, Vlasov, particle, and Fokker-Planck codes that model plasma behavior play an important role in designing experimental hardware and interpreting the resulting data, as well as in advancing plasma theory itself. The size, architecture, and software of supercomputers to run these codes are often the crucial constraints on the benefits such computational modeling can provide. Hence, vector computers such as the CRAY-1 offer a valuable research resource. To meet the computational needs of the fusion program, the National Magnetic Fusion Energy Computer Center (NMFECC) was established in 1974 at the Lawrence Livermore National Laboratory. Supercomputers at the central computing facility are linked to smaller computer centers at each of the major fusion laboratories by a satellite communication network. In addition to providing large-scale computing, the NMFECC environment stimulates collaboration and the sharing of computer codes and data among the many fusion researchers in a cost-effective manner

  18. Supercomputer and cluster performance modeling and analysis efforts: 2004-2006.

    Energy Technology Data Exchange (ETDEWEB)

    Sturtevant, Judith E.; Ganti, Anand; Meyer, Harold (Hal) Edward; Stevenson, Joel O.; Benner, Robert E., Jr.; Goudy, Susan Phelps; Doerfler, Douglas W.; Domino, Stefan Paul; Taylor, Mark A.; Malins, Robert Joseph; Scott, Ryan T.; Barnette, Daniel Wayne; Rajan, Mahesh; Ang, James Alfred; Black, Amalia Rebecca; Laub, Thomas William; Vaughan, Courtenay Thomas; Franke, Brian Claude

    2007-02-01

    This report describes efforts by the Performance Modeling and Analysis Team to investigate performance characteristics of Sandia's engineering and scientific applications on the ASC capability and advanced architecture supercomputers, and Sandia's capacity Linux clusters. Efforts to model various aspects of these computers are also discussed. The goals of these efforts are to quantify and compare Sandia's supercomputer and cluster performance characteristics; to reveal strengths and weaknesses in such systems; and to predict performance characteristics of, and provide guidelines for, future acquisitions and follow-on systems. Described herein are the results obtained from running benchmarks and applications to extract performance characteristics and comparisons, as well as modeling efforts, obtained during the time period 2004-2006. The format of the report, with hypertext links to numerous additional documents, purposefully minimizes the document size needed to disseminate the extensive results from our research.

  19. Personal Supercomputing for Monte Carlo Simulation Using a GPU

    Energy Technology Data Exchange (ETDEWEB)

    Oh, Jae-Yong; Koo, Yang-Hyun; Lee, Byung-Ho [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2008-05-15

    Since the usability, accessibility, and maintenance of a personal computer (PC) are very good, a PC is a useful computer simulation tool for researchers. With the improved performance of a PC's CPU, it has enough calculation power to simulate a small-scale system. However, if a system is large or involves long time scales, we need a cluster computer or a supercomputer. Recently, great changes have occurred in the PC calculation environment. A graphics processing unit (GPU) on a graphics card, formerly used only to calculate display data, now has calculation capability superior to a PC's CPU. This GPU calculation performance is a match for a supercomputer of 2000. Although it has such great calculation potential, it is not easy to program a simulation code for a GPU, because of the difficult programming techniques needed to convert a calculation matrix into a 3D rendering image using graphics APIs. In 2006, NVIDIA provided a Software Development Kit (SDK) and programming environment for NVIDIA graphics cards, called the Compute Unified Device Architecture (CUDA). It makes programming on the GPU easy without knowledge of the graphics APIs. This paper describes the basic architecture of NVIDIA's GPU and CUDA, and carries out a performance benchmark for a Monte Carlo simulation.
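
    To give a concrete sense of the workload class being benchmarked, the plain C program below estimates pi by Monte Carlo sampling. It is only a CPU-side sketch of the embarrassingly parallel sampling pattern that CUDA moves onto the GPU, not the authors' benchmark code.

    ```c
    /* Monte Carlo estimate of pi: sample points in the unit square and count
     * how many fall inside the quarter circle.  Every sample is independent,
     * which is exactly the property that lets this class of simulation map
     * well onto the thousands of threads of a GPU. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        const long n = 10 * 1000 * 1000;  /* number of samples */
        long inside = 0;

        srand(12345u);  /* fixed seed for a reproducible run */
        for (long i = 0; i < n; i++) {
            double x = (double)rand() / RAND_MAX;
            double y = (double)rand() / RAND_MAX;
            if (x * x + y * y <= 1.0)
                inside++;
        }
        printf("pi ~= %f  (%ld samples)\n", 4.0 * inside / n, n);
        return 0;
    }
    ```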

  20. Personal Supercomputing for Monte Carlo Simulation Using a GPU

    International Nuclear Information System (INIS)

    Oh, Jae-Yong; Koo, Yang-Hyun; Lee, Byung-Ho

    2008-01-01

    Since the usability, accessibility, and maintenance of a personal computer (PC) are very good, a PC is a useful computer simulation tool for researchers. With the improved performance of a PC's CPU, it has enough calculation power to simulate a small-scale system. However, if a system is large or involves long time scales, we need a cluster computer or a supercomputer. Recently, great changes have occurred in the PC calculation environment. A graphics processing unit (GPU) on a graphics card, formerly used only to calculate display data, now has calculation capability superior to a PC's CPU. This GPU calculation performance is a match for a supercomputer of 2000. Although it has such great calculation potential, it is not easy to program a simulation code for a GPU, because of the difficult programming techniques needed to convert a calculation matrix into a 3D rendering image using graphics APIs. In 2006, NVIDIA provided a Software Development Kit (SDK) and programming environment for NVIDIA graphics cards, called the Compute Unified Device Architecture (CUDA). It makes programming on the GPU easy without knowledge of the graphics APIs. This paper describes the basic architecture of NVIDIA's GPU and CUDA, and carries out a performance benchmark for a Monte Carlo simulation.

  1. The Pawsey Supercomputer geothermal cooling project

    Science.gov (United States)

    Regenauer-Lieb, K.; Horowitz, F.; Western Australian Geothermal Centre Of Excellence, T.

    2010-12-01

    The Australian Government has funded the Pawsey supercomputer in Perth, Western Australia, providing computational infrastructure intended to support the future operations of the Australian Square Kilometre Array radiotelescope and to boost next-generation computational geosciences in Australia. Supplementary funds have been directed to the development of a geothermal exploration well to research the potential for direct heat use applications at the Pawsey Centre site. Cooling the Pawsey supercomputer may be achieved by geothermal heat exchange rather than by conventional electrical power cooling, thus reducing the carbon footprint of the Pawsey Centre and demonstrating an innovative green technology that is widely applicable in industry and urban centres across the world. The exploration well is scheduled to be completed in 2013, with drilling due to commence in the third quarter of 2011. One year is allocated to finalizing the design of the exploration, monitoring and research well. Success in the geothermal exploration and research program will result in an industrial-scale geothermal cooling facility at the Pawsey Centre, and will provide a world-class student training environment in geothermal energy systems. A similar system is partially funded and in advanced planning to provide base-load air-conditioning for the main campus of the University of Western Australia. Both systems are expected to draw ~80-95 degrees C water from aquifers lying 2000 to 3000 meters deep in naturally permeable rocks of the Perth sedimentary basin. The geothermal water will be run through absorption chilling devices, which require only heat (as opposed to mechanical work) to produce a chilled water stream adequate to meet the cooling requirements. Once the heat has been removed from the geothermal water, licensing issues require the water to be re-injected back into the aquifer system. These systems are intended to demonstrate the feasibility of powering large-scale air

  2. Integration of Titan supercomputer at OLCF with ATLAS Production System

    CERN Document Server

    AUTHOR|(SzGeCERN)643806; The ATLAS collaboration; De, Kaushik; Klimentov, Alexei; Nilsson, Paul; Oleynik, Danila; Padolski, Siarhei; Panitkin, Sergey; Wenaus, Torre

    2017-01-01

    The PanDA (Production and Distributed Analysis) workload management system was developed to meet the scale and complexity of distributed computing for the ATLAS experiment. PanDA-managed resources are distributed worldwide, on hundreds of computing sites, with thousands of physicists accessing hundreds of petabytes of data, and the rate of data processing already exceeds an exabyte per year. While PanDA currently uses more than 200,000 cores at well over 100 Grid sites, future LHC data-taking runs will require more resources than Grid computing can possibly provide. Additional computing and storage resources are required. Therefore ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. In this paper we will describe a project aimed at integration of the ATLAS Production System with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA Pilot framework for jo...

  3. Integration of Titan supercomputer at OLCF with ATLAS production system

    CERN Document Server

    Panitkin, Sergey; The ATLAS collaboration

    2016-01-01

    The PanDA (Production and Distributed Analysis) workload management system was developed to meet the scale and complexity of distributed computing for the ATLAS experiment. PanDA-managed resources are distributed worldwide, on hundreds of computing sites, with thousands of physicists accessing hundreds of petabytes of data, and the rate of data processing already exceeds an exabyte per year. While PanDA currently uses more than 200,000 cores at well over 100 Grid sites, future LHC data-taking runs will require more resources than Grid computing can possibly provide. Additional computing and storage resources are required. Therefore ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. In this talk we will describe a project aimed at integration of the ATLAS Production System with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA Pilot framework for job...

  4. TOP500 Supercomputers for November 2004

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2004-11-08

    24th Edition of TOP500 List of World's Fastest Supercomputers Released: DOE/IBM BlueGene/L and NASA/SGI's Columbia gain Top Positions. MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a closely watched event in the world of high-performance computing, the 24th edition of the TOP500 list of the world's fastest supercomputers was released today (November 8, 2004) at the SC2004 Conference in Pittsburgh, Pa.

  5. TOP500 Supercomputers for June 2003

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2003-06-23

    21st Edition of TOP500 List of World's Fastest Supercomputers Released. MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a much-anticipated event in the world of high-performance computing, the 21st edition of the TOP500 list of the world's fastest supercomputers was released today (June 23, 2003). The Earth Simulator supercomputer built by NEC and installed last year at the Earth Simulator Center in Yokohama, Japan, with its Linpack benchmark performance of 35.86 Tflop/s (teraflops, or trillions of calculations per second), retains the number one position. The number 2 position is held by the re-measured ASCI Q system at Los Alamos National Laboratory. With 13.88 Tflop/s, it is the second system ever to exceed the 10 Tflop/s mark. ASCI Q was built by Hewlett-Packard and is based on the AlphaServer SC computer system.

  6. TOP500 Supercomputers for June 2002

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2002-06-20

    19th Edition of TOP500 List of World's Fastest Supercomputers Released. MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a much-anticipated event in the world of high-performance computing, the 19th edition of the TOP500 list of the world's fastest supercomputers was released today (June 20, 2002). The recently installed Earth Simulator supercomputer at the Earth Simulator Center in Yokohama, Japan, is, as expected, the clear new number 1. Its performance of 35.86 Tflop/s (trillions of calculations per second) running the Linpack benchmark is almost five times higher than the performance of the now No. 2 IBM ASCI White system at Lawrence Livermore National Laboratory (7.2 Tflop/s). Such a powerful leapfrogging to the top by a system so much faster than the previous top system is unparalleled in the history of the TOP500.

  7. Status reports of supercomputing astrophysics in Japan

    International Nuclear Information System (INIS)

    Nakamura, Takashi; Nagasawa, Mikio

    1990-01-01

    The Workshop on Supercomputing Astrophysics was held at the National Laboratory for High Energy Physics (KEK, Tsukuba) from August 31 to September 2, 1989. More than 40 participants, physicists and astronomers, attended and discussed many topics in an informal atmosphere. The main purpose of this workshop was to focus on theoretical activities in computational astrophysics in Japan. It also aimed to promote effective collaboration among the numerical experimentalists working on supercomputing techniques. The subjects of the presented papers, covering hydrodynamics, plasma physics, gravitating systems, radiative transfer and general relativity, are all stimulating. In fact, these numerical calculations have become possible in Japan owing to the power of Japanese supercomputers such as the HITAC S820, Fujitsu VP400E and NEC SX-2. (J.P.N.)

  8. Comprehensive efficiency analysis of supercomputer resource usage based on system monitoring data

    Science.gov (United States)

    Mamaeva, A. A.; Shaykhislamov, D. I.; Voevodin, Vad V.; Zhumatiy, S. A.

    2018-03-01

    One of the main problems of modern supercomputers is the low efficiency of their usage, which leads to significant idle time of computational resources and, in turn, slows down scientific research. This paper presents three approaches to studying the efficiency of supercomputer resource usage based on monitoring data analysis. The first approach performs an analysis of computing resource utilization statistics, which makes it possible to identify typical classes of programs, to explore the structure of the supercomputer job flow and to track overall trends in the supercomputer's behavior. The second approach is aimed specifically at analyzing off-the-shelf software packages and libraries installed on the supercomputer, since the efficiency of their usage is becoming an increasingly important factor for the efficient functioning of the entire supercomputer. Within the third approach, abnormal jobs – jobs with abnormally inefficient behavior that differs significantly from the standard behavior of the overall supercomputer job flow – are detected. For each approach, the results obtained in practice at the Supercomputer Center of Moscow State University are demonstrated.
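
    As an illustration of the third approach only, the hedged C sketch below flags jobs whose average CPU utilization falls far below the mean of the job flow. The threshold and the monitoring data are invented for the example; the paper's actual detection method is not reproduced here.

    ```c
    /* Toy anomaly screen over per-job average CPU utilization (fraction 0..1):
     * compute the mean and standard deviation over all jobs, then flag jobs
     * more than K standard deviations below the mean for closer inspection. */
    #include <stdio.h>
    #include <math.h>

    #define K 1.5  /* how many standard deviations count as "abnormal" */

    int main(void)
    {
        /* Hypothetical monitoring data: one utilization value per job. */
        double util[] = { 0.82, 0.91, 0.88, 0.07, 0.79, 0.85, 0.93, 0.12, 0.80 };
        int n = sizeof util / sizeof util[0];

        double mean = 0.0, var = 0.0;
        for (int i = 0; i < n; i++) mean += util[i];
        mean /= n;
        for (int i = 0; i < n; i++) var += (util[i] - mean) * (util[i] - mean);
        double sd = sqrt(var / n);

        for (int i = 0; i < n; i++)
            if (util[i] < mean - K * sd)
                printf("job %d looks abnormal (utilization %.2f, mean %.2f)\n",
                       i, util[i], mean);
        return 0;
    }
    ```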

  9. BSMBench: a flexible and scalable supercomputer benchmark from computational particle physics

    CERN Document Server

    Bennett, Ed; Del Debbio, Luigi; Jordan, Kirk; Patella, Agostino; Pica, Claudio; Rago, Antonio

    2016-01-01

    Benchmarking plays a central role in the evaluation of High Performance Computing architectures. Several benchmarks have been designed that allow users to stress various components of supercomputers. In order for the figures they provide to be useful, benchmarks need to be representative of the most common real-world scenarios. In this work, we introduce BSMBench, a benchmarking suite derived from Monte Carlo code used in computational particle physics. The advantage of this suite (which can be freely downloaded from http://www.bsmbench.org/) over others is the capacity to vary the relative importance of computation and communication. This enables the tests to simulate various practical situations. To showcase BSMBench, we perform a wide range of tests on various architectures, from desktop computers to state-of-the-art supercomputers, and discuss the corresponding results. Possible future directions of development of the benchmark are also outlined.
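
    The suite's defining feature, varying the relative weight of computation and communication, can be illustrated with a minimal MPI microbenchmark in C. This is a generic sketch of the idea, not BSMBench itself, and the amount of local work per communication step is an arbitrary parameter.

    ```c
    /* Minimal compute-vs-communication microbenchmark (generic sketch).
     * Each iteration performs `work` local floating-point updates, then one
     * global MPI_Allreduce.  Raising `work` shifts the balance toward
     * computation; lowering it stresses the interconnect instead.
     * Build with an MPI compiler wrapper, e.g. "mpicc ratio.c -o ratio". */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        const int iters = 100;
        const long work = 1000000;  /* local flops per communication step */
        double x = 1.0, global = 0.0;

        double t0 = MPI_Wtime();
        for (int it = 0; it < iters; it++) {
            for (long i = 0; i < work; i++)            /* compute phase */
                x = x * 1.0000001 + 1e-9;
            MPI_Allreduce(&x, &global, 1, MPI_DOUBLE,  /* communication phase */
                          MPI_SUM, MPI_COMM_WORLD);
        }
        double t1 = MPI_Wtime();

        if (rank == 0)
            printf("work/step=%ld  time=%.3f s  result=%g\n",
                   work, t1 - t0, global);

        MPI_Finalize();
        return 0;
    }
    ```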

  10. Comments on the parallelization efficiency of the Sunway TaihuLight supercomputer

    OpenAIRE

    Végh, János

    2016-01-01

    In the world of supercomputers, the large number of processors requires minimizing the inefficiencies of parallelization, which appear as a sequential part of the program from the point of view of Amdahl's law. The recently suggested new figure of merit is applied to the recently presented supercomputer, and the timeline of "Top 500" supercomputers is scrutinized using the metric. It is demonstrated that, in addition to the computing performance and power consumption, the new supercomputer i...
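
    For reference, Amdahl's law mentioned above bounds the speedup S of a program of which a fraction p can be parallelized when it runs on N processors:

    ```latex
    S(N) = \frac{1}{(1-p) + \frac{p}{N}}, \qquad \lim_{N \to \infty} S(N) = \frac{1}{1-p}
    ```

    Even with p = 0.99 the speedup can never exceed 100, which is why the sequential fraction dominates at the processor counts of machines such as the Sunway TaihuLight.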

  11. Argonne Leadership Computing Facility 2011 annual report : Shaping future supercomputing.

    Energy Technology Data Exchange (ETDEWEB)

    Papka, M.; Messina, P.; Coffey, R.; Drugan, C. (LCF)

    2012-08-16

    The ALCF's Early Science Program aims to prepare key applications for the architecture and scale of Mira and to solidify libraries and infrastructure that will pave the way for other future production applications. Two billion core-hours have been allocated to 16 Early Science projects on Mira. The projects, in addition to promising delivery of exciting new science, are all based on state-of-the-art, petascale, parallel applications. The project teams, in collaboration with ALCF staff and IBM, have undertaken intensive efforts to adapt their software to take advantage of Mira's Blue Gene/Q architecture, which, in a number of ways, is a precursor to future high-performance-computing architectures. The Argonne Leadership Computing Facility (ALCF) enables transformative science that solves some of the most difficult challenges in biology, chemistry, energy, climate, materials, physics, and other scientific realms. Users partnering with ALCF staff have reached research milestones previously unattainable, due to the ALCF's world-class supercomputing resources and expertise in computational science. In 2011, the ALCF's commitment to providing outstanding science and leadership-class resources was honored with several prestigious awards. Research on multiscale brain blood flow simulations was named a Gordon Bell Prize finalist. Intrepid, the ALCF's BG/P system, ranked No. 1 on the Graph 500 list for the second consecutive year. The next-generation BG/Q prototype again topped the Green500 list. Skilled experts at the ALCF enable researchers to conduct breakthrough science on the Blue Gene system in key ways. The Catalyst Team matches project PIs with experienced computational scientists to maximize and accelerate research in their specific scientific domains. The Performance Engineering Team facilitates the effective use of applications on the Blue Gene system by assessing and improving the algorithms used by applications and the techniques used to

  12. Building more powerful less expensive supercomputers using Processing-In-Memory (PIM) LDRD final report.

    Energy Technology Data Exchange (ETDEWEB)

    Murphy, Richard C.

    2009-09-01

    This report details the accomplishments of the 'Building More Powerful Less Expensive Supercomputers Using Processing-In-Memory (PIM)' LDRD ('PIM LDRD', number 105809) for FY07-FY09. Latency dominates all levels of supercomputer design. Within a node, increasing memory latency, relative to processor cycle time, limits CPU performance. Between nodes, the same increase in relative latency impacts scalability. Processing-In-Memory (PIM) is an architecture that directly addresses this problem using enhanced chip fabrication technology and machine organization. PIMs combine high-speed logic and dense, low-latency, high-bandwidth DRAM, and lightweight threads that tolerate latency by performing useful work during memory transactions. This work examines the potential of PIM-based architectures to support mission critical Sandia applications and an emerging class of more data intensive informatics applications. This work has resulted in a stronger architecture/implementation collaboration between 1400 and 1700. Additionally, key technology components have impacted vendor roadmaps, and we are in the process of pursuing these new collaborations. This work has the potential to impact future supercomputer design and construction, reducing power and increasing performance. This final report is organized as follow: this summary chapter discusses the impact of the project (Section 1), provides an enumeration of publications and other public discussion of the work (Section 1), and concludes with a discussion of future work and impact from the project (Section 1). The appendix contains reprints of the refereed publications resulting from this work.

  13. Analyzing the Interplay of Failures and Workload on a Leadership-Class Supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Meneses, Esteban [University of Pittsburgh; Ni, Xiang [University of Illinois at Urbana-Champaign; Jones, Terry R [ORNL; Maxwell, Don E [ORNL

    2015-01-01

    The unprecedented computational power of current supercomputers now makes possible the exploration of complex problems in many scientific fields, from genomic analysis to computational fluid dynamics. Modern machines are powerful because they are massive: they assemble millions of cores and a huge quantity of disks, cards, routers, and other components. But it is precisely the size of these machines that clouds the future of supercomputing. A system that comprises many components has a high chance to fail, and fail often. In order to make the next generation of supercomputers usable, it is imperative to use some type of fault tolerance platform to run applications on large machines. Most fault tolerance strategies can be optimized for the peculiarities of each system and boost efficacy by keeping the system productive. In this paper, we aim to understand how failure characterization can improve resilience in several layers of the software stack: applications, runtime systems, and job schedulers. We examine the Titan supercomputer, one of the fastest systems in the world. We analyze a full year of Titan in production and distill the failure patterns of the machine. By looking into Titan's log files and using the criteria of experts, we provide a detailed description of the types of failures. In addition, we inspect the job submission files and describe how the system is used. Using those two sources, we cross-correlate failures in the machine to executing jobs and provide a picture of how failures affect the user experience. We believe such characterization is fundamental in developing appropriate fault tolerance solutions for Cray systems similar to Titan.
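
    The cross-correlation of failures with executing jobs described above can be sketched in a few lines. The records, node names and failure types below are hypothetical stand-ins, not Titan's actual log or scheduler formats.

        from datetime import datetime

        # Hypothetical, simplified records; the actual study parses Titan's system
        # logs and job-scheduler history, whose formats are not reproduced here.
        failures = [
            {"time": datetime(2014, 3, 1, 14, 5), "node": "c12-3n2", "type": "gpu_fault"},
            {"time": datetime(2014, 3, 2, 9, 40), "node": "c07-0n1", "type": "machine_check"},
        ]
        jobs = [
            {"id": 101, "nodes": {"c12-3n2", "c12-3n3"},
             "start": datetime(2014, 3, 1, 13, 0), "end": datetime(2014, 3, 1, 16, 0)},
            {"id": 102, "nodes": {"c07-0n1"},
             "start": datetime(2014, 3, 2, 11, 0), "end": datetime(2014, 3, 2, 12, 0)},
        ]

        def jobs_hit_by(failure, jobs):
            """Jobs that were running on the failed node at the time of the failure."""
            return [j for j in jobs
                    if failure["node"] in j["nodes"]
                    and j["start"] <= failure["time"] <= j["end"]]

        for f in failures:
            hit = [j["id"] for j in jobs_hit_by(f, jobs)]
            print(f["type"], "on", f["node"], "->", hit if hit else "no job affected")

    With the toy data above, the first failure maps to a running job while the second hits an idle node, which is exactly the distinction such a study needs in order to relate failures to user experience.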

  14. Integration of Panda Workload Management System with supercomputers

    Science.gov (United States)

    De, K.; Jha, S.; Klimentov, A.; Maeno, T.; Mashinistov, R.; Nilsson, P.; Novikov, A.; Oleynik, D.; Panitkin, S.; Poyda, A.; Read, K. F.; Ryabinkin, E.; Teslyuk, A.; Velikhov, V.; Wells, J. C.; Wenaus, T.

    2016-09-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 140 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3+ petaFLOPS, the next LHC data-taking runs will require more resources than Grid computing can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with supercomputers in the United States, Europe and Russia (in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF), the supercomputer at the National Research Center "Kurchatov Institute", IT4 in Ostrava, and others). The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on Titan's multi-core worker nodes. This implementation was tested with a variety of Monte-Carlo workloads
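
    The light-weight MPI wrapper idea mentioned above can be illustrated schematically: a single batch job is submitted, and each MPI rank simply launches its own single-threaded payload. This is a sketch of the general technique only, not PanDA pilot code; the payload script name and its options are hypothetical.

        # Minimal MPI "fan-out" wrapper: one batch job, many ranks, each rank
        # running one single-threaded payload (e.g. a Monte Carlo sub-sample).
        from mpi4py import MPI
        import subprocess

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()

        # Hypothetical payload invocation; each rank gets its own seed and output.
        result = subprocess.run(
            ["./payload.sh", f"--seed={rank}", f"--output=out_{rank:05d}.dat"],
            capture_output=True, text=True)

        # Collect return codes on rank 0 so the wrapper can report overall success.
        codes = comm.gather(result.returncode, root=0)
        if rank == 0:
            failed = [i for i, c in enumerate(codes) if c != 0]
            print(f"{len(codes) - len(failed)}/{len(codes)} payloads succeeded, "
                  f"failed ranks: {failed}")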

  15. Applications of supercomputing and the utility industry: Calculation of power transfer capabilities

    International Nuclear Information System (INIS)

    Jensen, D.D.; Behling, S.R.; Betancourt, R.

    1990-01-01

    Numerical models and iterative simulation using supercomputers can furnish cost-effective answers to utility industry problems that are all but intractable using conventional computing equipment. An example of the use of supercomputers by the utility industry is the determination of power transfer capability limits for power transmission systems. This work has the goal of markedly reducing the run time of transient stability codes used to determine power distributions following major system disturbances. To date, run times of several hours on a conventional computer have been reduced to several minutes on state-of-the-art supercomputers, with further improvements anticipated to reduce run times to less than a minute. In spite of the potential advantages of supercomputers, few utilities have sufficient need for a dedicated in-house supercomputing capability. This problem is resolved using a supercomputer center serving a geographically distributed user base coupled via high speed communication networks

  16. Supercomputers to transform Science

    CERN Multimedia

    2006-01-01

    "New insights into the structure of space and time, climate modeling, and the design of novel drugs, are but a few of the many research areas that will be transforned by the installation of three supercomputers at the Unversity of Bristol." (1/2 page)

  17. Convex unwraps its first grown-up supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Manuel, T.

    1988-03-03

    Convex Computer Corp.'s new supercomputer family is even more of an industry blockbuster than its first system. At a tenfold jump in performance, it's far from just an incremental upgrade over its first minisupercomputer, the C-1. The heart of the new family, the new C-2 processor, churning at 50 million floating-point operations/s, spawns a group of systems whose performance could pass for some fancy supercomputers-namely those of the Cray Research Inc. family. When added to the C-1, Convex's five new supercomputers create the C series, a six-member product group offering a performance range from 20 to 200 Mflops. They mark an important transition for Convex from a one-product high-tech startup to a multinational company with a wide-ranging product line. It's a tough transition but the Richardson, Texas, company seems to be doing it. The extended product line propels Convex into the upper end of the minisupercomputer class and nudges it into the low end of the big supercomputers. It positions Convex in an uncrowded segment of the market in the $500,000 to $1 million range offering 50 to 200 Mflops of performance. The company is making this move because the minisuper area, which it pioneered, quickly became crowded with new vendors, causing prices and gross margins to drop drastically.

  18. Design of multiple sequence alignment algorithms on parallel, distributed memory supercomputers.

    Science.gov (United States)

    Church, Philip C; Goscinski, Andrzej; Holt, Kathryn; Inouye, Michael; Ghoting, Amol; Makarychev, Konstantin; Reumann, Matthias

    2011-01-01

    The challenge of comparing two or more genomes that have undergone recombination and substantial amounts of segmental loss and gain has recently been addressed for small numbers of genomes. However, datasets of hundreds of genomes are now common and their sizes will only increase in the future. Multiple sequence alignment of hundreds of genomes remains an intractable problem due to quadratic increases in compute time and memory footprint. To date, most alignment algorithms are designed for commodity clusters without parallelism. Hence, we propose the design of a multiple sequence alignment algorithm on massively parallel, distributed memory supercomputers to enable research into comparative genomics on large data sets. Following the methodology of the sequential progressiveMauve algorithm, we design data structures including sequences and sorted k-mer lists on the IBM Blue Gene/P supercomputer (BG/P). Preliminary results show that we can reduce the memory footprint so that we can potentially align over 250 bacterial genomes on a single BG/P compute node. We verify our results on a dataset of E.coli, Shigella and S.pneumoniae genomes. Our implementation returns results matching those of the original algorithm but in 1/2 the time and with 1/4 the memory footprint for scaffold building. In this study, we have laid the basis for multiple sequence alignment of large-scale datasets on a massively parallel, distributed memory supercomputer, thus enabling comparison of hundreds instead of a few genome sequences within reasonable time.
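
    The sorted k-mer lists referred to above are a simple but central data structure; a toy, single-node version is sketched below (the distributed BG/P layout of these lists is not shown).

        def sorted_kmer_list(seq, k):
            """Return (k-mer, position) pairs sorted lexicographically by k-mer.

            Sorted lists let matching k-mers between two genomes be found by
            merging two sorted lists instead of by all-vs-all comparison.
            """
            kmers = [(seq[i:i + k], i) for i in range(len(seq) - k + 1)]
            kmers.sort()
            return kmers

        # Toy sequence, not real genome data.
        for kmer, pos in sorted_kmer_list("GATTACAGATTA", 4):
            print(pos, kmer)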

  19. Supercomputer debugging workshop 1991 proceedings

    Energy Technology Data Exchange (ETDEWEB)

    Brown, J.

    1991-01-01

    This report discusses the following topics on supercomputer debugging: Distributed debugging; user interface to debugging tools and standards; debugging optimized codes; debugging parallel codes; and debugger performance and interface as analysis tools. (LSP)

  20. Supercomputer debugging workshop 1991 proceedings

    Energy Technology Data Exchange (ETDEWEB)

    Brown, J.

    1991-12-31

    This report discusses the following topics on supercomputer debugging: Distributed debugging; user interface to debugging tools and standards; debugging optimized codes; debugging parallel codes; and debugger performance and interface as analysis tools. (LSP)

  1. Performance Analysis and Scaling Behavior of the Terrestrial Systems Modeling Platform TerrSysMP in Large-Scale Supercomputing Environments

    Science.gov (United States)

    Kollet, S. J.; Goergen, K.; Gasper, F.; Shresta, P.; Sulis, M.; Rihani, J.; Simmer, C.; Vereecken, H.

    2013-12-01

    In studies of the terrestrial hydrologic, energy and biogeochemical cycles, integrated multi-physics simulation platforms take a central role in characterizing non-linear interactions, variances and uncertainties of system states and fluxes in reciprocity with observations. Recently developed integrated simulation platforms attempt to honor the complexity of the terrestrial system across multiple time and space scales from the deeper subsurface including groundwater dynamics into the atmosphere. Technically, this requires the coupling of atmospheric, land surface, and subsurface-surface flow models in supercomputing environments, while ensuring a high-degree of efficiency in the utilization of e.g., standard Linux clusters and massively parallel resources. A systematic performance analysis including profiling and tracing in such an application is crucial in the understanding of the runtime behavior, to identify optimum model settings, and is an efficient way to distinguish potential parallel deficiencies. On sophisticated leadership-class supercomputers, such as the 28-rack 5.9 petaFLOP IBM Blue Gene/Q 'JUQUEEN' of the Jülich Supercomputing Centre (JSC), this is a challenging task, but even more so important, when complex coupled component models are to be analysed. Here we want to present our experience from coupling, application tuning (e.g. 5-times speedup through compiler optimizations), parallel scaling and performance monitoring of the parallel Terrestrial Systems Modeling Platform TerrSysMP. The modeling platform consists of the weather prediction system COSMO of the German Weather Service; the Community Land Model, CLM of NCAR; and the variably saturated surface-subsurface flow code ParFlow. The model system relies on the Multiple Program Multiple Data (MPMD) execution model where the external Ocean-Atmosphere-Sea-Ice-Soil coupler (OASIS3) links the component models. TerrSysMP has been instrumented with the performance analysis tool Scalasca and analyzed

  2. Automatic discovery of the communication network topology for building a supercomputer model

    Science.gov (United States)

    Sobolev, Sergey; Stefanov, Konstantin; Voevodin, Vadim

    2016-10-01

    The Research Computing Center of Lomonosov Moscow State University is developing the Octotron software suite for automatic monitoring and mitigation of emergency situations in supercomputers so as to maximize hardware reliability. The suite is based on a software model of the supercomputer. The model uses a graph to describe the computing system components and their interconnections. One of the most complex components of a supercomputer that needs to be included in the model is its communication network. This work describes the proposed approach for automatically discovering the Ethernet communication network topology in a supercomputer and its description in terms of the Octotron model. This suite automatically detects computing nodes and switches, collects information about them and identifies their interconnections. The application of this approach is demonstrated on the "Lomonosov" and "Lomonosov-2" supercomputers.
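
    The graph description of nodes, switches and links can be sketched as below. The inventory is hypothetical and hard-coded here, and the use of networkx is this sketch's choice; the suite itself collects this information automatically from the running system.

        import networkx as nx

        # Hypothetical discovered inventory (names are made up).
        switches = ["eth-sw-01", "eth-sw-02"]
        node_to_switch = {"n0101": "eth-sw-01", "n0102": "eth-sw-01", "n0201": "eth-sw-02"}
        uplinks = [("eth-sw-01", "eth-sw-02")]

        g = nx.Graph()
        for sw in switches:
            g.add_node(sw, kind="switch")
        for node, sw in node_to_switch.items():
            g.add_node(node, kind="compute")
            g.add_edge(node, sw, kind="node-switch")
        for a, b in uplinks:
            g.add_edge(a, b, kind="switch-switch")

        # The model can then be queried, e.g. which compute nodes sit behind a switch.
        for sw in switches:
            attached = sorted(n for n in g.neighbors(sw) if g.nodes[n]["kind"] == "compute")
            print(sw, "->", attached)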

  3. PNNL supercomputer to become largest computing resource on the Grid

    CERN Multimedia

    2002-01-01

    Hewlett Packard announced that the US DOE Pacific Northwest National Laboratory will connect a 9.3-teraflop HP supercomputer to the DOE Science Grid. This will be the largest supercomputer attached to a computer grid anywhere in the world (1 page).

  4. Computational fluid dynamics research at the United Technologies Research Center requiring supercomputers

    Science.gov (United States)

    Landgrebe, Anton J.

    1987-01-01

    An overview of research activities at the United Technologies Research Center (UTRC) in the area of Computational Fluid Dynamics (CFD) is presented. The requirement and use of various levels of computers, including supercomputers, for the CFD activities is described. Examples of CFD directed toward applications to helicopters, turbomachinery, heat exchangers, and the National Aerospace Plane are included. Helicopter rotor codes for the prediction of rotor and fuselage flow fields and airloads were developed with emphasis on rotor wake modeling. Airflow and airload predictions and comparisons with experimental data are presented. Examples are presented of recent parabolized Navier-Stokes and full Navier-Stokes solutions for hypersonic shock-wave/boundary layer interaction, and hydrogen/air supersonic combustion. In addition, other examples of CFD efforts in turbomachinery Navier-Stokes methodology and separated flow modeling are presented. A brief discussion of the 3-tier scientific computing environment is also presented, in which the researcher has access to workstations, mid-size computers, and supercomputers.

  5. Dust modelling and forecasting in the Barcelona Supercomputing Center: Activities and developments

    Energy Technology Data Exchange (ETDEWEB)

    Perez, C; Baldasano, J M; Jimenez-Guerrero, P; Jorba, O; Haustein, K; Basart, S [Earth Sciences Department. Barcelona Supercomputing Center. Barcelona (Spain); Cuevas, E [Izaña Atmospheric Research Center. Agencia Estatal de Meteorologia, Tenerife (Spain); Nickovic, S [Atmospheric Research and Environment Branch, World Meteorological Organization, Geneva (Switzerland)], E-mail: carlos.perez@bsc.es

    2009-03-01

    The Barcelona Supercomputing Center (BSC) is the National Supercomputer Facility in Spain, hosting MareNostrum, one of the most powerful Supercomputers in Europe. The Earth Sciences Department of BSC operates daily regional dust and air quality forecasts and conducts intensive modelling research for short-term operational prediction. This contribution summarizes the latest developments and current activities in the field of sand and dust storm modelling and forecasting.

  6. Dust modelling and forecasting in the Barcelona Supercomputing Center: Activities and developments

    International Nuclear Information System (INIS)

    Perez, C; Baldasano, J M; Jimenez-Guerrero, P; Jorba, O; Haustein, K; Basart, S; Cuevas, E; Nickovic, S

    2009-01-01

    The Barcelona Supercomputing Center (BSC) is the National Supercomputer Facility in Spain, hosting MareNostrum, one of the most powerful Supercomputers in Europe. The Earth Sciences Department of BSC operates daily regional dust and air quality forecasts and conducts intensive modelling research for short-term operational prediction. This contribution summarizes the latest developments and current activities in the field of sand and dust storm modelling and forecasting.

  7. UbiWorld: An environment integrating virtual reality, supercomputing, and design

    Energy Technology Data Exchange (ETDEWEB)

    Disz, T.; Papka, M.E.; Stevens, R. [Argonne National Lab., IL (United States). Mathematics and Computer Science Div.

    1997-07-01

    UbiWorld is a concept being developed by the Futures Laboratory group at Argonne National Laboratory that ties together the notion of ubiquitous computing (Ubicomp) with that of using virtual reality for rapid prototyping. The goal is to develop an environment where one can explore Ubicomp-type concepts without having to build real Ubicomp hardware. The basic notion is to extend object models in a virtual world by using distributed wide area heterogeneous computing technology to provide complex networking and processing capabilities to virtual reality objects.

  8. JINR supercomputer of the module type for event parallel analysis

    International Nuclear Information System (INIS)

    Kolpakov, I.F.; Senner, A.E.; Smirnov, V.A.

    1987-01-01

    A model of a supercomputer performing 50 million operations per second is suggested. Its realization allows one to solve JINR data analysis problems for large spectrometers (in particular for the DELPHI collaboration). The suggested modular supercomputer is based on commercially available 32-bit microprocessors with a processing rate of about 1 MFLOPS each. The processors are combined by means of standard VME buses. A MicroVAX II host computer organizes the operation of the system. Data input and output are realized via the MicroVAX II peripherals. Users' software is based on FORTRAN-77. The supercomputer is connected to a JINR network port, and all JINR users get access to the suggested system

  9. Supercomputers and quantum field theory

    International Nuclear Information System (INIS)

    Creutz, M.

    1985-01-01

    A review is given of why recent simulations of lattice gauge theories have resulted in substantial demands from particle theorists for supercomputer time. These calculations have yielded first principle results on non-perturbative aspects of the strong interactions. An algorithm for simulating dynamical quark fields is discussed. 14 refs

  10. Evaluation of existing and proposed computer architectures for future ground-based systems

    Science.gov (United States)

    Schulbach, C.

    1985-01-01

    Parallel processing architectures and techniques used in current supercomputers are described and projections are made of future advances. Presently, the von Neumann sequential processing pattern has been accelerated by having separate I/O processors, interleaved memories, wide memories, independent functional units and pipelining. Recent supercomputers have featured single-instruction, multiple-data-stream architectures, which have different processors for performing various operations (vector or pipeline processors). Multiple-instruction, multiple-data-stream machines have also been developed. Data flow techniques, wherein program instructions are activated only when data are available, are expected to play a large role in future supercomputers, along with increased parallel processor arrays. The enhanced operational speeds are essential for adequately treating data from future spacecraft remote sensing instruments such as the Thematic Mapper.

  11. Supercomputer applications in nuclear research

    International Nuclear Information System (INIS)

    Ishiguro, Misako

    1992-01-01

    The utilization of supercomputers at the Japan Atomic Energy Research Institute is reported. The fields of atomic energy research that make heavy use of supercomputers and the nature of their computations are outlined. Vectorization is briefly explained, and nuclear fusion, nuclear reactor physics, the thermal-hydraulic safety of nuclear reactors, the inherent parallelism of atomic energy computations such as fluid dynamics, algorithms suited to vector processing, and the speedups obtained by vectorization are discussed. At present the Japan Atomic Energy Research Institute operates two FACOM VP 2600/10 systems and three M-780 systems. The subjects of computation have shifted from criticality calculations around 1970, through LOCA analysis after the TMI accident, to the present emphasis on nuclear fusion research, the design of new reactor types, and reactor safety assessment. The way computers are used has likewise advanced: from batch processing to time-sharing, from one-dimensional to three-dimensional computation, from steady linear to unsteady nonlinear computation, and from experimental analysis to numerical simulation. (K.I.)

  12. Computational plasma physics and supercomputers

    International Nuclear Information System (INIS)

    Killeen, J.; McNamara, B.

    1984-09-01

    The Supercomputers of the 80's are introduced. They are 10 to 100 times more powerful than today's machines. The range of physics modeling in the fusion program is outlined. New machine architecture will influence particular codes, but parallel processing poses new coding difficulties. Increasing realism in simulations will require better numerics and more elaborate mathematics

  13. Mistral Supercomputer Job History Analysis

    OpenAIRE

    Zasadziński, Michał; Muntés-Mulero, Victor; Solé, Marc; Ludwig, Thomas

    2018-01-01

    In this technical report, we show insights and results of operational data analysis from petascale supercomputer Mistral, which is ranked as 42nd most powerful in the world as of January 2018. Data sources include hardware monitoring data, job scheduler history, topology, and hardware information. We explore job state sequences, spatial distribution, and electric power patterns.

  14. Interactive real-time nuclear plant simulations on a UNIX based supercomputer

    International Nuclear Information System (INIS)

    Behling, S.R.

    1990-01-01

    Interactive real-time nuclear plant simulations are critically important to train nuclear power plant engineers and operators. In addition, real-time simulations can be used to test the validity and timing of plant technical specifications and operational procedures. To accurately and confidently simulate a nuclear power plant transient in real-time, sufficient computer resources must be available. Since some important transients cannot be simulated using preprogrammed responses or non-physical models, commonly used simulation techniques may not be adequate. However, the power of a supercomputer allows one to accurately calculate the behavior of nuclear power plants even during very complex transients. Many of these transients can be calculated in real-time or quicker on the fastest supercomputers. The concept of running interactive real-time nuclear power plant transients on a supercomputer has been tested. This paper describes the architecture of the simulation program, the techniques used to establish real-time synchronization, and other issues related to the use of supercomputers in a new and potentially very important area. (author)
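
    One common way to establish real-time synchronization is to pace every simulation step against a wall-clock deadline; the schematic loop below illustrates that idea only and is not taken from the system described above.

        import time

        # Each simulated interval of dt seconds must be delivered in dt seconds of
        # wall-clock time: finish early and wait, or finish late and fall behind.
        dt = 0.1                              # simulated seconds per step (illustrative)
        deadline = time.monotonic()

        for step in range(50):
            # ... advance the plant model by dt here (placeholder) ...
            deadline += dt
            slack = deadline - time.monotonic()
            if slack > 0:
                time.sleep(slack)             # ahead of real time: wait it out
            else:
                print(f"step {step}: behind real time by {-slack:.3f} s")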

  15. Porting Ordinary Applications to Blue Gene/Q Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Maheshwari, Ketan C.; Wozniak, Justin M.; Armstrong, Timothy; Katz, Daniel S.; Binkowski, T. Andrew; Zhong, Xiaoliang; Heinonen, Olle; Karpeyev, Dmitry; Wilde, Michael

    2015-08-31

    Efficiently porting ordinary applications to Blue Gene/Q supercomputers is a significant challenge. Codes are often originally developed without considering advanced architectures and related tool chains. Science needs frequently lead users to want to run large numbers of relatively small jobs (often called many-task computing, an ensemble, or a workflow), which can conflict with supercomputer configurations. In this paper, we discuss techniques developed to execute ordinary applications over leadership-class supercomputers. We use the high-performance Swift parallel scripting framework and build two workflow execution techniques: sub-jobs and main-wrap. The sub-jobs technique, built on top of the IBM Blue Gene/Q resource manager Cobalt's sub-block jobs, lets users submit multiple, independent, repeated smaller jobs within a single larger resource block. The main-wrap technique is a scheme that enables C/C++ programs to be defined as functions that are wrapped by a high-performance Swift wrapper and that are invoked as a Swift script. We discuss the needs, benefits, technicalities, and current limitations of these techniques. We further discuss the real-world science enabled by these techniques and the results obtained.

  16. Flux-Level Transit Injection Experiments with NASA Pleiades Supercomputer

    Science.gov (United States)

    Li, Jie; Burke, Christopher J.; Catanzarite, Joseph; Seader, Shawn; Haas, Michael R.; Batalha, Natalie; Henze, Christopher; Christiansen, Jessie; Kepler Project, NASA Advanced Supercomputing Division

    2016-06-01

    Flux-Level Transit Injection (FLTI) experiments are executed with NASA's Pleiades supercomputer for the Kepler Mission. The latest release (9.3, January 2016) of the Kepler Science Operations Center Pipeline is used in the FLTI experiments. Their purpose is to validate the Analytic Completeness Model (ACM), which can be computed for all Kepler target stars, thereby enabling exoplanet occurrence rate studies. Pleiades, a facility of NASA's Advanced Supercomputing Division, is one of the world's most powerful supercomputers and represents NASA's state-of-the-art technology. We discuss the details of implementing the FLTI experiments on the Pleiades supercomputer. For example, taking into account that ~16 injections are generated by one core of the Pleiades processors in an hour, the “shallow” FLTI experiment, in which ~2000 injections are required per target star, can be done for 16% of all Kepler target stars in about 200 hours. Stripping down the transit search to bare bones, i.e. only searching adjacent high/low periods at high/low pulse durations, makes the computationally intensive FLTI experiments affordable. The design of the FLTI experiments and the analysis of the resulting data are presented in “Validating an Analytic Completeness Model for Kepler Target Stars Based on Flux-level Transit Injection Experiments” by Catanzarite et al. (#2494058). Kepler was selected as the 10th mission of the Discovery Program. Funding for the Kepler Mission has been provided by the NASA Science Mission Directorate.
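
    The quoted throughput figures can be checked with a short back-of-the-envelope calculation. The total number of Kepler target stars is not stated in the abstract, so a round figure of 200,000 is assumed here purely to make the arithmetic concrete.

        injections_per_core_hour = 16
        injections_per_star = 2000
        total_targets = 200_000          # assumption, not from the abstract
        fraction_covered = 0.16
        wall_hours = 200

        stars = total_targets * fraction_covered
        core_hours = stars * injections_per_star / injections_per_core_hour
        cores_needed = core_hours / wall_hours

        print(f"stars: {stars:,.0f}; core-hours: {core_hours:,.0f}; "
              f"cores kept busy for {wall_hours} h: {cores_needed:,.0f}")
        # -> stars: 32,000; core-hours: 4,000,000; cores kept busy for 200 h: 20,000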

  17. Extracting the Textual and Temporal Structure of Supercomputing Logs

    Energy Technology Data Exchange (ETDEWEB)

    Jain, S; Singh, I; Chandra, A; Zhang, Z; Bronevetsky, G

    2009-05-26

    Supercomputers are prone to frequent faults that adversely affect their performance, reliability and functionality. System logs collected on these systems are a valuable resource of information about their operational status and health. However, their massive size, complexity, and lack of standard format makes it difficult to automatically extract information that can be used to improve system management. In this work we propose a novel method to succinctly represent the contents of supercomputing logs, by using textual clustering to automatically find the syntactic structures of log messages. This information is used to automatically classify messages into semantic groups via an online clustering algorithm. Further, we describe a methodology for using the temporal proximity between groups of log messages to identify correlated events in the system. We apply our proposed methods to two large, publicly available supercomputing logs and show that our technique features nearly perfect accuracy for online log-classification and extracts meaningful structural and temporal message patterns that can be used to improve the accuracy of other log analysis techniques.
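
    The template-extraction step, reducing each message to its syntactic structure by masking variable fields, can be sketched as follows; the log lines and masking rules below are made up for illustration and are much simpler than the clustering method of the paper.

        import re
        from collections import defaultdict

        logs = [
            "node c1-0 link error on port 3",
            "node c2-7 link error on port 11",
            "machine check on cpu 4 of node c1-0",
            "machine check on cpu 12 of node c9-3",
        ]

        def template(msg):
            """Mask node ids and numbers so messages with the same syntax collapse."""
            msg = re.sub(r"\bc\d+-\d+\b", "<NODE>", msg)
            msg = re.sub(r"\b\d+\b", "<NUM>", msg)
            return msg

        groups = defaultdict(list)
        for line in logs:
            groups[template(line)].append(line)

        for tpl, members in groups.items():
            print(f"{len(members)}x  {tpl}")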

  18. Integration of PanDA workload management system with Titan supercomputer at OLCF

    Science.gov (United States)

    De, K.; Klimentov, A.; Oleynik, D.; Panitkin, S.; Petrosyan, A.; Schovancova, J.; Vaniachine, A.; Wenaus, T.

    2015-12-01

    The PanDA (Production and Distributed Analysis) workload management system (WMS) was developed to meet the scale and complexity of LHC distributed computing for the ATLAS experiment. While PanDA currently distributes jobs to more than 100,000 cores at well over 100 Grid sites, the future LHC data taking runs will require more resources than Grid computing can possibly provide. To alleviate these challenges, ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with Titan supercomputer at Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA pilot framework for job submission to Titan's batch queues and local data management, with light-weight MPI wrappers to run single threaded workloads in parallel on Titan's multicore worker nodes. It also gives PanDA new capability to collect, in real time, information about unused worker nodes on Titan, which allows precise definition of the size and duration of jobs submitted to Titan according to available free resources. This capability significantly reduces PanDA job wait time while improving Titan's utilization efficiency. This implementation was tested with a variety of Monte-Carlo workloads on Titan and is being tested on several other supercomputing platforms. Notice: This manuscript has been authored, by employees of Brookhaven Science Associates, LLC under Contract No. DE-AC02-98CH10886 with the U.S. Department of Energy. The publisher by accepting the manuscript for publication acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes.
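
    Shaping a submission to the currently unused resources can be sketched as below. The slot values and per-node core count are hypothetical stand-ins; the real system queries the resource manager for the free worker nodes and for how long they remain available.

        slot = {"free_nodes": 3200, "minutes_until_reserved": 95}

        cores_per_node = 16        # illustrative value
        safety_margin_min = 15     # finish well before the slot closes
        min_useful_walltime = 30   # skip gaps too short to be worth filling

        nodes = slot["free_nodes"]
        walltime = slot["minutes_until_reserved"] - safety_margin_min

        if walltime >= min_useful_walltime:
            print(f"submit: {nodes} nodes ({nodes * cores_per_node} cores) "
                  f"for {walltime} minutes")
        else:
            print("slot too short, wait for the next one")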

  19. SUPERCOMPUTERS FOR AIDING ECONOMIC PROCESSES WITH REFERENCE TO THE FINANCIAL SECTOR

    Directory of Open Access Journals (Sweden)

    Jerzy Balicki

    2014-12-01

    Full Text Available The article discusses the use of supercomputers to support business processes with particular emphasis on the financial sector. Reference is made to selected projects that support economic development. In particular, we propose the use of supercomputers to perform artificial intelligence methods in banking. The proposed methods, combined with modern technology, enable a significant increase in the competitiveness of enterprises and banks by adding new functionality.

  20. Exploiting Thread Parallelism for Ocean Modeling on Cray XC Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Sarje, Abhinav [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Jacobsen, Douglas W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Williams, Samuel W. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Ringler, Todd [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Oliker, Leonid [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2016-05-01

    The incorporation of increasing core counts in modern processors used to build state-of-the-art supercomputers is driving application development towards exploitation of thread parallelism, in addition to distributed memory parallelism, with the goal of delivering efficient high-performance codes. In this work we describe the exploitation of threading and our experiences with it with respect to a real-world ocean modeling application code, MPAS-Ocean. We present detailed performance analysis and comparisons of various approaches and configurations for threading on the Cray XC series supercomputers.

  1. Technologies for the people: a future in the making

    Energy Technology Data Exchange (ETDEWEB)

    Sharma, D.C.

    2004-09-01

    India's post-independence policy of using science and technology for national development, and investment in research and development infrastructure resulted in success in space, atomic energy, missile development and supercomputing. Use of space technology has impacted directly or indirectly the vast majority of India's billion plus population. Developments in a number of emerging technologies in recent years hold the promise of impacting the future of ordinary Indians in significant ways, if a proper policy and enabling environment are provided. New telecom technologies - a digital rural exchange and a wireless access system - are beginning to touch the lives of common people. Development of a low-cost handheld computing device, use of hybrid telemedicine systems to extend modern healthcare to the unreached, and other innovative uses of IT at the grassroots also hold promise for the future. Biotechnology too has the potential to deliver cost-effective vaccines and drugs, but the future of GM crops is uncertain due to growing opposition. Some of these emerging technologies hold promise for the future, provided a positive policy and enabling environment. (author)

  2. Multi-petascale highly efficient parallel supercomputer

    Science.gov (United States)

    Asaad, Sameh; Bellofatto, Ralph E.; Blocksome, Michael A.; Blumrich, Matthias A.; Boyle, Peter; Brunheroto, Jose R.; Chen, Dong; Cher, Chen -Yong; Chiu, George L.; Christ, Norman; Coteus, Paul W.; Davis, Kristan D.; Dozsa, Gabor J.; Eichenberger, Alexandre E.; Eisley, Noel A.; Ellavsky, Matthew R.; Evans, Kahn C.; Fleischer, Bruce M.; Fox, Thomas W.; Gara, Alan; Giampapa, Mark E.; Gooding, Thomas M.; Gschwind, Michael K.; Gunnels, John A.; Hall, Shawn A.; Haring, Rudolf A.; Heidelberger, Philip; Inglett, Todd A.; Knudson, Brant L.; Kopcsay, Gerard V.; Kumar, Sameer; Mamidala, Amith R.; Marcella, James A.; Megerian, Mark G.; Miller, Douglas R.; Miller, Samuel J.; Muff, Adam J.; Mundy, Michael B.; O'Brien, John K.; O'Brien, Kathryn M.; Ohmacht, Martin; Parker, Jeffrey J.; Poole, Ruth J.; Ratterman, Joseph D.; Salapura, Valentina; Satterfield, David L.; Senger, Robert M.; Smith, Brian; Steinmacher-Burow, Burkhard; Stockdell, William M.; Stunkel, Craig B.; Sugavanam, Krishnan; Sugawara, Yutaka; Takken, Todd E.; Trager, Barry M.; Van Oosten, James L.; Wait, Charles D.; Walkup, Robert E.; Watson, Alfred T.; Wisniewski, Robert W.; Wu, Peng

    2015-07-14

    A Multi-Petascale Highly Efficient Parallel Supercomputer of 100 petaOPS-scale computing, at decreased cost, power and footprint, that allows for a maximum packaging density of processing nodes from an interconnect point of view. The Supercomputer exploits technological advances in VLSI that enable a computing model where many processors can be integrated into a single Application Specific Integrated Circuit (ASIC). Each ASIC computing node comprises a system-on-chip ASIC utilizing four or more processors integrated into one die, with each having full access to all system resources, enabling adaptive partitioning of the processors to functions such as compute or messaging I/O on an application-by-application basis and, preferably, adaptive partitioning of functions in accordance with various algorithmic phases within an application; if I/O or other processors are underutilized, they can participate in computation or communication. Nodes are interconnected by a five-dimensional torus network with DMA that optimally maximizes the throughput of packet communications between nodes and minimizes latency.
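
    The five-dimensional torus mentioned above gives every node ten nearest-neighbour links, one in each direction of each dimension, with wrap-around at the edges. The sketch below enumerates those neighbours for an arbitrary example geometry (the extents are illustrative, not the actual machine shape).

        dims = (4, 4, 4, 4, 4)     # example torus extents

        def torus_neighbors(coord, dims):
            neighbors = []
            for axis in range(len(dims)):
                for step in (-1, +1):
                    n = list(coord)
                    n[axis] = (n[axis] + step) % dims[axis]
                    neighbors.append(tuple(n))
            return neighbors

        print(torus_neighbors((0, 0, 0, 0, 0), dims))   # ten links per node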

  3. QCD on the BlueGene/L Supercomputer

    International Nuclear Information System (INIS)

    Bhanot, G.; Chen, D.; Gara, A.; Sexton, J.; Vranas, P.

    2005-01-01

    In June 2004 QCD was simulated for the first time at sustained speed exceeding 1 TeraFlops in the BlueGene/L supercomputer at the IBM T.J. Watson Research Lab. The implementation and performance of QCD in the BlueGene/L is presented

  4. QCD on the BlueGene/L Supercomputer

    Science.gov (United States)

    Bhanot, G.; Chen, D.; Gara, A.; Sexton, J.; Vranas, P.

    2005-03-01

    In June 2004 QCD was simulated for the first time at sustained speed exceeding 1 TeraFlops in the BlueGene/L supercomputer at the IBM T.J. Watson Research Lab. The implementation and performance of QCD in the BlueGene/L is presented.

  5. Development of seismic tomography software for hybrid supercomputers

    Science.gov (United States)

    Nikitin, Alexandr; Serdyukov, Alexandr; Duchkov, Anton

    2015-04-01

    Seismic tomography is a technique used for computing velocity model of geologic structure from first arrival travel times of seismic waves. The technique is used in processing of regional and global seismic data, in seismic exploration for prospecting and exploration of mineral and hydrocarbon deposits, and in seismic engineering for monitoring the condition of engineering structures and the surrounding host medium. As a consequence of development of seismic monitoring systems and increasing volume of seismic data, there is a growing need for new, more effective computational algorithms for use in seismic tomography applications with improved performance, accuracy and resolution. To achieve this goal, it is necessary to use modern high performance computing systems, such as supercomputers with hybrid architecture that use not only CPUs, but also accelerators and co-processors for computation. The goal of this research is the development of parallel seismic tomography algorithms and software package for such systems, to be used in processing of large volumes of seismic data (hundreds of gigabytes and more). These algorithms and software package will be optimized for the most common computing devices used in modern hybrid supercomputers, such as Intel Xeon CPUs, NVIDIA Tesla accelerators and Intel Xeon Phi co-processors. In this work, the following general scheme of seismic tomography is utilized. Using the eikonal equation solver, arrival times of seismic waves are computed based on assumed velocity model of geologic structure being analyzed. In order to solve the linearized inverse problem, tomographic matrix is computed that connects model adjustments with travel time residuals, and the resulting system of linear equations is regularized and solved to adjust the model. The effectiveness of parallel implementations of existing algorithms on target architectures is considered. During the first stage of this work, algorithms were developed for execution on
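
    The linearized, regularized update described above can be illustrated with a toy dense example; real tomographic matrices are large and sparse and are handled with iterative solvers on the target accelerators and co-processors, so everything below is synthetic and schematic.

        import numpy as np

        rng = np.random.default_rng(0)
        n_rays, n_cells = 40, 25
        G = rng.random((n_rays, n_cells))           # sensitivity (tomographic) matrix
        true_dm = rng.normal(0.0, 0.01, n_cells)    # "true" model adjustment
        residuals = G @ true_dm + rng.normal(0.0, 1e-4, n_rays)

        lam = 0.1                                   # regularization weight
        A = np.vstack([G, lam * np.eye(n_cells)])
        b = np.concatenate([residuals, np.zeros(n_cells)])
        dm, *_ = np.linalg.lstsq(A, b, rcond=None)  # damped least-squares update

        print("relative error of the recovered update:",
              np.linalg.norm(dm - true_dm) / np.linalg.norm(true_dm))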

  6. Proceedings of the first energy research power supercomputer users symposium

    International Nuclear Information System (INIS)

    1991-01-01

    The Energy Research Power Supercomputer Users Symposium was arranged to showcase the richness of science that has been pursued and accomplished in this program through the use of supercomputers and now high performance parallel computers over the last year: this report is the collection of the presentations given at the Symposium. ''Power users'' were invited by the ER Supercomputer Access Committee to show that the use of these computational tools and the associated data communications network, ESNet, go beyond merely speeding up computations. Today the work often directly contributes to the advancement of the conceptual developments in their fields and the computational and network resources form the very infrastructure of today's science. The Symposium also provided an opportunity, which is rare in this day of network access to computing resources, for the invited users to compare and discuss their techniques and approaches with those used in other ER disciplines. The significance of new parallel architectures was highlighted by the interesting evening talk given by Dr. Stephen Orszag of Princeton University

  7. Integration of Titan supercomputer at OLCF with ATLAS Production System

    Science.gov (United States)

    Barreiro Megino, F.; De, K.; Jha, S.; Klimentov, A.; Maeno, T.; Nilsson, P.; Oleynik, D.; Padolski, S.; Panitkin, S.; Wells, J.; Wenaus, T.; ATLAS Collaboration

    2017-10-01

    The PanDA (Production and Distributed Analysis) workload management system was developed to meet the scale and complexity of distributed computing for the ATLAS experiment. PanDA-managed resources are distributed worldwide, on hundreds of computing sites, with thousands of physicists accessing hundreds of Petabytes of data, and the rate of data processing already exceeds an Exabyte per year. While PanDA currently uses more than 200,000 cores at well over 100 Grid sites, future LHC data-taking runs will require more resources than Grid computing can possibly provide. Additional computing and storage resources are required. Therefore ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. In this paper we will describe a project aimed at integration of the ATLAS Production System with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA Pilot framework for job submission to Titan’s batch queues and local data management, with lightweight MPI wrappers to run single-node workloads in parallel on Titan’s multi-core worker nodes. It provides for running standard ATLAS production jobs on unused resources (backfill) on Titan. The system has already allowed ATLAS to collect millions of core-hours per month on Titan and execute hundreds of thousands of jobs, while simultaneously improving Titan’s utilization efficiency. We will discuss the details of the implementation, current experience with running the system, as well as future plans aimed at improvements in scalability and efficiency. Notice: This manuscript has been authored by employees of Brookhaven Science Associates, LLC under Contract No. DE-AC02-98CH10886 with the U.S. Department of Energy. The publisher by accepting the manuscript for publication acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to

  8. A workbench for tera-flop supercomputing

    International Nuclear Information System (INIS)

    Resch, M.M.; Kuester, U.; Mueller, M.S.; Lang, U.

    2003-01-01

    Supercomputers currently reach a peak performance in the range of TFlop/s. With but one exception - the Japanese Earth Simulator - none of these systems has so far been able to also show a level of sustained performance for a variety of applications that comes close to the peak performance. Sustained TFlop/s are therefore rarely seen. The reasons are manifold and are well known: Bandwidth and latency both for main memory and for the internal network are the key internal technical problems. Cache hierarchies with large caches can bring relief but are no remedy to the problem. However, there are not only technical problems that inhibit the full exploitation by scientists of the potential of modern supercomputers. More and more organizational issues come to the forefront. This paper shows the approach of the High Performance Computing Center Stuttgart (HLRS) to deliver a sustained performance of TFlop/s for a wide range of applications from a large group of users spread over Germany. The core of the concept is the role of the data. Around this we design a simulation workbench that hides the complexity of interacting computers, networks and file systems from the user. (authors)

  9. A visual analytics system for optimizing the performance of large-scale networks in supercomputing systems

    Directory of Open Access Journals (Sweden)

    Takanori Fujiwara

    2018-03-01

    Full Text Available The overall efficiency of an extreme-scale supercomputer largely relies on the performance of its network interconnects. Several of the state of the art supercomputers use networks based on the increasingly popular Dragonfly topology. It is crucial to study the behavior and performance of different parallel applications running on Dragonfly networks in order to make optimal system configurations and design choices, such as job scheduling and routing strategies. However, in order to study these temporal network behavior, we would need a tool to analyze and correlate numerous sets of multivariate time-series data collected from the Dragonfly’s multi-level hierarchies. This paper presents such a tool–a visual analytics system–that uses the Dragonfly network to investigate the temporal behavior and optimize the communication performance of a supercomputer. We coupled interactive visualization with time-series analysis methods to help reveal hidden patterns in the network behavior with respect to different parallel applications and system configurations. Our system also provides multiple coordinated views for connecting behaviors observed at different levels of the network hierarchies, which effectively helps visual analysis tasks. We demonstrate the effectiveness of the system with a set of case studies. Our system and findings can not only help improve the communication performance of supercomputing applications, but also the network performance of next-generation supercomputers. Keywords: Supercomputing, Parallel communication network, Dragonfly networks, Time-series data, Performance analysis, Visual analytics

  10. KfK seminar series on supercomputing and visualization from May till September 1992

    International Nuclear Information System (INIS)

    Hohenhinnebusch, W.

    1993-05-01

    During the period of May 1992 to September 1992 a series of seminars was held at KfK on several topics of supercomputing in different fields of application. The aim was to demonstrate the importance of supercomputing and visualization in numerical simulations of complex physical and technical phenomena. This report contains the collection of all submitted seminar papers. (orig./HP) [de

  11. Re-inventing electromagnetics - Supercomputing solution of Maxwell's equations via direct time integration on space grids

    International Nuclear Information System (INIS)

    Taflove, A.

    1992-01-01

    This paper summarizes the present state and future directions of applying finite-difference and finite-volume time-domain techniques for Maxwell's equations on supercomputers to model complex electromagnetic wave interactions with structures. Applications so far have been dominated by radar cross section technology, but by no means are limited to this area. In fact, the gains we have made place us on the threshold of being able to make tremendous contributions to non-defense electronics and optical technology. Some of the most interesting research in these commercial areas is summarized. 47 refs

  12. Computational plasma physics and supercomputers. Revision 1

    International Nuclear Information System (INIS)

    Killeen, J.; McNamara, B.

    1985-01-01

    The Supercomputers of the 80's are introduced. They are 10 to 100 times more powerful than today's machines. The range of physics modeling in the fusion program is outlined. New machine architecture will influence particular models, but parallel processing poses new programming difficulties. Increasing realism in simulations will require better numerics and more elaborate mathematical models

  13. Application of Supercomputer Technologies for Simulation Of Socio-Economic Systems

    Directory of Open Access Journals (Sweden)

    Vladimir Valentinovich Okrepilov

    2015-06-01

    Full Text Available To date, extensive experience has been accumulated in the investigation of problems related to quality, the assessment of management systems, and the modeling of economic system sustainability. These studies have created the basis for the development of a new research area, the Economics of Quality, whose tools use model simulation to construct mathematical models that adequately reflect the role of quality in the natural, technical, and social regularities governing complex socio-economic systems. In our view, the extensive application and development of such models, together with system modeling using supercomputer technologies, will bring research on socio-economic systems to an essentially new level. Moreover, this line of research makes a significant contribution to the simulation of multi-agent social systems and, no less importantly, belongs to the priority areas of science and technology development in our country. This article is devoted to the application of supercomputer technologies in the social sciences, above all the technical realization of large-scale agent-focused models (AFM). The essence of this tool is that the growth of computing power has made it possible to describe the behavior of the many separate components of a complex system, such as a socio-economic system. The article also reviews the experience of foreign scientists and practitioners in running AFM on supercomputers, as well as the example of an AFM developed at CEMI RAS, analyzing the stages and methods of efficiently mapping the computational kernel of a multi-agent system onto the architecture of a modern supercomputer. Experiments based on simulating the population of St. Petersburg under three scenarios, as one of the major factors influencing the development of the socio-economic system and the quality of life of the population, are presented in the

  14. Integration Of PanDA Workload Management System With Supercomputers for ATLAS and Data Intensive Science

    Energy Technology Data Exchange (ETDEWEB)

    De, K [University of Texas at Arlington; Jha, S [Rutgers University; Klimentov, A [Brookhaven National Laboratory (BNL); Maeno, T [Brookhaven National Laboratory (BNL); Nilsson, P [Brookhaven National Laboratory (BNL); Oleynik, D [University of Texas at Arlington; Panitkin, S [Brookhaven National Laboratory (BNL); Wells, Jack C [ORNL; Wenaus, T [Brookhaven National Laboratory (BNL)

    2016-01-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3 petaFLOPS, LHC data taking runs require more resources than Grid computing can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with supercomputers in United States, Europe and Russia (in particular with Titan supercomputer at Oak Ridge Leadership Computing Facility (OLCF), MIRA supercomputer at Argonne Leadership Computing Facilities (ALCF), Supercomputer at the National Research Center Kurchatov Institute , IT4 in Ostrava and others). Current approach utilizes modified PanDA pilot framework for job submission to the supercomputers batch queues and local data management, with light-weight MPI wrappers to run single threaded workloads in parallel on LCFs multi-core worker nodes. This implementation

  15. Symbolic simulation of engineering systems on a supercomputer

    International Nuclear Information System (INIS)

    Ragheb, M.; Gvillo, D.; Makowitz, H.

    1986-01-01

    Model-Based Production-Rule systems for analysis are developed for the symbolic simulation of Complex Engineering systems on a CRAY X-MP Supercomputer. The Fault-Tree and Event-Tree Analysis methodologies from Systems-Analysis are used for problem representation and are coupled to the Rule-Based System Paradigm from Knowledge Engineering to provide modelling of engineering devices. Modelling is based on knowledge of the structure and function of the device rather than on human expertise alone. To implement the methodology, we developed a production-Rule Analysis System that uses both backward-chaining and forward-chaining: HAL-1986. The inference engine uses an Induction-Deduction-Oriented antecedent-consequent logic and is programmed in Portable Standard Lisp (PSL). The inference engine is general and can accommodate general modifications and additions to the knowledge base. The methodologies used will be demonstrated using a model for the identification of faults, and subsequent recovery from abnormal situations in Nuclear Reactor Safety Analysis. The use of the exposed methodologies for the prognostication of future device responses under operational and accident conditions using coupled symbolic and procedural programming is discussed
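
    The antecedent-consequent (forward-chaining) part of the approach can be illustrated with a toy rule base; the original system was written in Portable Standard Lisp and also supports backward chaining, and none of the rule or fact names below are taken from it.

        rules = [
            ({"pump_A_failed", "pump_B_failed"}, "loss_of_feedwater"),
            ({"loss_of_feedwater"}, "steam_generator_dryout_risk"),
            ({"steam_generator_dryout_risk", "aux_feedwater_unavailable"},
             "initiate_recovery_procedure"),
        ]

        facts = {"pump_A_failed", "pump_B_failed", "aux_feedwater_unavailable"}

        changed = True
        while changed:                  # fire rules until no new fact is derived
            changed = False
            for antecedents, consequent in rules:
                if antecedents <= facts and consequent not in facts:
                    facts.add(consequent)
                    changed = True

        print(sorted(facts))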

  16. Tryton Supercomputer Capabilities for Analysis of Massive Data Streams

    Directory of Open Access Journals (Sweden)

    Krawczyk Henryk

    2015-09-01

    Full Text Available The recently deployed supercomputer Tryton, located in the Academic Computer Center of Gdansk University of Technology, provides great means for massively parallel processing. Moreover, the status of the Center as one of the main network nodes in the PIONIER network enables the fast and reliable transfer of data produced by miscellaneous devices scattered across the whole country. Typical examples of such data are streams containing radio-telescope and satellite observations. Their analysis, especially under real-time constraints, can be challenging and requires the usage of dedicated software components. We propose a solution for such parallel analysis using the supercomputer, supervised by the KASKADA platform, which, in conjunction with immersive 3D visualization techniques, can be used to solve problems such as pulsar detection and chronometric analysis or oil-spill simulation on the sea surface.

  17. Performance modeling of hybrid MPI/OpenMP scientific applications on large-scale multicore supercomputers

    KAUST Repository

    Wu, Xingfu; Taylor, Valerie

    2013-01-01

    In this paper, we present a performance modeling framework based on memory bandwidth contention time and a parameterized communication model to predict the performance of OpenMP, MPI and hybrid applications with weak scaling on three large-scale multicore supercomputers: IBM POWER4, POWER5+ and BlueGene/P, and analyze the performance of these MPI, OpenMP and hybrid applications. We use STREAM memory benchmarks and Intel's MPI benchmarks to provide initial performance analysis and model validation of MPI and OpenMP applications on these multicore supercomputers because the measured sustained memory bandwidth can provide insight into the memory bandwidth that a system should sustain on scientific applications with the same amount of workload per core. In addition to using these benchmarks, we also use a weak-scaling hybrid MPI/OpenMP large-scale scientific application: Gyrokinetic Toroidal Code (GTC) in magnetic fusion to validate our performance model of the hybrid application on these multicore supercomputers. The validation results for our performance modeling method show less than 7.77% error rate in predicting the performance of hybrid MPI/OpenMP GTC on up to 512 cores on these multicore supercomputers. © 2013 Elsevier Inc.
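
    The general shape of such a model can be sketched as the sum of a compute term, a memory-bandwidth contention term shared by the cores of a socket, and a parameterized communication term. The functional forms and parameter names below are illustrative assumptions, not the authors' exact formulation.

      def predicted_runtime(work_per_core_flops, flop_rate,
                            bytes_per_core, stream_bw_per_socket, cores_per_socket,
                            msg_count, latency, msg_bytes, link_bw):
          """Toy weak-scaling runtime model: compute + memory contention + communication."""
          t_comp = work_per_core_flops / flop_rate
          # All cores of a socket contend for the sustained (STREAM-measured) bandwidth.
          t_mem = bytes_per_core * cores_per_socket / stream_bw_per_socket
          t_comm = msg_count * (latency + msg_bytes / link_bw)
          return t_comp + t_mem + t_comm

      # Example with made-up numbers (SI units: flops, bytes, seconds).
      print(predicted_runtime(1e11, 5e9, 4e10, 1.0e10, 4, 1000, 5e-6, 1e6, 1e9))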

  18. Performance modeling of hybrid MPI/OpenMP scientific applications on large-scale multicore supercomputers

    KAUST Repository

    Wu, Xingfu

    2013-12-01

    In this paper, we present a performance modeling framework based on memory bandwidth contention time and a parameterized communication model to predict the performance of OpenMP, MPI and hybrid applications with weak scaling on three large-scale multicore supercomputers: IBM POWER4, POWER5+ and BlueGene/P, and analyze the performance of these MPI, OpenMP and hybrid applications. We use STREAM memory benchmarks and Intel's MPI benchmarks to provide initial performance analysis and model validation of MPI and OpenMP applications on these multicore supercomputers because the measured sustained memory bandwidth can provide insight into the memory bandwidth that a system should sustain on scientific applications with the same amount of workload per core. In addition to using these benchmarks, we also use a weak-scaling hybrid MPI/OpenMP large-scale scientific application: Gyrokinetic Toroidal Code (GTC) in magnetic fusion to validate our performance model of the hybrid application on these multicore supercomputers. The validation results for our performance modeling method show less than 7.77% error rate in predicting the performance of hybrid MPI/OpenMP GTC on up to 512 cores on these multicore supercomputers. © 2013 Elsevier Inc.

  19. Guide to dataflow supercomputing basic concepts, case studies, and a detailed example

    CERN Document Server

    Milutinovic, Veljko; Trifunovic, Nemanja; Giorgi, Roberto

    2015-01-01

    This unique text/reference describes an exciting and novel approach to supercomputing in the DataFlow paradigm. The major advantages and applications of this approach are clearly described, and a detailed explanation of the programming model is provided using simple yet effective examples. The work is developed from a series of lecture courses taught by the authors in more than 40 universities across more than 20 countries, and from research carried out by Maxeler Technologies, Inc. Topics and features: presents a thorough introduction to DataFlow supercomputing for big data problems; revie

  20. Enabling Diverse Software Stacks on Supercomputers using High Performance Virtual Clusters.

    Energy Technology Data Exchange (ETDEWEB)

    Younge, Andrew J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Pedretti, Kevin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Grant, Ryan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Brightwell, Ron [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-05-01

    While large-scale simulations have been the hallmark of the High Performance Computing (HPC) community for decades, Large Scale Data Analytics (LSDA) workloads are gaining attention within the scientific community not only as a processing component to large HPC simulations, but also as standalone scientific tools for knowledge discovery. With the path towards Exascale, new HPC runtime systems are also emerging in a way that differs from classical distributed computing models. However, system software for such capabilities on the latest extreme-scale DOE supercomputing needs to be enhanced to more appropriately support these types of emerging software ecosystems. In this paper, we propose the use of Virtual Clusters on advanced supercomputing resources to enable systems to support not only HPC workloads, but also emerging big data stacks. Specifically, we have deployed the KVM hypervisor within Cray's Compute Node Linux on an XC-series supercomputer testbed. We also use libvirt and QEMU to manage and provision VMs directly on compute nodes, leveraging Ethernet-over-Aries network emulation. To our knowledge, this is the first known use of KVM on a true MPP supercomputer. We investigate the overhead of our solution using HPC benchmarks, both evaluating single-node performance as well as weak scaling of a 32-node virtual cluster. Overall, we find single-node performance of our solution using KVM on a Cray is very efficient, with near-native performance. However, overhead increases by up to 20% as virtual cluster size increases, due to limitations of the Ethernet-over-Aries bridged network. Furthermore, we deploy Apache Spark with large data analysis workloads in a Virtual Cluster, effectively demonstrating how diverse software ecosystems can be supported by High Performance Virtual Clusters.

  1. Cellular-automata supercomputers for fluid-dynamics modeling

    International Nuclear Information System (INIS)

    Margolus, N.; Toffoli, T.; Vichniac, G.

    1986-01-01

    We report recent developments in the modeling of fluid dynamics, and give experimental results (including dynamical exponents) obtained using cellular automata machines. Because of their locality and uniformity, cellular automata lend themselves to an extremely efficient physical realization; with a suitable architecture, an amount of hardware resources comparable to that of a home computer can achieve (in the simulation of cellular automata) the performance of a conventional supercomputer
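
    The locality and uniformity argument can be made concrete with a toy two-dimensional cellular automaton update, sketched below in NumPy; the parity rule is purely illustrative and is not one of the lattice-gas rules used for fluid modeling.

      import numpy as np

      def step(grid):
          # Each cell depends only on its four nearest neighbours (periodic boundaries),
          # which is exactly the locality that maps well onto special-purpose hardware.
          neighbours = (np.roll(grid, 1, 0) + np.roll(grid, -1, 0) +
                        np.roll(grid, 1, 1) + np.roll(grid, -1, 1))
          return (neighbours % 2).astype(grid.dtype)

      grid = (np.random.rand(64, 64) < 0.3).astype(np.uint8)
      for _ in range(100):
          grid = step(grid)
      print(grid.sum(), "cells occupied after 100 steps")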

  2. The TeraGyroid Experiment – Supercomputing 2003

    Directory of Open Access Journals (Sweden)

    R.J. Blake

    2005-01-01

    Amphiphiles are molecules with hydrophobic tails and hydrophilic heads. When dispersed in solvents, they self-assemble into complex mesophases including the beautiful cubic gyroid phase. The goal of the TeraGyroid experiment was to study defect pathways and dynamics in these gyroids. The UK's supercomputing and USA's TeraGrid facilities were coupled together, through a dedicated high-speed network, into a single computational Grid for research work that peaked around the Supercomputing 2003 conference. The gyroids were modeled using lattice Boltzmann methods, with parameter spaces explored using many 128³-grid-point simulations; this data was used to inform the world's largest three-dimensional time-dependent simulation, with 1024³ grid points. The experiment generated some 2 TBytes of useful data. In terms of Grid technology, the project demonstrated the migration of simulations (using Globus middleware) to and fro across the Atlantic, exploiting the availability of resources. Integration of the systems accelerated the time to insight. Distributed visualisation of the output datasets enabled the parameter space of the interactions within the complex fluid to be explored from a number of sites, informed by discourse over the Access Grid. The project was sponsored by EPSRC (UK) and NSF (USA), with trans-Atlantic optical bandwidth provided by British Telecommunications.

  3. Integration Of PanDA Workload Management System With Supercomputers for ATLAS and Data Intensive Science

    Science.gov (United States)

    Klimentov, A.; De, K.; Jha, S.; Maeno, T.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Wells, J.; Wenaus, T.

    2016-10-01

    The LHC, operating at CERN, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3 petaFLOPS, LHC data taking runs require more resources than the Grid can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with supercomputers in the United States, in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility. The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on the LCFs' multi-core worker nodes. This implementation was tested with a variety of Monte-Carlo workloads on several supercomputing platforms for the ALICE and ATLAS experiments, and it has been in full production for ATLAS since September 2015. We will present our current accomplishments with running PanDA at supercomputers and demonstrate our ability to use PanDA as a portal independent of the

  4. Integration Of PanDA Workload Management System With Supercomputers for ATLAS and Data Intensive Science

    International Nuclear Information System (INIS)

    Klimentov, A; Maeno, T; Nilsson, P; Panitkin, S; Wenaus, T; De, K; Oleynik, D; Jha, S; Wells, J

    2016-01-01

    The LHC, operating at CERN, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3 petaFLOPS, LHC data taking runs require more resources than the Grid can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with supercomputers in the United States, in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility. The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on the LCFs' multi-core worker nodes. This implementation was tested with a variety of Monte-Carlo workloads on several supercomputing platforms for the ALICE and ATLAS experiments, and it has been in full production for ATLAS since September 2015. We will present our current accomplishments with running PanDA at supercomputers and demonstrate our ability to use PanDA as a portal independent of the

  5. Future gripper needs in nuclear environments

    International Nuclear Information System (INIS)

    Ham, A.C. van der; Holweg, E.G.M.; Jongkind, W.

    1993-01-01

    This paper is concerned with the requirements of teleoperated grippers for work in hazardous situations and nuclear environments. A survey of the grippers presently in use and of future gripper needs was performed among users in the nuclear industry by means of questionnaires. The survey covers reliability, tasks to be done, object properties, accuracy, environmental requirements, required grasps, and mechanical and sensorial requirements. The paper will present the proposal for a future gripper. (author)

  6. Harnessing Petaflop-Scale Multi-Core Supercomputing for Problems in Space Science

    Science.gov (United States)

    Albright, B. J.; Yin, L.; Bowers, K. J.; Daughton, W.; Bergen, B.; Kwan, T. J.

    2008-12-01

    The particle-in-cell kinetic plasma code VPIC has been migrated successfully to the world's fastest supercomputer, Roadrunner, a hybrid multi-core platform built by IBM for the Los Alamos National Laboratory. How this was achieved will be described and examples of state-of-the-art calculations in space science, in particular, the study of magnetic reconnection, will be presented. With VPIC on Roadrunner, we have performed, for the first time, plasma PIC calculations with over one trillion particles, >100× larger than calculations considered "heroic" by community standards. This allows examination of physics at unprecedented scale and fidelity. Roadrunner is an example of an emerging paradigm in supercomputing: the trend toward multi-core systems with deep hierarchies and where memory bandwidth optimization is vital to achieving high performance. Getting VPIC to perform well on such systems is a formidable challenge: the core algorithm is memory bandwidth limited with low compute-to-data ratio and requires random access to memory in its inner loop. That we were able to get VPIC to perform and scale well, achieving >0.374 Pflop/s and linear weak scaling on real physics problems on up to the full 12240-core Roadrunner machine, bodes well for harnessing these machines for our community's needs in the future. Many of the design considerations encountered commute to other multi-core and accelerated (e.g., via GPU) platforms and we modified VPIC with flexibility in mind. These will be summarized and strategies for how one might adapt a code for such platforms will be shared. Work performed under the auspices of the U.S. DOE by the LANS LLC Los Alamos National Laboratory. Dr. Bowers is a LANL Guest Scientist; he is presently at D. E. Shaw Research LLC, 120 W 45th Street, 39th Floor, New York, NY 10036.
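
    The memory-bandwidth-limited inner loop referred to above boils down to gathering field values at (effectively random) particle positions and pushing the particles; the one-dimensional sketch below illustrates that pattern with made-up values and is not VPIC code.

      import numpy as np

      def push(x, v, E_grid, dx, dt, qm):
          # Gather: interpolate (here, nearest-grid-point) the field to each particle.
          idx = (x / dx).astype(int) % E_grid.size
          E_p = E_grid[idx]
          v_new = v + qm * E_p * dt                       # accelerate
          x_new = (x + v_new * dt) % (dx * E_grid.size)   # move on a periodic domain
          return x_new, v_new

      x = np.random.rand(100_000) * 64.0
      v = np.zeros_like(x)
      E = np.sin(2 * np.pi * np.arange(64) / 64.0)
      for _ in range(10):
          x, v = push(x, v, E, dx=1.0, dt=0.1, qm=-1.0)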

  7. Role of supercomputers in magnetic fusion and energy research programs

    International Nuclear Information System (INIS)

    Killeen, J.

    1985-06-01

    The importance of computer modeling in magnetic fusion (MFE) and energy research (ER) programs is discussed. The need for the most advanced supercomputers is described, and the role of the National Magnetic Fusion Energy Computer Center in meeting these needs is explained

  8. Ultrascalable petaflop parallel supercomputer

    Science.gov (United States)

    Blumrich, Matthias A [Ridgefield, CT; Chen, Dong [Croton On Hudson, NY; Chiu, George [Cross River, NY; Cipolla, Thomas M [Katonah, NY; Coteus, Paul W [Yorktown Heights, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Hall, Shawn [Pleasantville, NY; Haring, Rudolf A [Cortlandt Manor, NY; Heidelberger, Philip [Cortlandt Manor, NY; Kopcsay, Gerard V [Yorktown Heights, NY; Ohmacht, Martin [Yorktown Heights, NY; Salapura, Valentina [Chappaqua, NY; Sugavanam, Krishnan [Mahopac, NY; Takken, Todd [Brewster, NY

    2010-07-20

    A massively parallel supercomputer of petaOPS-scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC) having up to four processing elements. The ASIC nodes are interconnected by multiple independent networks that optimally maximize the throughput of packet communications between nodes with minimal latency. The multiple networks may include three high-speed networks for parallel algorithm message passing including a Torus, collective network, and a Global Asynchronous network that provides global barrier and notification functions. These multiple independent networks may be collaboratively or independently utilized according to the needs or phases of an algorithm for optimizing algorithm processing performance. The use of a DMA engine is provided to facilitate message passing among the nodes without the expenditure of processing resources at the node.
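
    A small piece of the bookkeeping implied by a torus message-passing network is computing a node's nearest neighbours with wrap-around, as in the sketch below; the dimensions are arbitrary and the patented machine's networks are of course far richer than this.

      def torus_neighbours(coord, dims):
          """Return the six nearest-neighbour coordinates on a 3D torus."""
          x, y, z = coord
          nx, ny, nz = dims
          return [
              ((x + 1) % nx, y, z), ((x - 1) % nx, y, z),
              (x, (y + 1) % ny, z), (x, (y - 1) % ny, z),
              (x, y, (z + 1) % nz), (x, y, (z - 1) % nz),
          ]

      print(torus_neighbours((0, 0, 0), (8, 8, 8)))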

  9. Direct exploitation of a top 500 Supercomputer for Analysis of CMS Data

    International Nuclear Information System (INIS)

    Cabrillo, I; Cabellos, L; Marco, J; Fernandez, J; Gonzalez, I

    2014-01-01

    The Altamira Supercomputer hosted at the Instituto de Fisica de Cantabria (IFCA) entered into operation in summer 2012. Its latest-generation FDR InfiniBand network, used for message passing in parallel jobs, also supports the connection to General Parallel File System (GPFS) servers, enabling efficient simultaneous processing of multiple data-demanding jobs. Sharing a common GPFS system and a single LDAP-based identification with the existing Grid clusters at IFCA allows CMS researchers to exploit the large instantaneous capacity of this supercomputer to execute analysis jobs. The detailed experience describing this opportunistic use for skimming and final analysis of CMS 2012 data for a specific physics channel, resulting in an order of magnitude reduction of the waiting time, is presented.

  10. Plane-wave electronic structure calculations on a parallel supercomputer

    International Nuclear Information System (INIS)

    Nelson, J.S.; Plimpton, S.J.; Sears, M.P.

    1993-01-01

    The development of iterative solutions of Schrodinger's equation in a plane-wave (pw) basis over the last several years has coincided with great advances in the computational power available for performing the calculations. These dual developments have enabled many new and interesting condensed matter phenomena to be studied from a first-principles approach. The authors present a detailed description of the implementation on a parallel supercomputer (hypercube) of the first-order equation-of-motion solution to Schrodinger's equation, using plane-wave basis functions and ab initio separable pseudopotentials. By distributing the plane-waves across the processors of the hypercube many of the computations can be performed in parallel, resulting in decreases in the overall computation time relative to conventional vector supercomputers. This partitioning also provides ample memory for large Fast Fourier Transform (FFT) meshes and the storage of plane-wave coefficients for many hundreds of energy bands. The usefulness of the parallel techniques is demonstrated by benchmark timings for both the FFT's and iterations of the self-consistent solution of Schrodinger's equation for different sized Si unit cells of up to 512 atoms
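
    The kind of kernel whose plane-wave coefficients and FFT meshes get distributed in this approach can be illustrated by applying the kinetic-energy operator in a plane-wave basis via FFTs. The serial NumPy sketch below assumes atomic units and an arbitrary cubic box; it is not the authors' parallel hypercube implementation.

      import numpy as np

      def apply_kinetic(psi_real_space, box_length):
          """Apply the kinetic-energy operator (-1/2 Laplacian) via forward/inverse FFTs."""
          n = psi_real_space.shape[0]
          k = 2.0 * np.pi * np.fft.fftfreq(n, d=box_length / n)
          kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
          psi_k = np.fft.fftn(psi_real_space)
          return np.fft.ifftn(0.5 * (kx**2 + ky**2 + kz**2) * psi_k)

      psi = np.random.rand(32, 32, 32) + 1j * np.random.rand(32, 32, 32)
      t_psi = apply_kinetic(psi, box_length=10.0)
      print(t_psi.shape)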

  11. Problem solving in nuclear engineering using supercomputers

    International Nuclear Information System (INIS)

    Schmidt, F.; Scheuermann, W.; Schatz, A.

    1987-01-01

    The availability of supercomputers enables the engineer to formulate new strategies for problem solving. One such strategy is the Integrated Planning and Simulation System (IPSS). With the integrated systems, simulation models with greater consistency and good agreement with actual plant data can be effectively realized. In the present work some of the basic ideas of IPSS are described as well as some of the conditions necessary to build such systems. Hardware and software characteristics as realized are outlined. (orig.) [de

  12. FPS scientific computers and supercomputers in chemistry

    International Nuclear Information System (INIS)

    Curington, I.J.

    1987-01-01

    FPS Array Processors, scientific computers, and highly parallel supercomputers are used in nearly all aspects of compute-intensive computational chemistry. A survey is made of work utilizing this equipment, both published and current research. The relationship of the computer architecture to computational chemistry is discussed, with specific reference to Molecular Dynamics, Quantum Monte Carlo simulations, and Molecular Graphics applications. Recent installations of the FPS T-Series are highlighted, and examples of Molecular Graphics programs running on the FPS-5000 are shown

  13. Visualizing quantum scattering on the CM-2 supercomputer

    International Nuclear Information System (INIS)

    Richardson, J.L.

    1991-01-01

    We implement parallel algorithms for solving the time-dependent Schroedinger equation on the CM-2 supercomputer. These methods are unconditionally stable as well as unitary at each time step and have the advantage of being spatially local and explicit. We show how to visualize the dynamics of quantum scattering using techniques for visualizing complex wave functions. Several scattering problems are solved to demonstrate the use of these methods. (orig.)
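
    As a stand-in for the explicit, spatially local propagation described above, the sketch below advances the one-dimensional time-dependent Schroedinger equation with the staggered real/imaginary-part update commonly attributed to Visscher; parameters are illustrative and this is not necessarily the scheme used on the CM-2.

      import numpy as np

      def hamiltonian(psi, V, dx):
          # Local finite-difference Laplacian with periodic boundaries (atomic units).
          lap = (np.roll(psi, 1) - 2.0 * psi + np.roll(psi, -1)) / dx**2
          return -0.5 * lap + V * psi

      def step(R, I, V, dx, dt):
          R = R + dt * hamiltonian(I, V, dx)
          I = I - dt * hamiltonian(R, V, dx)
          return R, I

      x = np.linspace(-20.0, 20.0, 1024)
      dx = x[1] - x[0]
      V = 0.5 * (np.abs(x) < 1.0)                    # small square barrier
      psi0 = np.exp(-(x + 10.0) ** 2 + 2j * x)       # incoming wave packet
      R, I = psi0.real.copy(), psi0.imag.copy()
      for _ in range(2000):
          R, I = step(R, I, V, dx, dt=1e-4)
      print("approximate norm:", float(np.sum(R**2 + I**2) * dx))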

  14. Supercomputer algorithms for reactivity, dynamics and kinetics of small molecules

    International Nuclear Information System (INIS)

    Lagana, A.

    1989-01-01

    Even for small systems, the accurate characterization of reactive processes is so demanding of computer resources as to suggest the use of supercomputers having vector and parallel facilities. The full advantages of vector and parallel architectures can sometimes be obtained by simply modifying existing programs, vectorizing the manipulation of vectors and matrices, and requiring the parallel execution of independent tasks. More often, however, a significant time saving can be obtained only when the computer code undergoes a deeper restructuring, requiring a change in the computational strategy or, more radically, the adoption of a different theoretical treatment. This book discusses supercomputer strategies based upon exact and approximate methods aimed at calculating the electronic structure and the reactive properties of small systems. The book shows how, in recent years, intense design activity has led to the ability to calculate accurate electronic structures for reactive systems, exact and high-level approximations to three-dimensional reactive dynamics, and to efficient directive and declarative software for the modelling of complex systems

  15. Research to application: Supercomputing trends for the 90's - Opportunities for interdisciplinary computations

    International Nuclear Information System (INIS)

    Shankar, V.

    1991-01-01

    The progression of supercomputing is reviewed from the point of view of computational fluid dynamics (CFD), and multidisciplinary problems impacting the design of advanced aerospace configurations are addressed. The application of full potential and Euler equations to transonic and supersonic problems in the 70s and early 80s is outlined, along with Navier-Stokes computations widespread during the late 80s and early 90s. Multidisciplinary computations currently in progress are discussed, including CFD and aeroelastic coupling for both static and dynamic flexible computations; CFD, aeroelastic, and controls coupling for flutter suppression and active control; and the development of a computational electromagnetics technology based on CFD methods. Attention is given to the computational challenges standing in the way of establishing a computational environment that encompasses many technologies. 40 refs

  16. Highly parallel machines and future of scientific computing

    International Nuclear Information System (INIS)

    Singh, G.S.

    1992-01-01

    The computing requirements of large-scale scientific computing have always been ahead of what the state-of-the-art hardware of the day could supply in the form of supercomputers. For any single-processor system, the limit to the increase in computing power was realized a few years back. Now, with the advent of parallel computing systems, the availability of machines with the required computing power seems a reality. In this paper the author tries to visualize the future of large-scale scientific computing in the penultimate decade of the present century. The author summarizes trends in parallel computers and emphasizes the need for a better programming environment and software tools for optimal performance. The author concludes the paper with a critique of parallel architectures, software tools and algorithms. (author). 10 refs., 2 tabs

  17. A visualization environment for supercomputing-based applications in computational mechanics

    Energy Technology Data Exchange (ETDEWEB)

    Pavlakos, C.J.; Schoof, L.A.; Mareda, J.F.

    1993-06-01

    In this paper, we characterize a visualization environment that has been designed and prototyped for a large community of scientists and engineers, with an emphasis on supercomputing-based computational mechanics. The proposed environment makes use of a visualization server concept to provide effective, interactive visualization to the user's desktop. Benefits of using the visualization server approach are discussed. Some thoughts regarding desirable features for visualization server hardware architectures are also addressed. A brief discussion of the software environment is included. The paper concludes by summarizing certain observations which we have made regarding the implementation of such visualization environments.

  18. Novel Supercomputing Approaches for High Performance Linear Algebra Using FPGAs, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — Supercomputing plays a major role in many areas of science and engineering, and it has had tremendous impact for decades in areas such as aerospace, defense, energy,...

  19. EDF's experience with supercomputing and challenges ahead - towards multi-physics and multi-scale approaches

    International Nuclear Information System (INIS)

    Delbecq, J.M.; Banner, D.

    2003-01-01

    Nuclear power plants are a major asset of the EDF company. To remain so, in particular in a context of deregulation, competitiveness, safety and public acceptance are three conditions. These stakes apply both to existing plants and to future reactors. The purpose of the presentation is to explain how supercomputing can help EDF to satisfy these requirements. Three examples are described in detail: ensuring optimal use of nuclear fuel under wholly safe conditions, understanding and simulating the material deterioration mechanisms and moving forward with numerical simulation for the performance of EDF's activities. In conclusion, a broader vision of EDF long term R and D in the field of numerical simulation is given and especially of five challenges taken up by EDF together with its industrial and scientific partners. (author)

  20. High Performance Networks From Supercomputing to Cloud Computing

    CERN Document Server

    Abts, Dennis

    2011-01-01

    Datacenter networks provide the communication substrate for large parallel computer systems that form the ecosystem for high performance computing (HPC) systems and modern Internet applications. The design of new datacenter networks is motivated by an array of applications ranging from communication intensive climatology, complex material simulations and molecular dynamics to such Internet applications as Web search, language translation, collaborative Internet applications, streaming video and voice-over-IP. For both Supercomputing and Cloud Computing the network enables distributed applicati

  1. Intelligent Personal Supercomputer for Solving Scientific and Technical Problems

    Directory of Open Access Journals (Sweden)

    Khimich, O.M.

    2016-09-01

    A new domestic intelligent personal supercomputer of hybrid architecture, Inparkom_pg, was developed for the mathematical modeling of processes in the defense industry, engineering, construction, etc. Intelligent software for the automatic investigation of tasks of computational mathematics with approximate data of different structures was designed. Applied software for mathematical modeling problems in construction, welding and filtration processes was implemented.

  2. Visualization on supercomputing platform level II ASC milestone (3537-1B) results from Sandia.

    Energy Technology Data Exchange (ETDEWEB)

    Geveci, Berk (Kitware, Inc., Clifton Park, NY); Fabian, Nathan; Marion, Patrick (Kitware, Inc., Clifton Park, NY); Moreland, Kenneth D.

    2010-09-01

    This report provides documentation for the completion of the Sandia portion of the ASC Level II Visualization on the platform milestone. This ASC Level II milestone is a joint milestone between Sandia National Laboratories and Los Alamos National Laboratories. This milestone contains functionality required for performing visualization directly on a supercomputing platform, which is necessary for peta-scale visualization. Sandia's contribution concerns in-situ visualization, running a visualization in tandem with a solver. Visualization and analysis of petascale data is limited by several factors which must be addressed as ACES delivers the Cielo platform. Two primary difficulties are: (1) Performance of interactive rendering, which is most computationally intensive portion of the visualization process. For terascale platforms, commodity clusters with graphics processors(GPUs) have been used for interactive rendering. For petascale platforms, visualization and rendering may be able to run efficiently on the supercomputer platform itself. (2) I/O bandwidth, which limits how much information can be written to disk. If we simply analyze the sparse information that is saved to disk we miss the opportunity to analyze the rich information produced every timestep by the simulation. For the first issue, we are pursuing in-situ analysis, in which simulations are coupled directly with analysis libraries at runtime. This milestone will evaluate the visualization and rendering performance of current and next generation supercomputers in contrast to GPU-based visualization clusters, and evaluate the performance of common analysis libraries coupled with the simulation that analyze and write data to disk during a running simulation. This milestone will explore, evaluate and advance the maturity level of these technologies and their applicability to problems of interest to the ASC program. Scientific simulation on parallel supercomputers is traditionally performed in four
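
    Reduced to its essentials, the in-situ approach is a solver loop that hands its in-memory state to an analysis routine every few timesteps instead of writing raw data to disk, as in the toy sketch below; the solver, the analysis and the output cadence are illustrative, and production couplings use in-situ libraries such as ParaView Catalyst rather than a hand-rolled hook.

      import numpy as np

      def analyze(step, field):
          # Cheap in-situ reduction: emit a summary instead of the full field.
          print(f"step {step}: min={field.min():.3g} max={field.max():.3g}")

      field = np.random.rand(256, 256)
      for step in range(1000):
          # Toy "solver": neighbour averaging with periodic boundaries.
          field = 0.25 * (np.roll(field, 1, 0) + np.roll(field, -1, 0) +
                          np.roll(field, 1, 1) + np.roll(field, -1, 1))
          if step % 100 == 0:
              analyze(step, field)   # analysis runs in tandem with the solver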

  3. Global environment outlook GEO5. Environment for the future we want

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2012-05-15

    The main goal of UNEP's Global Environment Outlook (GEO) is to keep governments and stakeholders informed of the state and trends of the global environment. Over the past 15 years, the GEO reports have examined a wealth of data, information and knowledge about the global environment; identified potential policy responses; and provided an outlook for the future. The assessments, and their consultative and collaborative processes, have worked to bridge the gap between science and policy by turning the best available scientific knowledge into information relevant for decision makers. The GEO-5 report is made up of 17 chapters organized into three distinct but linked parts. Part 1 - State and trends of the global environment; Part 2 - Policy options from the regions; Part 3 - Opportunities for a global response.

  4. Global environment outlook GEO5. Environment for the future we want

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2012-05-15

    The main goal of UNEP's Global Environment Outlook (GEO) is to keep governments and stakeholders informed of the state and trends of the global environment. Over the past 15 years, the GEO reports have examined a wealth of data, information and knowledge about the global environment; identified potential policy responses; and provided an outlook for the future. The assessments, and their consultative and collaborative processes, have worked to bridge the gap between science and policy by turning the best available scientific knowledge into information relevant for decision makers. The GEO-5 report is made up of 17 chapters organized into three distinct but linked parts. Part 1 - State and trends of the global environment; Part 2 - Policy options from the regions; Part 3 - Opportunities for a global response.

  5. Multi-petascale highly efficient parallel supercomputer

    Science.gov (United States)

    Asaad, Sameh; Bellofatto, Ralph E.; Blocksome, Michael A.; Blumrich, Matthias A.; Boyle, Peter; Brunheroto, Jose R.; Chen, Dong; Cher, Chen-Yong; Chiu, George L.; Christ, Norman; Coteus, Paul W.; Davis, Kristan D.; Dozsa, Gabor J.; Eichenberger, Alexandre E.; Eisley, Noel A.; Ellavsky, Matthew R.; Evans, Kahn C.; Fleischer, Bruce M.; Fox, Thomas W.; Gara, Alan; Giampapa, Mark E.; Gooding, Thomas M.; Gschwind, Michael K.; Gunnels, John A.; Hall, Shawn A.; Haring, Rudolf A.; Heidelberger, Philip; Inglett, Todd A.; Knudson, Brant L.; Kopcsay, Gerard V.; Kumar, Sameer; Mamidala, Amith R.; Marcella, James A.; Megerian, Mark G.; Miller, Douglas R.; Miller, Samuel J.; Muff, Adam J.; Mundy, Michael B.; O'Brien, John K.; O'Brien, Kathryn M.; Ohmacht, Martin; Parker, Jeffrey J.; Poole, Ruth J.; Ratterman, Joseph D.; Salapura, Valentina; Satterfield, David L.; Senger, Robert M.; Steinmacher-Burow, Burkhard; Stockdell, William M.; Stunkel, Craig B.; Sugavanam, Krishnan; Sugawara, Yutaka; Takken, Todd E.; Trager, Barry M.; Van Oosten, James L.; Wait, Charles D.; Walkup, Robert E.; Watson, Alfred T.; Wisniewski, Robert W.; Wu, Peng

    2018-05-15

    A Multi-Petascale Highly Efficient Parallel Supercomputer of 100 petaflop-scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC). The ASIC nodes are interconnected by a five-dimensional torus network that optimally maximizes the throughput of packet communications between nodes and minimizes latency. The network implements a collective network and a global asynchronous network that provides global barrier and notification functions. Integrated in the node design is a list-based prefetcher. The memory system implements transactional memory, thread-level speculation, and a multiversioning cache that at the same time improves the soft error rate, and supports DMA functionality allowing for parallel-processing message passing.

  6. Mathematical methods and supercomputing in nuclear applications. Proceedings. Vol. 2

    International Nuclear Information System (INIS)

    Kuesters, H.; Stein, E.; Werner, W.

    1993-04-01

    All papers of the two volumes are separately indexed in the data base. Main topics are: Progress in advanced numerical techniques, fluid mechanics, on-line systems, artificial intelligence applications, nodal methods reactor kinetics, reactor design, supercomputer architecture, probabilistic estimation of risk assessment, methods in transport theory, advances in Monte Carlo techniques, and man-machine interface. (orig.)

  7. Mathematical methods and supercomputing in nuclear applications. Proceedings. Vol. 1

    International Nuclear Information System (INIS)

    Kuesters, H.; Stein, E.; Werner, W.

    1993-04-01

    All papers of the two volumes are separately indexed in the data base. Main topics are: Progress in advanced numerical techniques, fluid mechanics, on-line systems, artificial intelligence applications, nodal methods reactor kinetics, reactor design, supercomputer architecture, probabilistic estimation of risk assessment, methods in transport theory, advances in Monte Carlo techniques, and man-machine interface. (orig.)

  8. Exploration and production environment. Preserving the future our responsibility

    International Nuclear Information System (INIS)

    2004-01-01

    This document presents the Total Group's commitments to manage natural resources in a rational way, to preserve biodiversity for future generations and to protect the environment. It contains the health, safety, environment and quality charter of Total, the 12 exploration and production health, safety and environment rules, and the exploration and production environmental policy. (A.L.B.)

  9. New Mexico High School Supercomputing Challenge, 1990--1995: Five years of making a difference to students, teachers, schools, and communities. Progress report

    Energy Technology Data Exchange (ETDEWEB)

    Foster, M.; Kratzer, D.

    1996-02-01

    The New Mexico High School Supercomputing Challenge is an academic program dedicated to increasing interest in science and math among high school students by introducing them to high performance computing. This report provides a summary and evaluation of the first five years of the program, describes the program and shows the impact that it has had on high school students, their teachers, and their communities. Goals and objectives are reviewed and evaluated, growth and development of the program are analyzed, and future directions are discussed.

  10. Design and performance characterization of electronic structure calculations on massively parallel supercomputers

    DEFF Research Database (Denmark)

    Romero, N. A.; Glinsvad, Christian; Larsen, Ask Hjorth

    2013-01-01

    Density function theory (DFT) is the most widely employed electronic structure method because of its favorable scaling with system size and accuracy for a broad range of molecular and condensed-phase systems. The advent of massively parallel supercomputers has enhanced the scientific community...

  11. Palacios and Kitten : high performance operating systems for scalable virtualized and native supercomputing.

    Energy Technology Data Exchange (ETDEWEB)

    Widener, Patrick (University of New Mexico); Jaconette, Steven (Northwestern University); Bridges, Patrick G. (University of New Mexico); Xia, Lei (Northwestern University); Dinda, Peter (Northwestern University); Cui, Zheng.; Lange, John (Northwestern University); Hudson, Trammell B.; Levenhagen, Michael J.; Pedretti, Kevin Thomas Tauke; Brightwell, Ronald Brian

    2009-09-01

    Palacios and Kitten are new open source tools that enable applications, whether ported or not, to achieve scalable high performance on large machines. They provide a thin layer over the hardware to support both full-featured virtualized environments and native code bases. Kitten is an OS under development at Sandia that implements a lightweight kernel architecture to provide predictable behavior and increased flexibility on large machines, while also providing Linux binary compatibility. Palacios is a VMM that is under development at Northwestern University and the University of New Mexico. Palacios, which can be embedded into Kitten and other OSes, supports existing, unmodified applications and operating systems by using virtualization that leverages hardware technologies. We describe the design and implementation of both Kitten and Palacios. Our benchmarks show that they provide near native, scalable performance. Palacios and Kitten provide an incremental path to using supercomputer resources that is not performance-compromised.

  12. A user-friendly web portal for T-Coffee on supercomputers

    Directory of Open Access Journals (Sweden)

    Koetsier Jos

    2011-05-01

    Background: Parallel T-Coffee (PTC) was the first parallel implementation of the T-Coffee multiple sequence alignment tool. It is based on MPI and RMA mechanisms. Its purpose is to reduce the execution time of large-scale sequence alignments. It can be run on distributed memory clusters, allowing users to align data sets consisting of hundreds of proteins within a reasonable time. However, most of the potential users of this tool are not familiar with the use of grids or supercomputers. Results: In this paper we show how PTC can be easily deployed and controlled on a supercomputer architecture using a web portal developed using Rapid. Rapid is a tool for efficiently generating standardized portlets for a wide range of applications, and the approach described here is generic enough to be applied to other applications, or to deploy PTC on different HPC environments. Conclusions: The PTC portal allows users to upload a large number of sequences to be aligned by the parallel version of T-Coffee that cannot be aligned by a single machine due to memory and execution time constraints. The web portal provides a user-friendly solution.

  13. Computational Science with the Titan Supercomputer: Early Outcomes and Lessons Learned

    Science.gov (United States)

    Wells, Jack

    2014-03-01

    Modeling and simulation with petascale computing has supercharged the process of innovation and understanding, dramatically accelerating time-to-insight and time-to-discovery. This presentation will focus on early outcomes from the Titan supercomputer at the Oak Ridge National Laboratory. Titan has over 18,000 hybrid compute nodes consisting of both CPUs and GPUs. In this presentation, I will discuss the lessons we have learned in deploying Titan and preparing applications to move from conventional CPU architectures to a hybrid machine. I will present early results of materials applications running on Titan and the implications for the research community as we prepare for exascale supercomputers in the next decade. Lastly, I will provide an overview of user programs at the Oak Ridge Leadership Computing Facility, with specific information on how researchers may apply for allocations of computing resources. This research used resources of the Oak Ridge Leadership Computing Facility at the Oak Ridge National Laboratory, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC05-00OR22725.

  14. Integration of PanDA workload management system with Titan supercomputer at OLCF

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00300320; Klimentov, Alexei; Oleynik, Danila; Panitkin, Sergey; Petrosyan, Artem; Vaniachine, Alexandre; Wenaus, Torre; Schovancova, Jaroslava

    2015-01-01

    The PanDA (Production and Distributed Analysis) workload management system (WMS) was developed to meet the scale and complexity of LHC distributed computing for the ATLAS experiment. While PanDA currently distributes jobs to more than 100,000 cores at well over 100 Grid sites, the next LHC data taking run will require more resources than Grid computing can possibly provide. To alleviate these challenges, ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with the Titan supercomputer at Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA pilot framework for job submission to Titan's batch queues and local data management, with light-weight MPI wrappers to run single threaded workloads in parallel on Titan's multi-core worker nodes. It also gives PanDA new capability to collect, in real time, information about unused...

  15. Integration of PanDA workload management system with Titan supercomputer at OLCF

    CERN Document Server

    Panitkin, Sergey; The ATLAS collaboration; Klimentov, Alexei; Oleynik, Danila; Petrosyan, Artem; Schovancova, Jaroslava; Vaniachine, Alexandre; Wenaus, Torre

    2015-01-01

    The PanDA (Production and Distributed Analysis) workload management system (WMS) was developed to meet the scale and complexity of LHC distributed computing for the ATLAS experiment. While PanDA currently uses more than 100,000 cores at well over 100 Grid sites with a peak performance of 0.3 petaFLOPS, the next LHC data taking run will require more resources than Grid computing can possibly provide. To alleviate these challenges, ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with the Titan supercomputer at Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA pilot framework for job submission to Titan's batch queues and local data management, with light-weight MPI wrappers to run single threaded workloads in parallel on Titan's multi-core worker nodes. It also gives PanDA new capability to collect, in real tim...

  16. ParaBTM: A Parallel Processing Framework for Biomedical Text Mining on Supercomputers.

    Science.gov (United States)

    Xing, Yuting; Wu, Chengkun; Yang, Xi; Wang, Wei; Zhu, En; Yin, Jianping

    2018-04-27

    A prevailing way of extracting valuable information from biomedical literature is to apply text mining methods on unstructured texts. However, the massive amount of literature that needs to be analyzed poses a big data challenge to the processing efficiency of text mining. In this paper, we address this challenge by introducing parallel processing on a supercomputer. We developed paraBTM, a runnable framework that enables parallel text mining on the Tianhe-2 supercomputer. It employs a low-cost yet effective load balancing strategy to maximize the efficiency of parallel processing. We evaluated the performance of paraBTM on several datasets, utilizing three types of named entity recognition tasks as demonstration. Results show that, in most cases, the processing efficiency can be greatly improved with parallel processing, and the proposed load balancing strategy is simple and effective. In addition, our framework can be readily applied to other tasks of biomedical text mining besides NER.
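
    A low-cost load-balancing strategy in the same spirit can be sketched as a greedy longest-first assignment of documents to workers by estimated cost (here, simply text length); this illustrates the idea only and is not paraBTM's actual algorithm.

      import heapq

      def balance(documents, n_workers):
          """Longest-processing-time-first greedy assignment of documents to workers."""
          heap = [(0, w) for w in range(n_workers)]          # (current load, worker id)
          heapq.heapify(heap)
          assignment = {w: [] for w in range(n_workers)}
          for doc in sorted(documents, key=len, reverse=True):
              load, w = heapq.heappop(heap)
              assignment[w].append(doc)
              heapq.heappush(heap, (load + len(doc), w))
          return assignment

      docs = ["short note", "a rather longer abstract " * 40, "medium piece " * 10]
      print({w: sum(map(len, d)) for w, d in balance(docs, 2).items()})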

  17. Explaining the gap between theoretical peak performance and real performance for supercomputer architectures

    International Nuclear Information System (INIS)

    Schoenauer, W.; Haefner, H.

    1993-01-01

    The basic architectures of vector and parallel computers and their properties are presented. Then the memory size and the arithmetic operations in the context of memory bandwidth are discussed. For the exemplary discussion of a single operation, micro-measurements of the vector triad for the IBM 3090 VF and the CRAY Y-MP/8 are presented. They reveal the details of the losses for a single operation. Then we analyze the global performance of a whole supercomputer by identifying reduction factors that bring the theoretical peak performance down to the poor real performance. The responsibilities of the manufacturer and of the user for these losses are discussed. Then the price-performance ratio for different architectures in a snapshot of January 1991 is briefly mentioned. Finally, some remarks on a user-friendly architecture for a supercomputer are made. (orig.)
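
    The reduction-factor view can be written down directly: real performance is the theoretical peak multiplied by a chain of loss factors (memory traffic, vector length, stride, load imbalance, and so on). The factors in the sketch below are made-up placeholders, not measurements from the paper.

      def real_performance(peak_gflops, reduction_factors):
          """Multiply peak performance by a chain of loss factors between 0 and 1."""
          perf = peak_gflops
          for factor in reduction_factors:
              perf *= factor
          return perf

      # Hypothetical machine: 2.67 GFLOP/s peak eroded by three loss mechanisms.
      print(real_performance(2.67, [0.5, 0.7, 0.8]))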

  18. EDF's experience with supercomputing and challenges ahead - towards multi-physics and multi-scale approaches

    Energy Technology Data Exchange (ETDEWEB)

    Delbecq, J.M.; Banner, D. [Electricite de France (EDF)- R and D Division, 92 - Clamart (France)

    2003-07-01

    Nuclear power plants are a major asset of the EDF company. To remain so, in particular in a context of deregulation, competitiveness, safety and public acceptance are three conditions. These stakes apply both to existing plants and to future reactors. The purpose of the presentation is to explain how supercomputing can help EDF to satisfy these requirements. Three examples are described in detail: ensuring optimal use of nuclear fuel under wholly safe conditions, understanding and simulating the material deterioration mechanisms and moving forward with numerical simulation for the performance of EDF's activities. In conclusion, a broader vision of EDF long term R and D in the field of numerical simulation is given and especially of five challenges taken up by EDF together with its industrial and scientific partners. (author)

  19. HPL and STREAM Benchmarks on SANAM Supercomputer

    KAUST Repository

    Bin Sulaiman, Riman A.

    2017-01-01

    SANAM supercomputer was jointly built by KACST and FIAS in 2012 ranking second that year in the Green500 list with a power efficiency of 2.3 GFLOPS/W (Rohr et al., 2014). It is a heterogeneous accelerator-based HPC system that has 300 compute nodes. Each node includes two Intel Xeon E5-2650 CPUs, two AMD FirePro S10000 dual GPUs and 128 GiB of main memory. In this work, the seven benchmarks of HPCC were installed and configured to reassess the performance of SANAM, as part of an unpublished master thesis, after it was reassembled in the Kingdom of Saudi Arabia. We present here detailed results of HPL and STREAM benchmarks.

  20. HPL and STREAM Benchmarks on SANAM Supercomputer

    KAUST Repository

    Bin Sulaiman, Riman A.

    2017-03-13

    SANAM supercomputer was jointly built by KACST and FIAS in 2012 ranking second that year in the Green500 list with a power efficiency of 2.3 GFLOPS/W (Rohr et al., 2014). It is a heterogeneous accelerator-based HPC system that has 300 compute nodes. Each node includes two Intel Xeon E5-2650 CPUs, two AMD FirePro S10000 dual GPUs and 128 GiB of main memory. In this work, the seven benchmarks of HPCC were installed and configured to reassess the performance of SANAM, as part of an unpublished master thesis, after it was reassembled in the Kingdom of Saudi Arabia. We present here detailed results of HPL and STREAM benchmarks.
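
    For orientation, the sketch below gives back-of-the-envelope NumPy stand-ins for the two benchmarks named above: an HPL-like dense solve reported in GFLOP/s and the STREAM triad reported in GB/s. Problem sizes are toy-scale and the numbers bear no relation to a tuned run on SANAM.

      import time
      import numpy as np

      def hpl_like(n=2000):
          """Dense solve timed like HPL, counting 2/3*n^3 + 2*n^2 flops."""
          A = np.random.rand(n, n); b = np.random.rand(n)
          t0 = time.perf_counter()
          np.linalg.solve(A, b)
          return (2.0 / 3.0 * n**3 + 2.0 * n**2) / (time.perf_counter() - t0) / 1e9

      def stream_triad(n=20_000_000):
          """a = b + scalar*c, bandwidth counted over three 8-byte arrays."""
          a = np.zeros(n); b = np.random.rand(n); c = np.random.rand(n)
          t0 = time.perf_counter()
          np.add(b, 3.0 * c, out=a)
          return 3 * n * 8 / (time.perf_counter() - t0) / 1e9

      print(f"HPL-like: {hpl_like():.1f} GFLOP/s, triad: {stream_triad():.1f} GB/s")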

  1. Urban warming in Tokyo area and counterplan to improve future environment

    International Nuclear Information System (INIS)

    Saitoh, T.S.; Hoshi, H.

    1993-01-01

    The rapid progress of industrialization and the concentration of economic and social functions in urban areas have stimulated a consistent increase in population and energy consumption. The sudden urbanization of modern cities has caused environmental problems, including alteration of the local climate. This is a phenomenon peculiar to urban areas and is characterized by a consistent rise in the temperature of the urban atmosphere, an increase in air pollutants, a decrease in relative humidity, and so on. The phenomenon characterized by a noticeable temperature rise in the urban atmosphere has been called the urban heat island and has been analyzed by both observational and numerical approaches. Numerical models can be classified into two types: the mechanical model and the energy balance model. Since Howard reported on the urban heat island in London, there have been a number of observational studies and numerical studies based on two-dimensional modeling. Recently, three-dimensional studies have been reported, coinciding with the great advancement of the supercomputer. The present paper reports the results of field observations by automobile in the Tokyo metropolitan area and also the results of a three-dimensional simulation of urban warming in Tokyo at present and in the future, around 2030. Further, the authors also present the results of a simulation of the effect of tree planting and vegetation

  2. An efficient implementation of a backpropagation learning algorithm on quadrics parallel supercomputer

    International Nuclear Information System (INIS)

    Taraglio, S.; Massaioli, F.

    1995-08-01

    A parallel implementation of a library to build and train Multi Layer Perceptrons via the Back Propagation algorithm is presented. The target machine is the SIMD massively parallel supercomputer Quadrics. Performance measures are provided on three different machines with different number of processors, for two network examples. A sample source code is given
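
    To make the algorithm concrete, the sketch below trains a tiny multi-layer perceptron with plain backpropagation in NumPy; it is a serial toy with invented data, not the SIMD Quadrics library described in the paper.

      import numpy as np

      rng = np.random.default_rng(0)
      X = rng.random((256, 4))
      y = (X.sum(axis=1, keepdims=True) > 2.0).astype(float)

      W1 = rng.normal(0.0, 0.5, (4, 8)); b1 = np.zeros(8)
      W2 = rng.normal(0.0, 0.5, (8, 1)); b2 = np.zeros(1)
      lr = 0.5

      def sigmoid(z):
          return 1.0 / (1.0 + np.exp(-z))

      for epoch in range(2000):
          h = sigmoid(X @ W1 + b1)                  # forward pass
          out = sigmoid(h @ W2 + b2)
          d_out = (out - y) * out * (1.0 - out)     # backward pass (squared error)
          d_h = (d_out @ W2.T) * h * (1.0 - h)
          W2 -= lr * h.T @ d_out / len(X); b2 -= lr * d_out.mean(axis=0)
          W1 -= lr * X.T @ d_h / len(X);   b1 -= lr * d_h.mean(axis=0)

      print("final mean squared error:", float(np.mean((out - y) ** 2)))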

  3. Supercomputing Centers and Electricity Service Providers

    DEFF Research Database (Denmark)

    Patki, Tapasya; Bates, Natalie; Ghatikar, Girish

    2016-01-01

    Supercomputing Centers (SCs) have high and variable power demands, which increase the challenges of the Electricity Service Providers (ESPs) with regards to efficient electricity distribution and reliable grid operation. High penetration of renewable energy generation further exacerbates this problem. In order to develop a symbiotic relationship between the SCs and their ESPs and to support effective power management at all levels, it is critical to understand and analyze how the existing relationships were formed and how these are expected to evolve. In this paper, we first present results from a detailed, quantitative survey-based analysis and compare the perspectives of the European grid and SCs to the ones of the United States (US). We then show that, contrary to expectation, SCs in the US are more open toward cooperating and developing demand-management strategies with their ESPs.

  4. Energy-water-environment nexus underpinning future desalination sustainability

    KAUST Repository

    Shahzad, Muhammad Wakil

    2017-03-11

    The energy-water-environment nexus is very important for attaining the COP21 goal of keeping the environmental temperature increase below 2°C, but unfortunately two thirds of the CO2 emission share has already been used and the remainder will be exhausted by 2050. A number of technological developments in the power and desalination sectors have improved their efficiencies to save energy and carbon emission, but they are still operating at 35% and 10% of their thermodynamic limits, respectively. Research in desalination processes contributes to supplying the world population with water for an improved living standard, to reducing specific energy consumption, and to protecting the environment. Recently developed, highly efficient nature-inspired membranes (aquaporin and graphene) and the trend toward hybridization of thermally driven cycles could potentially lower the energy requirement for water purification. This paper presents a state-of-the-art review of the energy, water and environment interconnection and of future energy-efficient desalination possibilities to save energy and protect the environment.

  5. Reliability Lessons Learned From GPU Experience With The Titan Supercomputer at Oak Ridge Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Gallarno, George [Christian Brothers University; Rogers, James H [ORNL; Maxwell, Don E [ORNL

    2015-01-01

    The high computational capability of graphics processing units (GPUs) is enabling and driving the scientific discovery process at large-scale. The world's second fastest supercomputer for open science, Titan, has more than 18,000 GPUs that computational scientists use to perform scientific simulations and data analysis. Understanding of GPU reliability characteristics, however, is still in its nascent stage since GPUs have only recently been deployed at large-scale. This paper presents a detailed study of GPU errors and their impact on system operations and applications, describing experiences with the 18,688 GPUs on the Titan supercomputer as well as lessons learned in the process of efficient operation of GPUs at scale. These experiences are helpful to HPC sites which already have large-scale GPU clusters or plan to deploy GPUs in the future.

  6. Supercomputers and the mathematical modeling of high complexity problems

    International Nuclear Information System (INIS)

    Belotserkovskii, Oleg M

    2010-01-01

    This paper is a review of many works carried out by members of our scientific school in past years. The general principles of constructing numerical algorithms for high-performance computers are described. Several techniques are highlighted and these are based on the method of splitting with respect to physical processes and are widely used in computing nonlinear multidimensional processes in fluid dynamics, in studies of turbulence and hydrodynamic instabilities and in medicine and other natural sciences. The advances and developments related to the new generation of high-performance supercomputing in Russia are presented.
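
    The splitting-with-respect-to-physical-processes technique mentioned above can be illustrated on a one-dimensional advection-diffusion problem, where transport and diffusion are advanced in separate sub-steps within each time step; the parameters in the sketch below are illustrative only.

      import numpy as np

      def advect(u, c, dx, dt):
          # First-order upwind transport step (c > 0), periodic boundaries.
          return u - c * dt / dx * (u - np.roll(u, 1))

      def diffuse(u, nu, dx, dt):
          # Explicit diffusion step, periodic boundaries.
          return u + nu * dt / dx**2 * (np.roll(u, 1) - 2.0 * u + np.roll(u, -1))

      x = np.linspace(0.0, 1.0, 200, endpoint=False)
      u = np.exp(-200.0 * (x - 0.3) ** 2)
      dx, dt = x[1] - x[0], 2e-4
      for _ in range(1000):
          u = diffuse(advect(u, c=1.0, dx=dx, dt=dt), nu=1e-3, dx=dx, dt=dt)
      print("peak after splitting steps:", float(u.max()))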

  7. Role of nuclear fusion in future energy systems and the environment under future uncertainties

    International Nuclear Information System (INIS)

    Tokimatsu, Koji; Fujino, Jun'ichi; Konishi, Satoshi; Ogawa, Yuichi; Yamaji, Kenji

    2003-01-01

    Debates about whether or not to invest heavily in nuclear fusion as a future innovative energy option have been made within the context of energy technology development strategies. This is because the prospects for nuclear fusion are quite uncertain and the investments therefore carry the risk of quite large regrets, even though investment is needed in order to develop the technology. The timeframe by which nuclear fusion could become competitive in the energy market has not been adequately studied, nor have the roles of nuclear fusion in energy systems and the environment. The present study has two objectives. One is to reveal the conditions under which nuclear fusion could be introduced economically (hereafter, we refer to such introductory conditions as breakeven prices) in future energy systems. The other objective is to evaluate the future roles of nuclear fusion in energy systems and in the environment. Here we identify three roles that nuclear fusion will take on when breakeven prices are achieved: (i) a portion of the electricity market in 2100, (ii) reduction of annual global total energy systems cost, and (iii) mitigation of carbon tax (shadow price of carbon) under CO2 constraints. Future uncertainties are key issues in evaluating nuclear fusion. Here we treated the following uncertainties: energy demand scenarios, introduction timeframe for nuclear fusion, capacity projections of nuclear fusion, CO2 target in 2100, capacity utilization ratio of options in energy/environment technologies, and utility discount rates. From our investigations, we conclude that the presently designed nuclear fusion reactors may be ready for economical introduction into energy systems beginning around 2050-2060, and we can confirm that their favorable introduction would reduce both the annual energy systems cost and the carbon tax (the shadow price of carbon) under a CO2 concentration constraint.

  8. Heat dissipation computations of a HVDC ground electrode using a supercomputer

    International Nuclear Information System (INIS)

    Greiss, H.; Mukhedkar, D.; Lagace, P.J.

    1990-01-01

    This paper reports on the temperature of the soil surrounding a High Voltage Direct Current (HVDC) toroidal ground electrode of practical dimensions, in both homogeneous and non-homogeneous soils, computed at incremental points in time using finite difference methods on a supercomputer. Response curves were computed and plotted at several locations within the soil in the vicinity of the ground electrode for various values of the soil parameters.

  9. Argonne National Lab deploys Force10 networks' massively dense ethernet switch for supercomputing cluster

    CERN Multimedia

    2003-01-01

    "Force10 Networks, Inc. today announced that Argonne National Laboratory (Argonne, IL) has successfully deployed Force10 E-Series switch/routers to connect to the TeraGrid, the world's largest supercomputing grid, sponsored by the National Science Foundation (NSF)" (1/2 page).

  10. NASA's Climate in a Box: Desktop Supercomputing for Open Scientific Model Development

    Science.gov (United States)

    Wojcik, G. S.; Seablom, M. S.; Lee, T. J.; McConaughy, G. R.; Syed, R.; Oloso, A.; Kemp, E. M.; Greenseid, J.; Smith, R.

    2009-12-01

    NASA's High Performance Computing Portfolio in cooperation with its Modeling, Analysis, and Prediction program intends to make its climate and earth science models more accessible to a larger community. A key goal of this effort is to open the model development and validation process to the scientific community at large such that a natural selection process is enabled and results in a more efficient scientific process. One obstacle to others using NASA models is the complexity of the models and the difficulty in learning how to use them. This situation applies not only to scientists who regularly use these models but also non-typical users who may want to use the models such as scientists from different domains, policy makers, and teachers. Another obstacle to the use of these models is that access to high performance computing (HPC) accounts, from which the models are implemented, can be restrictive with long wait times in job queues and delays caused by an arduous process of obtaining an account, especially for foreign nationals. This project explores the utility of using desktop supercomputers in providing a complete ready-to-use toolkit of climate research products to investigators and on demand access to an HPC system. One objective of this work is to pre-package NASA and NOAA models so that new users will not have to spend significant time porting the models. In addition, the prepackaged toolkit will include tools, such as workflow, visualization, social networking web sites, and analysis tools, to assist users in running the models and analyzing the data. The system architecture to be developed will allow for automatic code updates for each user and an effective means with which to deal with data that are generated. We plan to investigate several desktop systems, but our work to date has focused on a Cray CX1. Currently, we are investigating the potential capabilities of several non-traditional development environments. While most NASA and NOAA models are

  11. Japanese issues on the future behavior of the geological environment

    International Nuclear Information System (INIS)

    Aoki, Kaz; Nakatsuka, Noboru; Ishimaru, Tsuneari

    1994-01-01

    Comprehending and predicting the future states of the geological environment is very important in ensuring safe geological disposal of high level radioactive wastes (HLW). This paper is one in a series of studies required to ascertain the existence of geologically stable areas in Japan over the long term. In particular, interest is focused on accumulating data on the behavior patterns of selected natural phenomena, which will enable predictions of the future behavior of geological processes and the identification of areas with long term stability. While this paper limits itself to the second and part of the third step, the overall flow-chart of the study on natural processes and events which may perturb the geological environment entails three major steps: (i) identification of natural processes and events relevant to the long term stability of the geological environment to be evaluated; (ii) characterization of the identified natural processes and events; and (iii) prediction of the probability of occurrence, magnitude and influence of the natural processes and events which may perturb the geological environment. (J.P.N)

  12. A supercomputing application for reactors core design and optimization

    International Nuclear Information System (INIS)

    Hourcade, Edouard; Gaudier, Fabrice; Arnaud, Gilles; Funtowiez, David; Ammar, Karim

    2010-01-01

    Advanced nuclear reactor designs are often intuition-driven processes where designers first develop or use simplified simulation tools for each physical phenomenon involved. As the project develops, the complexity in each discipline increases, and the implementation of chaining/coupling capabilities adapted to a supercomputing optimization process is often postponed to a later step, so that the task gets increasingly challenging. In the current context of renewed interest in reactor designs, first-realization projects are often run in parallel with advanced design studies, although they depend strongly on the final options. As a consequence, tools are needed to globally assess and optimize reactor core features with the accuracy of the on-going design methods, within reasonable simulation time and without requiring advanced computing skills at the project-management level. These tools should also be able to easily accommodate modeling progress in each discipline throughout the project's lifetime. An early-stage development of a multi-physics package adapted to supercomputing is presented. The URANIE platform, developed at CEA and based on the data analysis framework ROOT, is very well adapted to this approach. It provides diversified sampling techniques (SRS, LHS, qMC), fitting tools (neural networks, etc.) and optimization techniques (genetic algorithms), and it makes database management and visualization very easy. In this paper, we present the various implementation steps of this core physics tool, where neutronics, thermal-hydraulics and fuel mechanics codes are run simultaneously. A relevant example of optimization of nuclear reactor safety characteristics is presented. The flexibility of the URANIE tool is also illustrated with several approaches to improve the quality of the Pareto front. (author)
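
    The sampling techniques listed above (SRS, LHS, qMC) are the workhorses of this kind of design-space exploration. As a minimal illustration of one of them, the sketch below implements a basic Latin Hypercube Sampling routine with NumPy; it is not taken from URANIE, and the design parameters in the example are purely hypothetical.

    ```python
    import numpy as np

    def latin_hypercube(n_samples, bounds, seed=None):
        """Basic Latin Hypercube Sampling over a box-shaped design space.

        bounds: list of (low, high) tuples, one per design parameter.
        Each parameter range is split into n_samples equal strata, one point
        is drawn inside each stratum, and the strata are shuffled per axis.
        """
        rng = np.random.default_rng(seed)
        n_dims = len(bounds)
        offsets = rng.random((n_samples, n_dims))
        strata = (np.arange(n_samples)[:, None] + offsets) / n_samples
        for d in range(n_dims):                      # independent shuffle per axis
            strata[:, d] = rng.permutation(strata[:, d])
        lows = np.array([b[0] for b in bounds])
        highs = np.array([b[1] for b in bounds])
        return lows + strata * (highs - lows)

    # Hypothetical core-design parameters: fuel pin radius [cm] and enrichment [%].
    print(latin_hypercube(8, [(0.3, 0.6), (3.0, 5.0)], seed=42))
    ```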

  13. HeNCE: A Heterogeneous Network Computing Environment

    Directory of Open Access Journals (Sweden)

    Adam Beguelin

    1994-01-01

    Network computing seeks to utilize the aggregate resources of many networked computers to solve a single problem. In so doing it is often possible to obtain supercomputer performance from an inexpensive local area network. The drawback is that network computing is complicated and error prone when done by hand, especially if the computers have different operating systems and data formats and are thus heterogeneous. The heterogeneous network computing environment (HeNCE) is an integrated graphical environment for creating and running parallel programs over a heterogeneous collection of computers. It is built on a lower level package called parallel virtual machine (PVM). The HeNCE philosophy of parallel programming is to have the programmer graphically specify the parallelism of a computation and to automate, as much as possible, the tasks of writing, compiling, executing, debugging, and tracing the network computation. Key to HeNCE is a graphical language based on directed graphs that describe the parallelism and data dependencies of an application. Nodes in the graphs represent conventional Fortran or C subroutines and the arcs represent data and control flow. This article describes the present state of HeNCE, its capabilities, limitations, and areas of future research.
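
    As a rough, language-neutral sketch of the kind of directed task graph that HeNCE expresses graphically, the following fragment (not part of HeNCE or PVM; node names are invented) builds a small dependency graph and derives an execution order in which every node runs only after the nodes supplying its data. In the real system, independent ready nodes such as "fft" and "filter" would run concurrently on different machines.

    ```python
    from collections import defaultdict, deque

    # Hypothetical computation: arcs point from producer to consumer,
    # mirroring the data/control-flow arcs of a HeNCE-style graph.
    graph = {
        "read_input":   ["fft", "filter"],
        "fft":          ["combine"],
        "filter":       ["combine"],
        "combine":      ["write_output"],
        "write_output": [],
    }

    def execution_order(graph):
        """Topological sort: a node becomes runnable once all its producers finished."""
        indegree = defaultdict(int)
        for node, consumers in graph.items():
            indegree.setdefault(node, 0)
            for c in consumers:
                indegree[c] += 1
        ready = deque(n for n, d in indegree.items() if d == 0)
        order = []
        while ready:
            node = ready.popleft()
            order.append(node)
            for c in graph[node]:
                indegree[c] -= 1
                if indegree[c] == 0:
                    ready.append(c)
        return order

    print(execution_order(graph))
    # e.g. ['read_input', 'fft', 'filter', 'combine', 'write_output']
    ```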

  14. Performance Evaluation of Supercomputers using HPCC and IMB Benchmarks

    Science.gov (United States)

    Saini, Subhash; Ciotti, Robert; Gunney, Brian T. N.; Spelce, Thomas E.; Koniges, Alice; Dossa, Don; Adamidis, Panagiotis; Rabenseifner, Rolf; Tiyyagura, Sunil R.; Mueller, Matthias

    2006-01-01

    The HPC Challenge (HPCC) benchmark suite and the Intel MPI Benchmark (IMB) are used to compare and evaluate the combined performance of processor, memory subsystem and interconnect fabric of five leading supercomputers - SGI Altix BX2, Cray X1, Cray Opteron Cluster, Dell Xeon cluster, and NEC SX-8. These five systems use five different networks (SGI NUMALINK4, Cray network, Myrinet, InfiniBand, and NEC IXS). The complete set of HPCC benchmarks is run on each of these systems. Additionally, we present Intel MPI Benchmark (IMB) results to study the performance of 11 MPI communication functions on these systems.

  15. An Interface for Biomedical Big Data Processing on the Tianhe-2 Supercomputer.

    Science.gov (United States)

    Yang, Xi; Wu, Chengkun; Lu, Kai; Fang, Lin; Zhang, Yong; Li, Shengkang; Guo, Guixin; Du, YunFei

    2017-12-01

    Big data, cloud computing, and high-performance computing (HPC) are on the verge of convergence. Cloud computing is already playing an active part in big data processing with the help of big data frameworks like Hadoop and Spark. The recent upsurge of high-performance computing in China provides extra possibilities and capacity to address the challenges associated with big data. In this paper, we propose Orion, a big data interface on the Tianhe-2 supercomputer, to enable big data applications to run on Tianhe-2 via a single command or a shell script. Orion supports multiple users, and each user can launch multiple tasks. It minimizes the effort needed to initiate big data applications on the Tianhe-2 supercomputer via automated configuration. Orion follows the "allocate-when-needed" paradigm, and it avoids the idle occupation of computational resources. We tested the utility and performance of Orion using a big genomic dataset and achieved satisfactory performance on Tianhe-2 with very few modifications to existing applications that were implemented in Hadoop/Spark. In summary, Orion provides a practical and economical interface for big data processing on Tianhe-2.

  16. Supercomputations and big-data analysis in strong-field ultrafast optical physics: filamentation of high-peak-power ultrashort laser pulses

    Science.gov (United States)

    Voronin, A. A.; Panchenko, V. Ya; Zheltikov, A. M.

    2016-06-01

    High-intensity ultrashort laser pulses propagating in gas media or in condensed matter undergo complex nonlinear spatiotemporal evolution where temporal transformations of optical field waveforms are strongly coupled to intricate beam dynamics and ultrafast field-induced ionization processes. At laser peak powers orders of magnitude above the critical power of self-focusing, the beam exhibits modulation instabilities, producing random field hot spots and breaking up into multiple noise-seeded filaments. This problem is described by a (3+1)-dimensional nonlinear field evolution equation, which needs to be solved jointly with the equation for ultrafast ionization of the medium. Analysis of this problem, which is equivalent to solving a billion-dimensional evolution problem, is only possible by means of supercomputer simulations augmented with coordinated big-data processing of large volumes of information acquired through theory-guiding experiments and supercomputations. Here, we review the main challenges of supercomputations and big-data processing encountered in strong-field ultrafast optical physics and discuss strategies to confront these challenges.
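
    The full problem described above is (3+1)-dimensional and coupled to ionization, which is far beyond a few lines of code. As a drastically reduced toy under strong assumptions (one dimension, no ionization, arbitrary coefficients), the sketch below advances a nonlinear Schrödinger-type envelope equation with the standard split-step Fourier method, just to show the basic structure of such field-evolution solvers; it is not the authors' production code.

    ```python
    import numpy as np

    # Toy 1-D nonlinear Schrodinger propagation via the split-step Fourier method.
    # Grid sizes and coefficients are arbitrary and chosen for illustration only.
    nt, dz, nz = 1024, 1e-3, 200
    beta2, gamma = -1.0, 1.0                       # anomalous dispersion, Kerr term
    t = np.linspace(-10.0, 10.0, nt, endpoint=False)
    w = 2 * np.pi * np.fft.fftfreq(nt, d=t[1] - t[0])    # angular frequencies

    field = 2.0 / np.cosh(t)                       # initial pulse envelope
    half_disp = np.exp(0.25j * beta2 * w**2 * dz)  # half step of dispersion

    for _ in range(nz):
        field = np.fft.ifft(half_disp * np.fft.fft(field))    # dispersion, half step
        field *= np.exp(1j * gamma * np.abs(field)**2 * dz)   # nonlinearity, full step
        field = np.fft.ifft(half_disp * np.fft.fft(field))    # dispersion, half step

    print("peak intensity after propagation:", np.max(np.abs(field))**2)
    ```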

  17. HEP Computing Tools, Grid and Supercomputers for Genome Sequencing Studies

    Science.gov (United States)

    De, K.; Klimentov, A.; Maeno, T.; Mashinistov, R.; Novikov, A.; Poyda, A.; Tertychnyy, I.; Wenaus, T.

    2017-10-01

    PanDA, the Production and Distributed Analysis workload management system, has been developed to address the data processing and analysis challenges of the ATLAS experiment at the LHC. Recently PanDA has been extended to run HEP scientific applications on Leadership Class Facilities and supercomputers. The success of the projects using PanDA beyond HEP and the Grid has drawn attention from other compute-intensive sciences such as bioinformatics. Recent advances in Next Generation Genome Sequencing (NGS) technology have led to increasing streams of sequencing data that need to be processed, analysed and made available for bioinformaticians worldwide. Analysis of genome sequencing data using the popular software pipeline PALEOMIX can take a month, even when running on a powerful computing resource. In this paper we describe the adaptation of the PALEOMIX pipeline to run on a distributed computing environment powered by PanDA. To run the pipeline we split the input files into chunks, which are processed separately on different nodes as independent PALEOMIX inputs, and finally merge the output files; this is very similar to how ATLAS processes and simulates its data. We dramatically decreased the total walltime thanks to automated job (re)submission and brokering within PanDA. Using software tools developed initially for HEP and the Grid can thus reduce the payload execution time for mammoth DNA samples from weeks to days.
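
    The chunk-and-merge strategy described above is generic. The sketch below, which is not the actual PanDA/PALEOMIX tooling, only illustrates the pattern: split a large record-oriented input into fixed-size chunks, process each chunk independently (in a PanDA-like setting each chunk would be a separate job on a different worker node), and concatenate the per-chunk outputs at the end.

    ```python
    from pathlib import Path

    def split_records(in_path, chunk_size, workdir):
        """Split a record-per-line input file into fixed-size chunk files."""
        workdir = Path(workdir)
        workdir.mkdir(exist_ok=True)
        chunks, buf, idx = [], [], 0
        with open(in_path) as fh:
            for line in fh:
                buf.append(line)
                if len(buf) == chunk_size:
                    chunk = workdir / f"chunk_{idx:04d}.txt"
                    chunk.write_text("".join(buf))
                    chunks.append(chunk)
                    buf, idx = [], idx + 1
        if buf:                                   # remainder chunk, if any
            chunk = workdir / f"chunk_{idx:04d}.txt"
            chunk.write_text("".join(buf))
            chunks.append(chunk)
        return chunks

    def merge_outputs(out_paths, merged_path):
        """Concatenate per-chunk outputs back into a single result file."""
        with open(merged_path, "w") as out:
            for p in sorted(out_paths):
                out.write(Path(p).read_text())

    # In a PanDA-like setting, each chunk would be submitted as a separate job
    # (one PALEOMIX-style run per worker node) and merge_outputs() would be
    # called once every job has finished.
    ```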

  18. Visualization system for grid environment in the nuclear field

    International Nuclear Information System (INIS)

    Suzuki, Yoshio; Matsumoto, Nobuko; Idomura, Yasuhiro; Tani, Masayuki

    2006-01-01

    An innovative scientific visualization system is needed to visualize, in an integrated manner, the large amounts of data that are generated at distributed remote locations as a result of large-scale numerical simulations in a grid environment. One of the important functions of such a visualization system is parallel visualization, which enables data to be visualized using multiple CPUs of a supercomputer. The other is distributed visualization, which enables visualization processes to be executed using a local client computer and remote computers. We have developed a toolkit including these functions in cooperation with the commercial visualization software AVS/Express, called the Parallel Support Toolkit (PST). PST can execute visualization processes with three kinds of parallelism (data parallelism, task parallelism and pipeline parallelism) using local and remote computers. We have evaluated PST for large amounts of data generated by a nuclear fusion simulation. Here, two supercomputers, Altix3700Bx2 and Prism, installed at JAEA are used. From the evaluation, it can be seen that PST has the potential to efficiently visualize large amounts of data in a grid environment. (author)

  19. Quantum Hamiltonian Physics with Supercomputers

    International Nuclear Information System (INIS)

    Vary, James P.

    2014-01-01

    The vision of solving the nuclear many-body problem in a Hamiltonian framework with fundamental interactions tied to QCD via Chiral Perturbation Theory is gaining support. The goals are to preserve the predictive power of the underlying theory, to test fundamental symmetries with the nucleus as laboratory and to develop new understandings of the full range of complex quantum phenomena. Advances in theoretical frameworks (renormalization and many-body methods) as well as in computational resources (new algorithms and leadership-class parallel computers) signal a new generation of theory and simulations that will yield profound insights into the origins of nuclear shell structure, collective phenomena and complex reaction dynamics. Fundamental discovery opportunities also exist in such areas as physics beyond the Standard Model of Elementary Particles, the transition between hadronic and quark–gluon dominated dynamics in nuclei and signals that characterize dark matter. I will review some recent achievements and present ambitious consensus plans along with their challenges for a coming decade of research that will build new links between theory, simulations and experiment. Opportunities for graduate students to embark upon careers in the fast developing field of supercomputer simulations is also discussed

  20. Quantum Hamiltonian Physics with Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Vary, James P.

    2014-06-15

    The vision of solving the nuclear many-body problem in a Hamiltonian framework with fundamental interactions tied to QCD via Chiral Perturbation Theory is gaining support. The goals are to preserve the predictive power of the underlying theory, to test fundamental symmetries with the nucleus as laboratory and to develop new understandings of the full range of complex quantum phenomena. Advances in theoretical frameworks (renormalization and many-body methods) as well as in computational resources (new algorithms and leadership-class parallel computers) signal a new generation of theory and simulations that will yield profound insights into the origins of nuclear shell structure, collective phenomena and complex reaction dynamics. Fundamental discovery opportunities also exist in such areas as physics beyond the Standard Model of Elementary Particles, the transition between hadronic and quark–gluon dominated dynamics in nuclei and signals that characterize dark matter. I will review some recent achievements and present ambitious consensus plans along with their challenges for a coming decade of research that will build new links between theory, simulations and experiment. Opportunities for graduate students to embark upon careers in the fast developing field of supercomputer simulations is also discussed.

  1. Coherent 40 Gb/s SP-16QAM and 80 Gb/s PDM-16QAM in an Optimal Supercomputer Optical Switch Fabric

    DEFF Research Database (Denmark)

    Karinou, Fotini; Borkowski, Robert; Zibar, Darko

    2013-01-01

    We demonstrate, for the first time, the feasibility of using 40 Gb/s SP-16QAM and 80 Gb/s PDM-16QAM in an optimized cell switching supercomputer optical interconnect architecture based on semiconductor optical amplifiers as ON/OFF gates.
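
    For readers unfamiliar with the modulation format, 16QAM carries four bits per symbol on a 4x4 constellation. The fragment below is an illustrative Gray-mapped 16-QAM symbol mapper, independent of the experimental setup above.

    ```python
    import numpy as np

    # Gray-coded 4-level PAM per quadrature: 00 -> -3, 01 -> -1, 11 -> +1, 10 -> +3.
    GRAY_PAM4 = {(0, 0): -3, (0, 1): -1, (1, 1): 1, (1, 0): 3}

    def map_16qam(bits):
        """Map a bit sequence (length divisible by 4) onto 16-QAM symbols."""
        bits = np.asarray(bits).reshape(-1, 4)
        i = np.array([GRAY_PAM4[(b[0], b[1])] for b in bits], dtype=float)
        q = np.array([GRAY_PAM4[(b[2], b[3])] for b in bits], dtype=float)
        return (i + 1j * q) / np.sqrt(10.0)        # normalize to unit average power

    print(map_16qam([0, 1, 1, 0,  1, 1, 0, 0]))    # two complex 16-QAM symbols
    ```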

  2. Feynman diagrams sampling for quantum field theories on the QPACE 2 supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Rappl, Florian

    2016-08-01

    This work discusses the application of Feynman diagram sampling in quantum field theories. The method uses a computer simulation to sample the diagrammatic space obtained in a series expansion. For running large physical simulations, powerful computers are obligatory, effectively splitting the thesis into two parts. The first part deals with the method of Feynman diagram sampling. Here the theoretical background of the method itself is discussed. Additionally, important statistical concepts and the theory of the strong force, quantum chromodynamics, are introduced. This sets the context of the simulations. We create and evaluate a variety of models to estimate the applicability of diagrammatic methods. The method is then applied to sample the perturbative expansion of the vertex correction. In the end we obtain the value for the anomalous magnetic moment of the electron. The second part looks at the QPACE 2 supercomputer. This includes a short introduction to supercomputers in general, as well as a closer look at the architecture and the cooling system of QPACE 2. Guiding benchmarks of the InfiniBand network are presented. At the core of this part, a collection of best practices and useful programming concepts is outlined, which enables the development of efficient, yet easily portable, applications for the QPACE 2 system.

  3. Use of high performance networks and supercomputers for real-time flight simulation

    Science.gov (United States)

    Cleveland, Jeff I., II

    1993-01-01

    In order to meet the stringent time-critical requirements for real-time man-in-the-loop flight simulation, computer processing operations must be consistent in processing time and be completed in as short a time as possible. These operations include simulation mathematical model computation and data input/output to the simulators. In 1986, in response to increased demands for flight simulation performance, NASA's Langley Research Center (LaRC), working with the contractor, developed extensions to the Computer Automated Measurement and Control (CAMAC) technology which resulted in a factor of ten increase in the effective bandwidth and reduced latency of modules necessary for simulator communication. This technology extension is being used by more than 80 leading technological developers in the United States, Canada, and Europe. Included among the commercial applications are nuclear process control, power grid analysis, process monitoring, real-time simulation, and radar data acquisition. Personnel at LaRC are completing the development of the use of supercomputers for mathematical model computation to support real-time flight simulation. This includes the development of a real-time operating system and development of specialized software and hardware for the simulator network. This paper describes the data acquisition technology and the development of supercomputing for flight simulation.

  4. Requirements for user interaction support in future CACE environments

    DEFF Research Database (Denmark)

    Ravn, Ole; Szymkat, M.

    1994-01-01

    Based on a review of user interaction modes and the specific needs of the CACE domain, the paper describes requirements for user interaction in future CACE environments. Taking another look at the design process in CACE, key areas in need of more user interaction support are pointed out. Three...

  5. A fast random number generator for the Intel Paragon supercomputer

    Science.gov (United States)

    Gutbrod, F.

    1995-06-01

    A pseudo-random number generator is presented which makes optimal use of the architecture of the i860 microprocessor and which is expected to have a very long period. It is therefore a good candidate for use on the parallel supercomputer Paragon XP. In the assembler version, it needs 6.4 cycles for a REAL*4 random number. There is a FORTRAN routine which yields identical numbers up to rare and minor rounding discrepancies, and it needs 28 cycles. The FORTRAN performance on other microprocessors is somewhat better. Arguments for the quality of the generator and some numerical tests are given.
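
    The i860-specific recurrence is not reproduced in the abstract, so the sketch below is only a generic stand-in from the same family of tools: a simple 64-bit linear congruential generator (using Knuth's MMIX constants) that returns uniform deviates. It illustrates the kind of kernel being discussed, not the Paragon generator itself.

    ```python
    class LCG64:
        """Minimal 64-bit linear congruential generator (Knuth's MMIX constants).

        A generic illustration of a fast pseudo-random number generator; it is
        unrelated to the i860-specific generator described in the paper.
        """
        MULT = 6364136223846793005
        INC = 1442695040888963407
        MASK = (1 << 64) - 1

        def __init__(self, seed=1):
            self.state = seed & self.MASK

        def next_uniform(self):
            """Return a uniform deviate in [0, 1) from the top 32 state bits."""
            self.state = (self.state * self.MULT + self.INC) & self.MASK
            return (self.state >> 32) / 2.0**32

    rng = LCG64(seed=12345)
    print([round(rng.next_uniform(), 6) for _ in range(4)])
    ```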

  6. Development of a Cloud Resolving Model for Heterogeneous Supercomputers

    Science.gov (United States)

    Sreepathi, S.; Norman, M. R.; Pal, A.; Hannah, W.; Ponder, C.

    2017-12-01

    A cloud resolving climate model is needed to reduce major systematic errors in climate simulations due to structural uncertainty in numerical treatments of convection - such as convective storm systems. This research describes the porting effort to enable the SAM (System for Atmosphere Modeling) cloud resolving model to run on heterogeneous supercomputers using GPUs (Graphical Processing Units). We have isolated a standalone configuration of SAM that is targeted to be integrated into the DOE ACME (Accelerated Climate Modeling for Energy) Earth System model. We have identified key computational kernels from the model and offloaded them to a GPU using the OpenACC programming model. Furthermore, we are investigating various optimization strategies intended to enhance GPU utilization, including loop fusion/fission, coalesced data access and loop refactoring to a higher abstraction level. We will present early performance results, lessons learned as well as optimization strategies. The computational platform used in this study is the Summitdev system, an early testbed that is one generation removed from Summit, the next leadership class supercomputer at Oak Ridge National Laboratory. The system contains 54 nodes wherein each node has 2 IBM POWER8 CPUs and 4 NVIDIA Tesla P100 GPUs. This work is part of a larger project, the ACME-MMF component of the U.S. Department of Energy (DOE) Exascale Computing Project. The ACME-MMF approach addresses structural uncertainty in cloud processes by replacing traditional parameterizations with cloud resolving "superparameterization" within each grid cell of the global climate model. Super-parameterization dramatically increases arithmetic intensity, making the MMF approach an ideal strategy to achieve good performance on emerging exascale computing architectures. The goal of the project is to integrate superparameterization into ACME, and explore its full potential to scientifically and computationally advance climate simulation and prediction.

  7. Parallel simulation of tsunami inundation on a large-scale supercomputer

    Science.gov (United States)

    Oishi, Y.; Imamura, F.; Sugawara, D.

    2013-12-01

    An accurate prediction of tsunami inundation is important for disaster mitigation purposes. One approach is to approximate the tsunami wave source through an instant inversion analysis using real-time observation data (e.g., Tsushima et al., 2009) and then use the resulting wave source data in an instant tsunami inundation simulation. However, a bottleneck of this approach is the large computational cost of the non-linear inundation simulation and the computational power of recent massively parallel supercomputers is helpful to enable faster than real-time execution of a tsunami inundation simulation. Parallel computers have become approximately 1000 times faster in 10 years (www.top500.org), and so it is expected that very fast parallel computers will be more and more prevalent in the near future. Therefore, it is important to investigate how to efficiently conduct a tsunami simulation on parallel computers. In this study, we are targeting very fast tsunami inundation simulations on the K computer, currently the fastest Japanese supercomputer, which has a theoretical peak performance of 11.2 PFLOPS. One computing node of the K computer consists of 1 CPU with 8 cores that share memory, and the nodes are connected through a high-performance torus-mesh network. The K computer is designed for distributed-memory parallel computation, so we have developed a parallel tsunami model. Our model is based on TUNAMI-N2 model of Tohoku University, which is based on a leap-frog finite difference method. A grid nesting scheme is employed to apply high-resolution grids only at the coastal regions. To balance the computation load of each CPU in the parallelization, CPUs are first allocated to each nested layer in proportion to the number of grid points of the nested layer. Using CPUs allocated to each layer, 1-D domain decomposition is performed on each layer. In the parallel computation, three types of communication are necessary: (1) communication to adjacent neighbours for the
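
    The 1-D domain decomposition with neighbour communication described above follows a standard halo-exchange pattern. The fragment below is a generic mpi4py sketch of that pattern, not the TUNAMI-N2/K computer implementation: each rank owns a slab of the grid and swaps one ghost row with each neighbour per time step.

    ```python
    # Generic 1-D halo exchange with mpi4py; run with e.g. `mpirun -n 4 python halo.py`.
    # Schematic stand-in for the neighbour communication described above, not the
    # actual TUNAMI-N2 / K computer implementation.
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    nlocal = 100                                      # interior rows owned by this rank
    field = np.full((nlocal + 2, 64), float(rank))    # +2 ghost rows (top and bottom)

    up = rank - 1 if rank > 0 else MPI.PROC_NULL
    down = rank + 1 if rank < size - 1 else MPI.PROC_NULL

    for step in range(10):
        # Send the first interior row up; receive the lower neighbour's row into the bottom ghost.
        comm.Sendrecv(field[1], dest=up, recvbuf=field[nlocal + 1], source=down)
        # Send the last interior row down; receive the upper neighbour's row into the top ghost.
        comm.Sendrecv(field[nlocal], dest=down, recvbuf=field[0], source=up)
        # ... a leap-frog finite-difference update of the interior rows would go here ...

    if rank == 0:
        print("halo exchange completed on", size, "ranks")
    ```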

  8. Frequently updated noise threat maps created with use of supercomputing grid

    Directory of Open Access Journals (Sweden)

    Szczodrak Maciej

    2014-09-01

    Full Text Available An innovative supercomputing grid services devoted to noise threat evaluation were presented. The services described in this paper concern two issues, first is related to the noise mapping, while the second one focuses on assessment of the noise dose and its influence on the human hearing system. The discussed serviceswere developed within the PL-Grid Plus Infrastructure which accumulates Polish academic supercomputer centers. Selected experimental results achieved by the usage of the services proposed were presented. The assessment of the environmental noise threats includes creation of the noise maps using either ofline or online data, acquired through a grid of the monitoring stations. A concept of estimation of the source model parameters based on the measured sound level for the purpose of creating frequently updated noise maps was presented. Connecting the noise mapping grid service with a distributed sensor network enables to automatically update noise maps for a specified time period. Moreover, a unique attribute of the developed software is the estimation of the auditory effects evoked by the exposure to noise. The estimation method uses a modified psychoacoustic model of hearing and is based on the calculated noise level values and on the given exposure period. Potential use scenarios of the grid services for research or educational purpose were introduced. Presentation of the results of predicted hearing threshold shift caused by exposure to excessive noise can raise the public awareness of the noise threats.

  9. Perspectives on Emerging/Novel Computing Paradigms and Future Aerospace Workforce Environments

    Science.gov (United States)

    Noor, Ahmed K.

    2003-01-01

    The accelerating pace of the computing technology development shows no signs of abating. Computing power reaching 100 Tflop/s is likely to be reached by 2004 and Pflop/s (10(exp 15) Flop/s) by 2007. The fundamental physical limits of computation, including information storage limits, communication limits and computation rate limits will likely be reached by the middle of the present millennium. To overcome these limits, novel technologies and new computing paradigms will be developed. An attempt is made in this overview to put the diverse activities related to new computing-paradigms in perspective and to set the stage for the succeeding presentations. The presentation is divided into five parts. In the first part, a brief historical account is given of development of computer and networking technologies. The second part provides brief overviews of the three emerging computing paradigms grid, ubiquitous and autonomic computing. The third part lists future computing alternatives and the characteristics of future computing environment. The fourth part describes future aerospace workforce research, learning and design environments. The fifth part lists the objectives of the workshop and some of the sources of information on future computing paradigms.

  10. Performance characteristics of hybrid MPI/OpenMP implementations of NAS parallel benchmarks SP and BT on large-scale multicore supercomputers

    KAUST Repository

    Wu, Xingfu; Taylor, Valerie

    2011-01-01

    The NAS Parallel Benchmarks (NPB) are well-known applications with the fixed algorithms for evaluating parallel systems and tools. Multicore supercomputers provide a natural programming paradigm for hybrid programs, whereby OpenMP can be used with the data sharing with the multicores that comprise a node and MPI can be used with the communication between nodes. In this paper, we use SP and BT benchmarks of MPI NPB 3.3 as a basis for a comparative approach to implement hybrid MPI/OpenMP versions of SP and BT. In particular, we can compare the performance of the hybrid SP and BT with the MPI counterparts on large-scale multicore supercomputers. Our performance results indicate that the hybrid SP outperforms the MPI SP by up to 20.76%, and the hybrid BT outperforms the MPI BT by up to 8.58% on up to 10,000 cores on BlueGene/P at Argonne National Laboratory and Jaguar (Cray XT4/5) at Oak Ridge National Laboratory. We also use performance tools and MPI trace libraries available on these supercomputers to further investigate the performance characteristics of the hybrid SP and BT.
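
    The hybrid approach splits work at two levels: MPI ranks across nodes and OpenMP threads across the cores within a node. The small sketch below (not the NPB source) only computes which slice of a 1-D iteration space a given (rank, thread) pair would own under such a two-level block decomposition.

    ```python
    def block_range(n, parts, idx):
        """Return the [start, stop) slice of n items owned by block idx out of parts."""
        base, extra = divmod(n, parts)
        start = idx * base + min(idx, extra)
        stop = start + base + (1 if idx < extra else 0)
        return start, stop

    def hybrid_slice(n_iterations, n_ranks, n_threads, rank, thread):
        """Two-level decomposition: first across MPI ranks, then across OpenMP threads."""
        r0, r1 = block_range(n_iterations, n_ranks, rank)    # this rank's block
        t0, t1 = block_range(r1 - r0, n_threads, thread)     # this thread's sub-block
        return r0 + t0, r0 + t1

    # Example: 1000 iterations over 4 ranks (nodes) x 8 threads (cores per node).
    print(hybrid_slice(1000, 4, 8, rank=1, thread=3))        # -> (345, 376)
    ```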

  11. Performance characteristics of hybrid MPI/OpenMP implementations of NAS parallel benchmarks SP and BT on large-scale multicore supercomputers

    KAUST Repository

    Wu, Xingfu

    2011-03-29

    The NAS Parallel Benchmarks (NPB) are well-known applications with the fixed algorithms for evaluating parallel systems and tools. Multicore supercomputers provide a natural programming paradigm for hybrid programs, whereby OpenMP can be used with the data sharing with the multicores that comprise a node and MPI can be used with the communication between nodes. In this paper, we use SP and BT benchmarks of MPI NPB 3.3 as a basis for a comparative approach to implement hybrid MPI/OpenMP versions of SP and BT. In particular, we can compare the performance of the hybrid SP and BT with the MPI counterparts on large-scale multicore supercomputers. Our performance results indicate that the hybrid SP outperforms the MPI SP by up to 20.76%, and the hybrid BT outperforms the MPI BT by up to 8.58% on up to 10,000 cores on BlueGene/P at Argonne National Laboratory and Jaguar (Cray XT4/5) at Oak Ridge National Laboratory. We also use performance tools and MPI trace libraries available on these supercomputers to further investigate the performance characteristics of the hybrid SP and BT.

  12. Environment issues and the future of the transport industry

    Energy Technology Data Exchange (ETDEWEB)

    Shiller, J W [Ford Motor Company, Dearborn, MI (USA)]

    1992-01-01

    The motor vehicle industry must make the necessary investment in products and technology to meet the competitive and environmental challenges of the future. Discussion is presented of: the history of motor vehicles, the relationship of motor vehicles to the environment, the state of climate change knowledge, future economic development and the transport sector, the changing structure of the motor vehicle fleet, traffic congestion, alternative fuels, investments in transport, the European Energy Charter, The US Energy Strategy, the North American free trade agreement, and the economics of the automobile industry in Japan/South East Asia and the developing countries. 61 refs., 29 figs., 28 tabs.

  13. Simulation of x-rays in refractive structure by the Monte Carlo method using the supercomputer SKIF

    International Nuclear Information System (INIS)

    Yaskevich, Yu.R.; Kravchenko, O.I.; Soroka, I.I.; Chembrovskij, A.G.; Kolesnik, A.S.; Serikova, N.V.; Petrov, P.V.; Kol'chevskij, N.N.

    2013-01-01

    The software 'Xray-SKIF' for the simulation of X-rays in refractive structures by the Monte Carlo method using the supercomputer SKIF BSU is developed. The program generates a large number of rays propagated from a source to the refractive structure. Each ray trajectory is calculated under the assumption of geometrical optics, and the absorption is calculated for each ray inside the refractive structure. Dynamic arrays are used to store the calculated ray parameters, which allows the X-ray field distributions to be restored very quickly for different detector positions. It was found that increasing the number of processors leads to a proportional decrease in calculation time: the simulation of 10^8 X-rays on the supercomputer takes 3 hours on 1 processor and 6 minutes on 30 processors. 10^9 X-rays were calculated with 'Xray-SKIF', which allows the X-ray field behind the refractive structure to be reconstructed with a spatial resolution of 1 micron. (authors)
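
    The per-ray absorption bookkeeping mentioned above ultimately rests on Beer-Lambert attenuation along each ray path. The following toy Monte Carlo fragment, which is not derived from 'Xray-SKIF', estimates transmission through a uniform slab with an arbitrary attenuation coefficient and compares it with the analytic result.

    ```python
    import numpy as np

    def transmit_through_slab(n_rays, thickness_cm, mu_per_cm, seed=0):
        """Toy Monte Carlo: fraction of X-rays transmitted through a uniform slab.

        Each ray's interaction depth is sampled from the exponential Beer-Lambert
        law; rays whose sampled depth exceeds the slab thickness are transmitted.
        """
        rng = np.random.default_rng(seed)
        depths = rng.exponential(scale=1.0 / mu_per_cm, size=n_rays)
        return np.count_nonzero(depths > thickness_cm) / n_rays

    # 10**6 rays, 0.5 cm slab, attenuation coefficient 2.0 cm^-1.
    estimate = transmit_through_slab(10**6, 0.5, 2.0)
    print(estimate, "vs analytic", np.exp(-2.0 * 0.5))
    ```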

  14. ASCI's Vision for supercomputing future

    International Nuclear Information System (INIS)

    Nowak, N.D.

    2003-01-01

    The full text of publication follows. Advanced Simulation and Computing (ASC, formerly Accelerated Strategic Computing Initiative [ASCI]) was established in 1995 to help Defense Programs shift from test-based confidence to simulation-based confidence. Specifically, ASC is a focused and balanced program that is accelerating the development of simulation capabilities needed to analyze and predict the performance, safety, and reliability of nuclear weapons and certify their functionality - far exceeding what might have been achieved in the absence of a focused initiative. To realize its vision, ASC is creating simulation and prototyping capabilities, based on advanced weapon codes and high-performance computing.

  15. The pan-European environment: glimpses into an uncertain future

    International Nuclear Information System (INIS)

    2007-01-01

    The rapidly changing nature of and increasing inter-linkages between many socio-economic phenomena - population growth and migration, globalisation and trade, personal consumption patterns and use of natural resources - are reflected in many of today's environment policy priorities: minimising and adapting to climate change; loss of biodiversity and ecosystem services; the degradation of such natural resources as land, freshwater and oceans; and the impacts of a wide range of pollutants on our environment and our health. The challenges that environmental policy makers are facing in this century are already very different from those of the last. Given the rapid change in socio-economic trends, both designing and implementing actions are becoming much more complex, and the way in which such policies deliver effective outcomes seems to be becoming increasingly uncertain. Alongside this, the time-lags between policy demands and institutional responses are often lengthening, with the institutional structures charged with designing and implementing agreed actions needing to change in order to keep up with this process. This report aims to contribute to the discussion about plausible future developments relevant to the wider European region and to stimulate medium to long-term thinking in policy-making circles. It does so by sketching some of the key environmental concerns for the pan-European region based on the EEA's Europe's environment - The fourth assessment, and by highlighting some of the many uncertainties the future holds. (au)

  16. Plastics, the environment and human health: current consensus and future trends

    OpenAIRE

    Thompson, Richard C.; Moore, Charles J.; vom Saal, Frederick S.; Swan, Shanna H.

    2009-01-01

    Plastics have transformed everyday life; usage is increasing and annual production is likely to exceed 300 million tonnes by 2010. In this concluding paper to the Theme Issue on Plastics, the Environment and Human Health, we synthesize current understanding of the benefits and concerns surrounding the use of plastics and look to future priorities, challenges and opportunities. It is evident that plastics bring many societal benefits and offer future technological and medical advances. However...

  17. The Potential of Simulated Environments in Teacher Education: Current and Future Possibilities

    Science.gov (United States)

    Dieker, Lisa A.; Rodriguez, Jacqueline A.; Lignugaris/Kraft, Benjamin; Hynes, Michael C.; Hughes, Charles E.

    2014-01-01

    The future of virtual environments is evident in many fields but is just emerging in the field of teacher education. In this article, the authors provide a summary of the evolution of simulation in the field of teacher education and three factors that need to be considered as these environments further develop. The authors provide a specific…

  18. Compiler and Runtime Support for Programming in Adaptive Parallel Environments

    Science.gov (United States)

    1998-10-15

    ...no other job is waiting for resources, and use a smaller number of processors when other jobs need resources. Setia et al. [15, 20] have shown that such... [15] Vijay K. Naik, Sanjeev Setia, and Mark Squillante. Performance analysis of job scheduling policies in parallel supercomputing environments. In... on networks of heterogeneous workstations. Technical Report CSE-94-012, Oregon Graduate Institute of Science and Technology, 1994. [20] Sanjeev Setia

  19. The future regulatory environment - a South African perspective

    International Nuclear Information System (INIS)

    Van der Woude, S.; Leaver, J.; Metcalf, P.E.

    2000-01-01

    The South African nuclear regulatory authority, the National Nuclear Regulator, regulates nuclear fuel cycle facilities as well as a large variety of mining and minerals processing activities. The future political, social, economical and technological environment, within which these facilities operate, will present numerous challenges to those who will be regulating them. In our presentation the challenges to be fulfilled in discharging the regulatory function are discussed, particularly in the context of a country with a small nuclear programme and a substantial developing component. Amongst the challenges discussed are: As part of the growing internationalization, the need to harmonize standards applied in different countries and the need to balance standards and practice applied in developed countries with resources available in developing countries; The need to consider the impact on the environment and not only on human beings; The impact of rapid advances in information technology on regulation; The maintenance and development of the appropriate expertise in the face of uncertainties regarding the future of the nuclear industry; Public involvement; The demands by society for greater standards of safety but at the same time for more effective and cost-effective regulation; The need for regulators to match customer demands on operators in terms of quality, speed, flexibility and costs; The privatization of nuclear fuel cycle facilities; The increased trend for larger facilities to outsource work to smaller companies; and, The need to balance good practice considerations with quantitatively determined risks in regulatory decision-making. (author)

  20. The future regulatory environment - a South African perspective

    Energy Technology Data Exchange (ETDEWEB)

    Van der Woude, S.; Leaver, J.; Metcalf, P.E. [National Nuclear Regulator, Centurion (South Africa)]

    2000-07-01

    The South African nuclear regulatory authority, the National Nuclear Regulator, regulates nuclear fuel cycle facilities as well as a large variety of mining and minerals processing activities. The future political, social, economical and technological environment, within which these facilities operate, will present numerous challenges to those who will be regulating them. In our presentation the challenges to be fulfilled in discharging the regulatory function are discussed, particularly in the context of a country with a small nuclear programme and a substantial developing component. Amongst the challenges discussed are: As part of the growing internationalization, the need to harmonize standards applied in different countries and the need to balance standards and practice applied in developed countries with resources available in developing countries; The need to consider the impact on the environment and not only on human beings; The impact of rapid advances in information technology on regulation; The maintenance and development of the appropriate expertise in the face of uncertainties regarding the future of the nuclear industry; Public involvement; The demands by society for greater standards of safety but at the same time for more effective and cost-effective regulation; The need for regulators to match customer demands on operators in terms of quality, speed, flexibility and costs; The privatization of nuclear fuel cycle facilities; The increased trend for larger facilities to outsource work to smaller companies; and, The need to balance good practice considerations with quantitatively determined risks in regulatory decision-making. (author)

  1. Communication Characterization and Optimization of Applications Using Topology-Aware Task Mapping on Large Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Sreepathi, Sarat [ORNL]; D'Azevedo, Eduardo [ORNL]; Philip, Bobby [ORNL]; Worley, Patrick H [ORNL]

    2016-01-01

    On large supercomputers, the job scheduling systems may assign a non-contiguous node allocation for user applications depending on available resources. With parallel applications using MPI (Message Passing Interface), the default process ordering does not take into account the actual physical node layout available to the application. This contributes to non-locality in terms of physical network topology and impacts communication performance of the application. In order to mitigate such performance penalties, this work describes techniques to identify suitable task mapping that takes the layout of the allocated nodes as well as the application's communication behavior into account. During the first phase of this research, we instrumented and collected performance data to characterize communication behavior of critical US DOE (United States - Department of Energy) applications using an augmented version of the mpiP tool. Subsequently, we developed several reordering methods (spectral bisection, neighbor join tree etc.) to combine node layout and application communication data for optimized task placement. We developed a tool called mpiAproxy to facilitate detailed evaluation of the various reordering algorithms without requiring full application executions. This work presents a comprehensive performance evaluation (14,000 experiments) of the various task mapping techniques in lowering communication costs on Titan, the leadership class supercomputer at Oak Ridge National Laboratory.
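
    As a simplified illustration of the mapping step (not the mpiAproxy tool or the specific reordering algorithms evaluated in the paper), the sketch below greedily places the most heavily communicating MPI ranks onto mutually close nodes, given a rank-to-rank communication volume matrix and a node-to-node hop-distance matrix.

    ```python
    import numpy as np

    def greedy_task_mapping(comm_volume, node_distance):
        """Greedy topology-aware mapping of MPI ranks onto allocated nodes.

        comm_volume[i, j]  : traffic between ranks i and j (symmetric).
        node_distance[a, b]: hop distance between allocated nodes a and b.
        Returns mapping[rank] = node index.
        """
        n = comm_volume.shape[0]
        mapping = -np.ones(n, dtype=int)
        free_nodes = list(range(n))
        # Place ranks in order of total communication volume, heaviest first.
        for rank in np.argsort(-comm_volume.sum(axis=1)):
            placed = np.where(mapping >= 0)[0]
            best_node, best_cost = None, None
            for node in free_nodes:
                # Cost of putting `rank` here: traffic-weighted distance to
                # the partners that have already been placed.
                cost = sum(comm_volume[rank, p] * node_distance[node, mapping[p]]
                           for p in placed)
                if best_cost is None or cost < best_cost:
                    best_node, best_cost = node, cost
            mapping[rank] = best_node
            free_nodes.remove(best_node)
        return mapping

    # Tiny example: 4 ranks mapped onto a 4-node line topology.
    volume = np.array([[0, 9, 1, 0], [9, 0, 1, 0], [1, 1, 0, 5], [0, 0, 5, 0]], float)
    distance = np.abs(np.subtract.outer(np.arange(4), np.arange(4))).astype(float)
    print(greedy_task_mapping(volume, distance))
    ```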

  2. Plasma turbulence calculations on supercomputers

    International Nuclear Information System (INIS)

    Carreras, B.A.; Charlton, L.A.; Dominguez, N.; Drake, J.B.; Garcia, L.; Leboeuf, J.N.; Lee, D.K.; Lynch, V.E.; Sidikman, K.

    1991-01-01

    Although the single-particle picture of magnetic confinement is helpful in understanding some basic physics of plasma confinement, it does not give a full description. Collective effects dominate plasma behavior. Any analysis of plasma confinement requires a self-consistent treatment of the particles and fields. The general picture is further complicated because the plasma, in general, is turbulent. The study of fluid turbulence is a rather complex field by itself. In addition to the difficulties of classical fluid turbulence, plasma turbulence studies face the problems caused by the induced magnetic turbulence, which couples back to the fluid. Since the fluid is not a perfect conductor, this turbulence can lead to changes in the topology of the magnetic field structure, causing the magnetic field lines to wander radially. Because the plasma fluid flows along field lines, they carry the particles with them, and this enhances the losses caused by collisions. The changes in topology are critical for the plasma confinement. The study of plasma turbulence and the concomitant transport is a challenging problem. Because of the importance of solving the plasma turbulence problem for controlled thermonuclear research, the high complexity of the problem, and the necessity of attacking the problem with supercomputers, the study of plasma turbulence in magnetic confinement devices is a Grand Challenge problem.

  3. Reactive flow simulations in complex geometries with high-performance supercomputing

    International Nuclear Information System (INIS)

    Rehm, W.; Gerndt, M.; Jahn, W.; Vogelsang, R.; Binninger, B.; Herrmann, M.; Olivier, H.; Weber, M.

    2000-01-01

    In this paper, we report on a modern field code cluster consisting of state-of-the-art reactive Navier-Stokes and reactive Euler solvers that has been developed on vector and parallel supercomputers at the research center Juelich. This field code cluster is used for hydrogen safety analyses of technical systems, for example, in the field of nuclear reactor safety and conventional hydrogen demonstration plants with fuel cells. Emphasis is put on the assessment of combustion loads, which could result from slow, fast or rapid flames, including transition from deflagration to detonation. As a sample of proof tests, the specialized tools have been applied to specific tasks; the comparison of experimental and numerical results shows reasonable agreement. (author)

  4. Storage-Intensive Supercomputing Benchmark Study

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, J; Dossa, D; Gokhale, M; Hysom, D; May, J; Pearce, R; Yoo, A

    2007-10-30

    Critical data science applications requiring frequent access to storage perform poorly on today's computing architectures. This project addresses efficient computation of data-intensive problems in national security and basic science by exploring, advancing, and applying a new form of computing called storage-intensive supercomputing (SISC). Our goal is to enable applications that simply cannot run on current systems, and, for a broad range of data-intensive problems, to deliver an order of magnitude improvement in price/performance over today's data-intensive architectures. This technical report documents much of the work done under LDRD 07-ERD-063 Storage Intensive Supercomputing during the period 05/07-09/07. The following chapters describe: (1) a new file I/O monitoring tool iotrace developed to capture the dynamic I/O profiles of Linux processes; (2) an out-of-core graph benchmark for level-set expansion of scale-free graphs; (3) an entity extraction benchmark consisting of a pipeline of eight components; and (4) an image resampling benchmark drawn from the SWarp program in the LSST data processing pipeline. The performance of the graph and entity extraction benchmarks was measured in three different scenarios: data sets residing on the NFS file server and accessed over the network; data sets stored on local disk; and data sets stored on the Fusion I/O parallel NAND Flash array. The image resampling benchmark compared performance of software-only to GPU-accelerated. In addition to the work reported here, an additional text processing application was developed that used an FPGA to accelerate n-gram profiling for language classification. The n-gram application will be presented at SC07 at the High Performance Reconfigurable Computing Technologies and Applications Workshop. The graph and entity extraction benchmarks were run on a Supermicro server housing the NAND Flash 40GB parallel disk array, the Fusion-io. The Fusion system specs are as follows

  5. Parallel Multivariate Spatio-Temporal Clustering of Large Ecological Datasets on Hybrid Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Sreepathi, Sarat [ORNL]; Kumar, Jitendra [ORNL]; Mills, Richard T. [Argonne National Laboratory]; Hoffman, Forrest M. [ORNL]; Sripathi, Vamsi [Intel Corporation]; Hargrove, William Walter [United States Department of Agriculture (USDA), United States Forest Service (USFS)]

    2017-09-01

    A proliferation of data from vast networks of remote sensing platforms (satellites, unmanned aircraft systems (UAS), airborne etc.), observational facilities (meteorological, eddy covariance etc.), state-of-the-art sensors, and simulation models offers unprecedented opportunities for scientific discovery. Unsupervised classification is a widely applied data mining approach to derive insights from such data. However, classification of very large data sets is a complex computational problem that requires efficient numerical algorithms and implementations on high performance computing (HPC) platforms. Additionally, increasing power, space, cooling and efficiency requirements have led to the deployment of hybrid supercomputing platforms with complex architectures and memory hierarchies like the Titan system at Oak Ridge National Laboratory. The advent of such accelerated computing architectures offers new challenges and opportunities for big data analytics in general and specifically, large scale cluster analysis in our case. Although there is an existing body of work on parallel cluster analysis, those approaches do not fully meet the needs imposed by the nature and size of our large data sets. Moreover, they had scaling limitations and were mostly limited to traditional distributed memory computing platforms. We present a parallel Multivariate Spatio-Temporal Clustering (MSTC) technique based on k-means cluster analysis that can target hybrid supercomputers like Titan. We developed a hybrid MPI, CUDA and OpenACC implementation that can utilize both CPU and GPU resources on computational nodes. We describe performance results on Titan that demonstrate the scalability and efficacy of our approach in processing large ecological data sets.
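
    At its core, the MSTC approach builds on k-means cluster analysis. The short fragment below is a plain, single-node NumPy version of Lloyd's k-means iteration, shown only to make the underlying kernel concrete; the hybrid MPI/CUDA/OpenACC parallelization described in the paper is not reproduced here, and the synthetic data are arbitrary.

    ```python
    import numpy as np

    def kmeans(points, k, n_iter=50, seed=0):
        """Plain Lloyd's algorithm: the serial kernel behind parallel MSTC-style clustering."""
        rng = np.random.default_rng(seed)
        centers = points[rng.choice(len(points), size=k, replace=False)].copy()
        for _ in range(n_iter):
            # Assign every observation to its nearest cluster centre.
            dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
            labels = dists.argmin(axis=1)
            # Recompute centres as the mean of their members (empty clusters stay put).
            for c in range(k):
                members = points[labels == c]
                if len(members):
                    centers[c] = members.mean(axis=0)
        return labels, centers

    # 1000 synthetic observations with 5 variables, grouped into 4 clusters.
    data = np.random.default_rng(1).normal(size=(1000, 5))
    labels, centers = kmeans(data, k=4)
    print(np.bincount(labels))
    ```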

  6. High temporal resolution mapping of seismic noise sources using heterogeneous supercomputers

    Science.gov (United States)

    Gokhberg, Alexey; Ermert, Laura; Paitz, Patrick; Fichtner, Andreas

    2017-04-01

    Time- and space-dependent distribution of seismic noise sources is becoming a key ingredient of modern real-time monitoring of various geo-systems. Significant interest in seismic noise source maps with high temporal resolution (days) is expected to come from a number of domains, including natural resources exploration, analysis of active earthquake fault zones and volcanoes, as well as geothermal and hydrocarbon reservoir monitoring. Currently, knowledge of noise sources is insufficient for high-resolution subsurface monitoring applications. Near-real-time seismic data, as well as advanced imaging methods to constrain seismic noise sources, have recently become available. These methods are based on the massive cross-correlation of seismic noise records from all available seismic stations in the region of interest and are therefore very computationally intensive. Heterogeneous massively parallel supercomputing systems introduced in recent years combine conventional multi-core CPUs with GPU accelerators and provide an opportunity for a manifold increase in computing performance. Therefore, these systems represent an efficient platform for implementation of a noise source mapping solution. We present the first results of an ongoing research project conducted in collaboration with the Swiss National Supercomputing Centre (CSCS). The project aims at building a service that provides seismic noise source maps for Central Europe with high temporal resolution (days to few weeks depending on frequency and data availability). The service is hosted on the CSCS computing infrastructure; all computationally intensive processing is performed on the massively parallel heterogeneous supercomputer "Piz Daint". The solution architecture is based on the Application-as-a-Service concept in order to provide interested external researchers with regular access to the noise source maps. The solution architecture includes the following sub-systems: (1) data acquisition responsible for
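
    The core numerical operation mentioned above, massive cross-correlation of noise records between station pairs, can be sketched in a few lines of NumPy. The fragment below correlates two synthetic traces in the frequency domain and recovers their relative delay; it is unrelated to the actual service running on Piz Daint.

    ```python
    import numpy as np

    def noise_cross_correlation(trace_a, trace_b):
        """Full cross-correlation of two equal-length noise records via FFT.

        Returns lags (in samples) and the correlation, the basic quantity from
        which inter-station responses and noise source maps are estimated.
        """
        n = len(trace_a)
        nfft = 2 * n                              # zero-pad against circular wrap-around
        spec = np.fft.rfft(trace_a, nfft) * np.conj(np.fft.rfft(trace_b, nfft))
        cc = np.fft.irfft(spec, nfft)
        cc = np.concatenate((cc[-(n - 1):], cc[:n]))   # reorder to lags -(n-1) .. (n-1)
        lags = np.arange(-(n - 1), n)
        return lags, cc

    # Two synthetic "station" records: the second is the first delayed by 25 samples.
    rng = np.random.default_rng(0)
    noise = rng.normal(size=2048)
    a, b = noise, np.roll(noise, 25)
    lags, cc = noise_cross_correlation(b, a)
    print("estimated delay of b relative to a:", lags[np.argmax(cc)], "samples")
    ```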

  7. DCE. Future IHEP's computing environment

    International Nuclear Information System (INIS)

    Zheng Guorui; Liu Xiaoling

    1995-01-01

    IHEP's computing environment consists of several different computing environments established on IHEP computer networks, among which the BES environment supporting HEP computing is the main part. In connection with the improvement and extension of the BES environment, the authors outline the development of these computing environments from the viewpoint of establishing a high energy physics (HEP) computing environment. The direction of development of the IHEP computing environment toward distributed computing, based on current trends in distributed computing, is presented

  8. Retail food environments research: Promising future with more work to be done.

    Science.gov (United States)

    Fuller, Daniel; Engler-Stringer, Rachel; Muhajarine, Nazeem

    2016-06-09

    As members of the scientific committee for the Food Environments in Canada conference, we reflect on the current state of food environments research in Canada. We are very encouraged that the field is growing and there have been many collaborative efforts to link researchers in Canada, including the 2015 Food Environments in Canada Symposium and Workshop. We believe there are 5 key challenges the field will need to collectively address: theory and causality; replication and extension; consideration of rural, northern and vulnerable populations; policy analysis; and intervention research. In addressing the challenges, we look forward to working together to conduct more sophisticated, complex and community-driven food environments research in the future.

  9. Combining density functional theory calculations, supercomputing, and data-driven methods to design new materials (Conference Presentation)

    Science.gov (United States)

    Jain, Anubhav

    2017-04-01

    Density functional theory (DFT) simulations solve for the electronic structure of materials starting from the Schrödinger equation. Many case studies have now demonstrated that researchers can often use DFT to design new compounds in the computer (e.g., for batteries, catalysts, and hydrogen storage) before synthesis and characterization in the lab. In this talk, I will focus on how DFT calculations can be executed on large supercomputing resources in order to generate very large data sets on new materials for functional applications. First, I will briefly describe the Materials Project, an effort at LBNL that has virtually characterized over 60,000 materials using DFT and has shared the results with over 17,000 registered users. Next, I will talk about how such data can help discover new materials, describing how preliminary computational screening led to the identification and confirmation of a new family of bulk AMX2 thermoelectric compounds with measured zT reaching 0.8. I will outline future plans for how such data-driven methods can be used to better understand the factors that control thermoelectric behavior, e.g., for the rational design of electronic band structures, in ways that are different from conventional approaches.

  10. Use of QUADRICS supercomputer as embedded simulator in emergency management systems

    International Nuclear Information System (INIS)

    Bove, R.; Di Costanzo, G.; Ziparo, A.

    1996-07-01

    The experience related to the implementation of MRBT, an atmospheric dispersion model for short-duration releases, is reported. This model was implemented on a QUADRICS-Q1 supercomputer. A description of the MRBT model is given first. It is an analytical model for studying the spreading of light gases released into the atmosphere by accidental releases. The solution of the diffusion equation is Gaussian-like and yields the concentration of the released pollutant as a function of space and time. The QUADRICS architecture is then introduced and the implementation of the model is described. Finally, the integration of the QUADRICS-based model as an embedded simulator in an emergency management system is considered
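
    The Gaussian-like analytical solution mentioned in the abstract corresponds, in its simplest steady-state form, to the textbook Gaussian plume formula. The sketch below evaluates that formula with illustrative power-law dispersion coefficients; the actual MRBT parameterization is not given here, so all numerical constants are assumptions.

```python
# Steady-state Gaussian plume concentration with ground reflection.
# Illustrative sketch: sigma_y(x) and sigma_z(x) use simple power-law fits
# for a neutral atmosphere, not the MRBT parameterization.
import numpy as np

def gaussian_plume(q, u, h, x, y, z):
    """Concentration (kg/m^3) at (x, y, z) for emission rate q (kg/s),
    wind speed u (m/s) along x, and effective release height h (m)."""
    sigma_y = 0.08 * x / np.sqrt(1.0 + 0.0001 * x)   # assumed dispersion coefficients
    sigma_z = 0.06 * x / np.sqrt(1.0 + 0.0015 * x)
    lateral = np.exp(-y**2 / (2.0 * sigma_y**2))
    vertical = (np.exp(-(z - h)**2 / (2.0 * sigma_z**2))
                + np.exp(-(z + h)**2 / (2.0 * sigma_z**2)))   # reflection at the ground
    return q / (2.0 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

# Ground-level concentration 500 m downwind on the plume centreline.
print(gaussian_plume(q=1.0, u=3.0, h=50.0, x=500.0, y=0.0, z=0.0))
```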

  11. MILC Code Performance on High End CPU and GPU Supercomputer Clusters

    Science.gov (United States)

    DeTar, Carleton; Gottlieb, Steven; Li, Ruizi; Toussaint, Doug

    2018-03-01

    With recent developments in parallel supercomputing architecture, many core, multi-core, and GPU processors are now commonplace, resulting in more levels of parallelism, memory hierarchy, and programming complexity. It has been necessary to adapt the MILC code to these new processors starting with NVIDIA GPUs, and more recently, the Intel Xeon Phi processors. We report on our efforts to port and optimize our code for the Intel Knights Landing architecture. We consider performance of the MILC code with MPI and OpenMP, and optimizations with QOPQDP and QPhiX. For the latter approach, we concentrate on the staggered conjugate gradient and gauge force. We also consider performance on recent NVIDIA GPUs using the QUDA library.
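
    The staggered conjugate gradient that these optimizations target is, at its core, the standard CG iteration for a symmetric positive-definite operator. The sketch below shows that plain iteration as a point of reference only; the actual staggered Dirac operator, even/odd preconditioning and the QPhiX/QUDA back ends are far more involved and are not reproduced here.

```python
# Plain conjugate gradient for A x = b with A symmetric positive definite --
# the numerical kernel underlying the staggered CG solver discussed above.
import numpy as np

def conjugate_gradient(apply_a, b, tol=1e-10, max_iter=1000):
    x = np.zeros_like(b)
    r = b - apply_a(x)
    p = r.copy()
    rs_old = r @ r
    for _ in range(max_iter):
        ap = apply_a(p)
        alpha = rs_old / (p @ ap)
        x += alpha * p
        r -= alpha * ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

# Small example with a random symmetric positive-definite matrix.
rng = np.random.default_rng(1)
m = rng.normal(size=(100, 100))
a = m @ m.T + 100 * np.eye(100)
b = rng.normal(size=100)
x = conjugate_gradient(lambda v: a @ v, b)
print("residual norm:", np.linalg.norm(a @ x - b))
```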

  12. MILC Code Performance on High End CPU and GPU Supercomputer Clusters

    Directory of Open Access Journals (Sweden)

    DeTar Carleton

    2018-01-01

    Full Text Available With recent developments in parallel supercomputing architecture, many core, multi-core, and GPU processors are now commonplace, resulting in more levels of parallelism, memory hierarchy, and programming complexity. It has been necessary to adapt the MILC code to these new processors starting with NVIDIA GPUs, and more recently, the Intel Xeon Phi processors. We report on our efforts to port and optimize our code for the Intel Knights Landing architecture. We consider performance of the MILC code with MPI and OpenMP, and optimizations with QOPQDP and QPhiX. For the latter approach, we concentrate on the staggered conjugate gradient and gauge force. We also consider performance on recent NVIDIA GPUs using the QUDA library.

  13. SUPERCOMPUTER SIMULATION OF CRITICAL PHENOMENA IN COMPLEX SOCIAL SYSTEMS

    Directory of Open Access Journals (Sweden)

    Petrus M.A. Sloot

    2014-09-01

    Full Text Available The paper describes the problem of computer simulation of critical phenomena in complex social systems on petascale computing systems within the framework of the complex networks approach. A three-layer system of nested models of complex networks is proposed, including an aggregated analytical model to identify critical phenomena, a detailed model of individualized network dynamics and a model to adjust the topological structure of a complex network. A scalable parallel algorithm covering all layers of complex network simulation is proposed. Performance of the algorithm is studied on different supercomputing systems. The issues of software and information infrastructure for complex network simulation are discussed, including the organization of distributed calculations, crawling the data in social networks and results visualization. Applications of the developed methods and technologies are considered, including simulation of criminal network disruption, fast rumor spreading in social networks, evolution of financial networks and epidemic spreading.

  14. Lectures in Supercomputational Neurosciences Dynamics in Complex Brain Networks

    CERN Document Server

    Graben, Peter beim; Thiel, Marco; Kurths, Jürgen

    2008-01-01

    Computational Neuroscience is a burgeoning field of research where only the combined effort of neuroscientists, biologists, psychologists, physicists, mathematicians, computer scientists, engineers and other specialists, e.g. from linguistics and medicine, seems to be able to expand the limits of our knowledge. The present volume is an introduction, largely from the physicists' perspective, to the subject matter with in-depth contributions by system neuroscientists. A conceptual model for complex networks of neurons is introduced that incorporates many important features of the real brain, such as various types of neurons, various brain areas, inhibitory and excitatory coupling and the plasticity of the network. The computational implementation on supercomputers, which is introduced and discussed in detail in this book, will enable the readers to modify and adapt the algorithm for their own research. Worked-out examples of applications are presented for networks of Morris-Lecar neurons to model the cortical co...
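
    The Morris-Lecar model mentioned above is a two-variable conductance-based neuron model and is easy to integrate directly. The sketch below uses forward Euler with a commonly quoted illustrative parameter set, which is an assumption on my part and not the parameterization used in the book's network simulations.

```python
# Single Morris-Lecar neuron integrated with forward Euler.  Parameter values
# are a commonly used illustrative set, not those of the book's network model.
import numpy as np

C, g_l, g_ca, g_k = 20.0, 2.0, 4.4, 8.0          # capacitance, conductances
v_l, v_ca, v_k = -60.0, 120.0, -84.0             # reversal potentials (mV)
v1, v2, v3, v4, phi = -1.2, 18.0, 2.0, 30.0, 0.04
i_ext = 90.0                                     # external current (uA/cm^2)

def derivatives(v, w):
    m_inf = 0.5 * (1.0 + np.tanh((v - v1) / v2))
    w_inf = 0.5 * (1.0 + np.tanh((v - v3) / v4))
    tau_w = 1.0 / np.cosh((v - v3) / (2.0 * v4))
    dv = (i_ext - g_l * (v - v_l) - g_ca * m_inf * (v - v_ca)
          - g_k * w * (v - v_k)) / C
    dw = phi * (w_inf - w) / tau_w
    return dv, dw

dt, steps = 0.05, 20000                          # 1000 ms of simulated time
v, w = -60.0, 0.0
trace = np.empty(steps)
for t in range(steps):
    dv, dw = derivatives(v, w)
    v += dt * dv
    w += dt * dw
    trace[t] = v
print("membrane potential range (mV):", trace.min(), trace.max())
```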

  15. Computational Environments and Analysis methods available on the NCI High Performance Computing (HPC) and High Performance Data (HPD) Platform

    Science.gov (United States)

    Evans, B. J. K.; Foster, C.; Minchin, S. A.; Pugh, T.; Lewis, A.; Wyborn, L. A.; Evans, B. J.; Uhlherr, A.

    2014-12-01

    The National Computational Infrastructure (NCI) has established a powerful in-situ computational environment to enable both high performance computing and data-intensive science across a wide spectrum of national environmental data collections - in particular climate, observational data and geoscientific assets. This paper examines 1) the computational environments that support the modelling and data processing pipelines, 2) the analysis environments and methods to support data analysis, and 3) the progress in addressing harmonisation of the underlying data collections for future transdisciplinary research that enables accurate climate projections. NCI makes available 10+ PB major data collections from both the government and research sectors based on six themes: 1) weather, climate, and earth system science model simulations, 2) marine and earth observations, 3) geosciences, 4) terrestrial ecosystems, 5) water and hydrology, and 6) astronomy, social and biosciences. Collectively they span the lithosphere, crust, biosphere, hydrosphere, troposphere, and stratosphere. The data is largely sourced from NCI's partners (which include the custodians of many of the national scientific records), major research communities, and collaborating overseas organisations. The data is accessible within an integrated HPC-HPD environment - a 1.2 PFlop supercomputer (Raijin), an HPC-class 3000-core OpenStack cloud system and several highly connected large scale and high-bandwidth Lustre filesystems. This computational environment supports a catalogue of integrated reusable software and workflows from earth system and ecosystem modelling, weather research, satellite and other observed data processing and analysis. To enable transdisciplinary research on this scale, data needs to be harmonised so that researchers can readily apply techniques and software across the corpus of data available and not be constrained to work within artificial disciplinary boundaries. Future challenges will

  16. The past, present, and future of test and research reactor physics

    International Nuclear Information System (INIS)

    Ryskamp, J.M.

    1992-01-01

    Reactor physics calculations have been performed on research reactors since the first one was built 50 yr ago under the University of Chicago stadium. Since then, reactor physics calculations have evolved from Fermi-age theory calculations performed with slide rules to three-dimensional, continuous-energy, coupled neutron-photon Monte Carlo computations performed with supercomputers and workstations. Such enormous progress in reactor physics leads us to believe that the next 50 years will be just as exciting. This paper reviews this transition from the past to the future

  17. The Future of Coral Reefs Subject to Rapid Climate Change: Lessons from Natural Extreme Environments

    Directory of Open Access Journals (Sweden)

    Emma F. Camp

    2018-02-01

    Full Text Available Global climate change and localized anthropogenic stressors are driving rapid declines in coral reef health. In vitro experiments have been fundamental in providing insight into how reef organisms will potentially respond to future climates. However, such experiments are inevitably limited in their ability to reproduce the complex interactions that govern reef systems. Studies examining coral communities that already persist under naturally-occurring extreme and marginal physicochemical conditions have therefore become increasingly popular to advance ecosystem scale predictions of future reef form and function, although no single site provides a perfect analog to future reefs. Here we review the current state of knowledge that exists on the distribution of corals in marginal and extreme environments, and geographic sites at the latitudinal extremes of reef growth, as well as a variety of shallow reef systems and reef-neighboring environments (including upwelling and CO2 vent sites. We also conduct a synthesis of the abiotic data that have been collected at these systems, to provide the first collective assessment on the range of extreme conditions under which corals currently persist. We use the review and data synthesis to increase our understanding of the biological and ecological mechanisms that facilitate survival and success under sub-optimal physicochemical conditions. This comprehensive assessment can begin to: (i) highlight the extent of extreme abiotic scenarios under which corals can persist, (ii) explore whether there are commonalities in coral taxa able to persist in such extremes, (iii) provide evidence for key mechanisms required to support survival and/or persistence under sub-optimal environmental conditions, and (iv) evaluate the potential of current sub-optimal coral environments to act as potential refugia under changing environmental conditions. Such a collective approach is critical to better understand the future survival of

  18. Advanced Architectures for Astrophysical Supercomputing

    Science.gov (United States)

    Barsdell, B. R.; Barnes, D. G.; Fluke, C. J.

    2010-12-01

    Astronomers have come to rely on the increasing performance of computers to reduce, analyze, simulate and visualize their data. In this environment, faster computation can mean more science outcomes or the opening up of new parameter spaces for investigation. If we are to avoid major issues when implementing codes on advanced architectures, it is important that we have a solid understanding of our algorithms. A recent addition to the high-performance computing scene that highlights this point is the graphics processing unit (GPU). The hardware originally designed for speeding-up graphics rendering in video games is now achieving speed-ups of O(100×) in general-purpose computation - performance that cannot be ignored. We are using a generalized approach, based on the analysis of astronomy algorithms, to identify the optimal problem-types and techniques for taking advantage of both current GPU hardware and future developments in computing architectures.

  19. Micro-mechanical Simulations of Soils using Massively Parallel Supercomputers

    Directory of Open Access Journals (Sweden)

    David W. Washington

    2004-06-01

    Full Text Available In this research a computer program, Trubal version 1.51, based on the Discrete Element Method was converted to run on a Connection Machine (CM-5), a massively parallel supercomputer with 512 nodes, to expedite the computational times of simulating Geotechnical boundary value problems. The dynamic memory algorithm in the Trubal program did not perform efficiently on the CM-2 machine with the Single Instruction Multiple Data (SIMD) architecture. This was due to the communication overhead involving global array reductions, global array broadcast and random data movement. Therefore, the dynamic memory algorithm in the Trubal program was converted to a static memory arrangement and the Trubal program was successfully converted to run on CM-5 machines. The converted program was called "TRUBAL for Parallel Machines (TPM)." Simulating two physical triaxial experiments and comparing simulation results with Trubal simulations validated the TPM program. With a 512-node CM-5 machine TPM produced a nine-fold speedup demonstrating the inherent parallelism within algorithms based on the Discrete Element Method.
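
    The elementary operation inside Cundall/Strack-type Discrete Element codes such as Trubal is the contact force between overlapping particles. The sketch below shows a linear spring-dashpot normal contact between two disks; the stiffness and damping values are illustrative assumptions, not Trubal's parameters.

```python
# Elementary DEM operation: linear spring-dashpot normal contact force between
# two disks.  Parameter values (kn, cn, radii) are illustrative only.
import numpy as np

def normal_contact_force(x1, x2, v1, v2, r1, r2, kn=1.0e5, cn=50.0):
    """Force on particle 1 from contact with particle 2 (zero if no overlap)."""
    branch = x2 - x1
    dist = np.linalg.norm(branch)
    overlap = (r1 + r2) - dist
    if overlap <= 0.0:
        return np.zeros(2)
    normal = branch / dist                       # unit vector from 1 towards 2
    rel_vn = np.dot(v1 - v2, normal)             # normal approach velocity
    fn = kn * overlap + cn * rel_vn              # spring + dashpot
    return -fn * normal                          # pushes particle 1 away from 2

f1 = normal_contact_force(np.array([0.0, 0.0]), np.array([0.018, 0.0]),
                          np.array([0.1, 0.0]), np.array([-0.1, 0.0]),
                          r1=0.01, r2=0.01)
print(f1)   # repulsive force along -x on particle 1
```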

  20. Large scale simulations of lattice QCD thermodynamics on Columbia Parallel Supercomputers

    International Nuclear Information System (INIS)

    Ohta, Shigemi

    1989-01-01

    The Columbia Parallel Supercomputer project aims at the construction of a parallel processing, multi-gigaflop computer optimized for numerical simulations of lattice QCD. The project has three stages: a 16-node, 1/4 GF machine completed in April 1985; a 64-node, 1 GF machine completed in August 1987; and a 256-node, 16 GF machine now under construction. The machines all share a common architecture: a two-dimensional torus formed from a rectangular array of N1 x N2 independent and identical processors. A processor is capable of operating in a multi-instruction multi-data mode, except for periods of synchronous interprocessor communication with its four nearest neighbors. Here the thermodynamics simulations on the two working machines are reported. (orig./HSI)
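
    Nearest-neighbour addressing on such an N1 x N2 torus is straightforward to illustrate: each node's four partners follow from its grid coordinates with wrap-around at the edges. The sketch below is a generic illustration of that addressing, not the Columbia machine's actual communication hardware.

```python
# Nearest-neighbour addressing on an N1 x N2 torus: each node communicates
# only with its four neighbours, with wrap-around at the edges.
def torus_neighbors(rank, n1, n2):
    """Return the ranks of the +/-x and +/-y neighbours of `rank` on the torus."""
    i, j = divmod(rank, n2)          # row-major placement of ranks on the grid
    return {
        "+x": i * n2 + (j + 1) % n2,
        "-x": i * n2 + (j - 1) % n2,
        "+y": ((i + 1) % n1) * n2 + j,
        "-y": ((i - 1) % n1) * n2 + j,
    }

print(torus_neighbors(rank=0, n1=16, n2=16))   # corner ranks wrap around
```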

  1. Unique Methodologies for Nano/Micro Manufacturing Job Training Via Desktop Supercomputer Modeling and Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Kimball, Clyde [Northern Illinois Univ., DeKalb, IL (United States); Karonis, Nicholas [Northern Illinois Univ., DeKalb, IL (United States); Lurio, Laurence [Northern Illinois Univ., DeKalb, IL (United States); Piot, Philippe [Northern Illinois Univ., DeKalb, IL (United States); Xiao, Zhili [Northern Illinois Univ., DeKalb, IL (United States); Glatz, Andreas [Northern Illinois Univ., DeKalb, IL (United States); Pohlman, Nicholas [Northern Illinois Univ., DeKalb, IL (United States); Hou, Minmei [Northern Illinois Univ., DeKalb, IL (United States); Demir, Veysel [Northern Illinois Univ., DeKalb, IL (United States); Song, Jie [Northern Illinois Univ., DeKalb, IL (United States); Duffin, Kirk [Northern Illinois Univ., DeKalb, IL (United States); Johns, Mitrick [Northern Illinois Univ., DeKalb, IL (United States); Sims, Thomas [Northern Illinois Univ., DeKalb, IL (United States); Yin, Yanbin [Northern Illinois Univ., DeKalb, IL (United States)

    2012-11-21

    This project establishes an initiative in high speed (Teraflop)/large-memory desktop supercomputing for modeling and simulation of dynamic processes important for energy and industrial applications. It provides a training ground for employment of current students in an emerging field with skills necessary to access the large supercomputing systems now present at DOE laboratories. It also provides a foundation for NIU faculty to quantum leap beyond their current small cluster facilities. The funding extends faculty and student capability to a new level of analytic skills with concomitant publication avenues. The components of the Hewlett Packard computer obtained by the DOE funds create a hybrid combination of a Graphics Processing System (12 GPU/Teraflops) and a Beowulf CPU system (144 CPU), the first expandable via the NIU GAEA system to ~60 Teraflops integrated with a 720 CPU Beowulf system. The software is based on access to the NVIDIA/CUDA library and the ability through MATLAB multiple licenses to create additional local programs. A number of existing programs are being transferred to the CPU Beowulf Cluster. Since the expertise necessary to create the parallel processing applications has recently been obtained at NIU, this effort for software development is in an early stage. The educational program has been initiated via formal tutorials and classroom curricula designed for the coming year. Specifically, the cost focus was on hardware acquisitions and appointment of graduate students for a wide range of applications in engineering, physics and computer science.

  2. Heuristic Scheduling in Grid Environments: Reducing the Operational Energy Demand

    Science.gov (United States)

    Bodenstein, Christian

    In a world where more and more businesses seem to trade in an online market, the supply of online services to the ever-growing demand could quickly reach its capacity limits. Online service providers may find themselves maxed out at peak operation levels during high-traffic timeslots but with too little demand during low-traffic timeslots, although the latter is becoming less frequent. At this point deciding which user is allocated what level of service becomes essential. The concept of Grid computing could offer a meaningful alternative to conventional super-computing centres. Not only can Grids reach the same computing speeds as some of the fastest supercomputers, but distributed computing harbors a great energy-saving potential. When scheduling projects in such a Grid environment, however, simply assigning processes to systems becomes so computationally complex that schedules are often produced too late to execute, rendering their optimizations useless. Current schedulers attempt to maximize the utility, given some sort of constraint, often reverting to heuristics. This optimization often comes at the cost of environmental impact, in this case CO2 emissions. This work proposes an alternate model of energy-efficient scheduling while keeping a respectable amount of economic incentives untouched. Using this model, it is possible to reduce the total energy consumed by a Grid environment using 'just-in-time' flowtime management, paired with ranking nodes by efficiency.
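
    A minimal way to picture "ranking nodes by efficiency" is a greedy scheduler that places each job on the most energy-efficient node that can still meet its deadline. The sketch below is a deliberately simplified stand-in; the node and job figures are hypothetical, and the paper's model additionally balances economic incentives and flowtime.

```python
# Minimal greedy sketch of energy-aware scheduling: each job is placed on the
# most energy-efficient node that can still finish it before its deadline.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    gflops: float            # compute speed
    watts: float             # power draw under load
    busy_until: float = 0.0  # time at which the node becomes free
    @property
    def joules_per_gflop(self) -> float:
        return self.watts / self.gflops

@dataclass
class Job:
    name: str
    gflop: float             # total work
    deadline: float          # latest acceptable completion time

def schedule(jobs, nodes):
    plan = []
    for job in sorted(jobs, key=lambda j: j.deadline):
        # Candidate nodes ranked by energy efficiency (lowest J/GFLOP first).
        for node in sorted(nodes, key=lambda n: n.joules_per_gflop):
            finish = node.busy_until + job.gflop / node.gflops
            if finish <= job.deadline:
                node.busy_until = finish
                plan.append((job.name, node.name, finish))
                break
        else:
            plan.append((job.name, None, None))   # cannot meet the deadline
    return plan

nodes = [Node("gpu-node", 2000.0, 400.0), Node("cpu-node", 500.0, 200.0)]
jobs = [Job("render", 6000.0, 10.0), Job("analysis", 1000.0, 4.0)]
print(schedule(jobs, nodes))
```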

  3. BigData and computing challenges in high energy and nuclear physics

    Science.gov (United States)

    Klimentov, A.; Grigorieva, M.; Kiryanov, A.; Zarochentsev, A.

    2017-06-01

    In this contribution we discuss the various aspects of the computing resource needs of experiments in High Energy and Nuclear Physics, in particular at the Large Hadron Collider. These needs will evolve in the future when moving from the LHC to the HL-LHC in ten years from now, when the already exascale levels of data we are processing could increase by a further order of magnitude. The distributed computing environment has been a great success and the inclusion of new super-computing facilities, cloud computing and volunteer computing for the future is a big challenge, which we are successfully mastering with a considerable contribution from many super-computing centres around the world, academic and commercial cloud providers. We also discuss R&D computing projects started recently in the National Research Center "Kurchatov Institute"

  4. BigData and computing challenges in high energy and nuclear physics

    International Nuclear Information System (INIS)

    Klimentov, A.; Grigorieva, M.; Kiryanov, A.; Zarochentsev, A.

    2017-01-01

    In this contribution we discuss the various aspects of the computing resource needs of experiments in High Energy and Nuclear Physics, in particular at the Large Hadron Collider. These needs will evolve in the future when moving from the LHC to the HL-LHC in ten years from now, when the already exascale levels of data we are processing could increase by a further order of magnitude. The distributed computing environment has been a great success and the inclusion of new super-computing facilities, cloud computing and volunteer computing for the future is a big challenge, which we are successfully mastering with a considerable contribution from many super-computing centres around the world, academic and commercial cloud providers. We also discuss R and D computing projects started recently in the National Research Center "Kurchatov Institute"

  5. Computational Solutions for Today’s Navy: New Methods are Being Employed to Meet the Navy’s Changing Software-Development Environment

    Science.gov (United States)

    2008-03-01

    software-development environment. Frank W. Bentrem, Ph.D., John T. Sample, Ph.D., and Michael M. Harris. The Naval Research Laboratory (NRL) is the... sonars (Through-the-Sensor technology), supercomputer-generated numerical models, and historical/climatological databases. It uses a variety of

  6. High Temporal Resolution Mapping of Seismic Noise Sources Using Heterogeneous Supercomputers

    Science.gov (United States)

    Paitz, P.; Gokhberg, A.; Ermert, L. A.; Fichtner, A.

    2017-12-01

    The time- and space-dependent distribution of seismic noise sources is becoming a key ingredient of modern real-time monitoring of various geo-systems like earthquake fault zones, volcanoes, geothermal and hydrocarbon reservoirs. We present results of an ongoing research project conducted in collaboration with the Swiss National Supercomputing Centre (CSCS). The project aims at building a service providing seismic noise source maps for Central Europe with high temporal resolution. We use source imaging methods based on the cross-correlation of seismic noise records from all seismic stations available in the region of interest. The service is hosted on the CSCS computing infrastructure; all computationally intensive processing is performed on the massively parallel heterogeneous supercomputer "Piz Daint". The solution architecture is based on the Application-as-a-Service concept to provide the interested researchers worldwide with regular access to the noise source maps. The solution architecture includes the following sub-systems: (1) data acquisition responsible for collecting, on a periodic basis, raw seismic records from the European seismic networks, (2) high-performance noise source mapping application responsible for the generation of source maps using cross-correlation of seismic records, (3) back-end infrastructure for the coordination of various tasks and computations, (4) front-end Web interface providing the service to the end-users and (5) data repository. The noise source mapping itself rests on the measurement of logarithmic amplitude ratios in suitably pre-processed noise correlations, and the use of simplified sensitivity kernels. During the implementation we addressed various challenges, in particular, selection of data sources and transfer protocols, automation and monitoring of daily data downloads, ensuring the required data processing performance, design of a general service-oriented architecture for coordination of various sub-systems, and

  7. De Novo Ultrascale Atomistic Simulations On High-End Parallel Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Nakano, A; Kalia, R K; Nomura, K; Sharma, A; Vashishta, P; Shimojo, F; van Duin, A; Goddard, III, W A; Biswas, R; Srivastava, D; Yang, L H

    2006-09-04

    We present a de novo hierarchical simulation framework for first-principles based predictive simulations of materials and their validation on high-end parallel supercomputers and geographically distributed clusters. In this framework, high-end chemically reactive and non-reactive molecular dynamics (MD) simulations explore a wide solution space to discover microscopic mechanisms that govern macroscopic material properties, into which highly accurate quantum mechanical (QM) simulations are embedded to validate the discovered mechanisms and quantify the uncertainty of the solution. The framework includes an embedded divide-and-conquer (EDC) algorithmic framework for the design of linear-scaling simulation algorithms with minimal bandwidth complexity and tight error control. The EDC framework also enables adaptive hierarchical simulation with automated model transitioning assisted by graph-based event tracking. A tunable hierarchical cellular decomposition parallelization framework then maps the O(N) EDC algorithms onto Petaflops computers, while achieving performance tunability through a hierarchy of parameterized cell data/computation structures, as well as its implementation using hybrid Grid remote procedure call + message passing + threads programming. High-end computing platforms such as IBM BlueGene/L, SGI Altix 3000 and the NSF TeraGrid provide an excellent test grounds for the framework. On these platforms, we have achieved unprecedented scales of quantum-mechanically accurate and well validated, chemically reactive atomistic simulations--1.06 billion-atom fast reactive force-field MD and 11.8 million-atom (1.04 trillion grid points) quantum-mechanical MD in the framework of the EDC density functional theory on adaptive multigrids--in addition to 134 billion-atom non-reactive space-time multiresolution MD, with the parallel efficiency as high as 0.998 on 65,536 dual-processor BlueGene/L nodes. We have also achieved an automated execution of hierarchical QM
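
    The framework's MD engines are far more elaborate (reactive force fields, embedded linear-scaling QM), but the time-stepping loop underneath any molecular dynamics code is the velocity-Verlet integrator. The sketch below shows that loop for a single Lennard-Jones pair in reduced units as a point of reference; it is not the framework's code, and all numbers are illustrative.

```python
# Velocity-Verlet time stepping for a pair of Lennard-Jones particles -- the
# elementary integration loop underneath any MD engine, reactive or not.
import numpy as np

def lj_force(r_vec, epsilon=1.0, sigma=1.0):
    """Force on the first particle of the pair, separated by r_vec (LJ units)."""
    r2 = np.dot(r_vec, r_vec)
    sr6 = (sigma**2 / r2) ** 3
    # F = 24*eps*(2*(sigma/r)^12 - (sigma/r)^6) / r^2 * r_vec
    return 24.0 * epsilon * (2.0 * sr6**2 - sr6) / r2 * r_vec

dt, n_steps = 1.0e-3, 10000
pos = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0]])
vel = np.zeros((2, 3))
f = lj_force(pos[0] - pos[1])
forces = np.array([f, -f])                       # Newton's third law

for _ in range(n_steps):
    vel += 0.5 * dt * forces                     # half kick (unit masses)
    pos += dt * vel                              # drift
    f = lj_force(pos[0] - pos[1])
    forces = np.array([f, -f])
    vel += 0.5 * dt * forces                     # second half kick

print("final separation:", np.linalg.norm(pos[0] - pos[1]))
```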

  8. An Optimized Parallel FDTD Topology for Challenging Electromagnetic Simulations on Supercomputers

    Directory of Open Access Journals (Sweden)

    Shugang Jiang

    2015-01-01

    Full Text Available It may not be a challenge to run a Finite-Difference Time-Domain (FDTD) code for electromagnetic simulations on a supercomputer with more than 10 thousand CPU cores; however, to make the FDTD code work with the highest efficiency is a challenge. In this paper, the performance of parallel FDTD is optimized through the MPI (message passing interface) virtual topology, based on which a communication model is established. The general rules of optimal topology are presented according to the model. The performance of the method is tested and analyzed on three high performance computing platforms with different architectures in China. Simulations including an airplane with a 700-wavelength wingspan and a complex microstrip antenna array with nearly 2000 elements are performed very efficiently using a maximum of 10240 CPU cores.
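
    MPI virtual topologies of the kind discussed here are created with Cartesian communicators; halo-exchange partners then follow directly from the topology. The mpi4py snippet below shows only that basic mechanism, not the authors' optimized topology model or their communication cost analysis; the process grid is chosen automatically by MPI here.

```python
# Creating an MPI Cartesian virtual topology for an FDTD-style domain
# decomposition and querying the neighbours used for halo exchange.
# Run with, e.g.:  mpirun -n 16 python fdtd_topology.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
# Let MPI factor the process count into a balanced 3-D grid.
dims = MPI.Compute_dims(comm.Get_size(), [0, 0, 0])
cart = comm.Create_cart(dims, periods=[False, False, False], reorder=True)

coords = cart.Get_coords(cart.Get_rank())
# Source/destination ranks for halo exchange along each axis
# (MPI.PROC_NULL where no neighbour exists).
neighbors = {axis: cart.Shift(axis, 1) for axis in range(3)}

print(f"rank {cart.Get_rank()} at {coords} on grid {dims}: {neighbors}")
```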

  9. What about Place? Considering the Role of Physical Environment on Youth Imagining of Future Possible Selves

    Science.gov (United States)

    Prince, Dana

    2013-01-01

    Identity research indicates that development of well elaborated cognitions about oneself in the future, or one's possible selves, is consequential for youths' developmental trajectories, influencing a range of social, health, and educational outcomes. Although the theory of possible selves considers the role of social contexts in identity development, the potential influence of the physical environment is understudied. At the same time, a growing body of work spanning multiple disciplines points to the salience of place, or the meaningful physical environments of people's everyday lives, as an active contributor to self-identity. Bridging these two lines of inquiry, I provide evidence to show how place-based experiences, such as belonging, aversion, and entrapment, may be internalized and encoded into possible selves, thus producing emplaced future self-concept. I suggest that for young people, visioning self in the future is inextricably bound with place; place is an active contributor both in the present development of future self-concept and in enabling young people to envision different future possible places. Implications for practice and future research include place-making interventions and conceptualizing place beyond “neighborhood effects.” PMID:25642137

  10. Development of a high performance eigensolver on the peta-scale next generation supercomputer system

    International Nuclear Information System (INIS)

    Imamura, Toshiyuki; Yamada, Susumu; Machida, Masahiko

    2010-01-01

    For present supercomputer systems, multicore and multisocket processors are necessary to build a system, and the choice of interconnection is essential. In addition, for effective development of a new code, high-performance, scalable, and reliable numerical software is one of the key items. ScaLAPACK and PETSc are well-known software packages for distributed-memory parallel computer systems. It is needless to say that software highly tuned towards new architectures like many-core processors must be chosen for real computation. In this study, we present a high-performance and highly scalable eigenvalue solver towards the next-generation supercomputer system, the so-called 'K-computer' system. We have developed two versions, the standard version (eigen_s) and an enhanced performance version (eigen_sx), which were developed on the T2K cluster system housed at the University of Tokyo. Eigen_s employs the conventional algorithms: Householder tridiagonalization, the divide and conquer (DC) algorithm, and Householder back-transformation. They are carefully implemented with a blocking technique and flexible two-dimensional data distribution to reduce the overhead of memory traffic and data transfer, respectively. Eigen_s performs excellently on the T2K system with 4096 cores (theoretical peak is 37.6 TFLOPS), showing a fine performance of 3.0 TFLOPS with a two-hundred-thousand-dimensional matrix. The enhanced version, eigen_sx, uses more advanced algorithms: the narrow-band reduction algorithm, DC for band matrices, and the block Householder back-transformation with WY-representation. Even though this version is still at a test stage, it shows 4.7 TFLOPS with the same matrix dimension as eigen_s. (author)
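
    The standard version described here follows the classic three-phase dense symmetric eigensolver: reduce to tridiagonal form, solve the tridiagonal problem, and back-transform the eigenvectors. The serial sketch below reproduces those phases with SciPy building blocks as an illustration; it is not the authors' blocked, distributed implementation.

```python
# The classic three-phase dense symmetric eigensolver that eigen_s parallelizes:
# (1) reduce A to tridiagonal form, (2) solve the tridiagonal problem,
# (3) back-transform the eigenvectors.  Serial SciPy sketch only.
import numpy as np
from scipy.linalg import hessenberg, eigh_tridiagonal

rng = np.random.default_rng(0)
n = 500
a = rng.normal(size=(n, n))
a = 0.5 * (a + a.T)                              # symmetric test matrix

# Phase 1: Householder reduction (tridiagonal for symmetric input).
t, q = hessenberg(a, calc_q=True)
d = np.diag(t).copy()
e = np.diag(t, 1).copy()

# Phase 2: eigen-decomposition of the tridiagonal matrix.
evals, v_tri = eigh_tridiagonal(d, e)

# Phase 3: back-transformation to eigenvectors of the original matrix.
evecs = q @ v_tri

residual = np.linalg.norm(a @ evecs - evecs * evals)
print("residual:", residual)
```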

  11. Perspectives on Advanced Learning Technologies and Learning Networks and Future Aerospace Workforce Environments

    Science.gov (United States)

    Noor, Ahmed K. (Compiler)

    2003-01-01

    An overview of the advanced learning technologies is given in this presentation along with a brief description of their impact on future aerospace workforce development. The presentation is divided into five parts (see Figure 1). In the first part, a brief historical account of the evolution of learning technologies is given. The second part describes the current learning activities. The third part describes some of the future aerospace systems, as examples of high-tech engineering systems, and lists their enabling technologies. The fourth part focuses on future aerospace research, learning and design environments. The fifth part lists the objectives of the workshop and some of the sources of information on learning technologies and learning networks.

  12. Watson will see you now: a supercomputer to help clinicians make informed treatment decisions.

    Science.gov (United States)

    Doyle-Lindrud, Susan

    2015-02-01

    IBM has collaborated with several cancer care providers to develop and train the IBM supercomputer Watson to help clinicians make informed treatment decisions. When a patient is seen in clinic, the oncologist can input all of the clinical information into the computer system. Watson will then review all of the data and recommend treatment options based on the latest evidence and guidelines. Once the oncologist makes the treatment decision, this information can be sent directly to the insurance company for approval. Watson has the ability to standardize care and accelerate the approval process, a benefit to the healthcare provider and the patient.

  13. Affordable and accurate large-scale hybrid-functional calculations on GPU-accelerated supercomputers

    Science.gov (United States)

    Ratcliff, Laura E.; Degomme, A.; Flores-Livas, José A.; Goedecker, Stefan; Genovese, Luigi

    2018-03-01

    Performing high accuracy hybrid functional calculations for condensed matter systems containing a large number of atoms is at present computationally very demanding or even out of reach if high quality basis sets are used. We present a highly optimized multiple graphics processing unit implementation of the exact exchange operator which allows one to perform fast hybrid functional density-functional theory (DFT) calculations with systematic basis sets without additional approximations for up to a thousand atoms. With this method hybrid DFT calculations of high quality become accessible on state-of-the-art supercomputers within a time-to-solution that is of the same order of magnitude as traditional semilocal-GGA functionals. The method is implemented in a portable open-source library.

  14. Adventures in supercomputing: An innovative program for high school teachers

    Energy Technology Data Exchange (ETDEWEB)

    Oliver, C.E.; Hicks, H.R.; Summers, B.G. [Oak Ridge National Lab., TN (United States); Staten, D.G. [Wartburg Central High School, TN (United States)

    1994-12-31

    Within the realm of education, seldom does an innovative program become available with the potential to change an educator's teaching methodology. Adventures in Supercomputing (AiS), sponsored by the U.S. Department of Energy (DOE), is such a program. It is a program for high school teachers that changes the teacher paradigm from a teacher-directed approach of teaching to a student-centered approach. "A student-centered classroom offers better opportunities for development of internal motivation, planning skills, goal setting and perseverance than does the traditional teacher-directed mode". Not only is the process of teaching changed, but the cross-curricula integration within the AiS materials is remarkable. Written from a teacher's perspective, this paper will describe the AiS program and its effects on teachers and students, primarily at Wartburg Central High School, in Wartburg, Tennessee. The AiS program in Tennessee is sponsored by Oak Ridge National Laboratory (ORNL).

  15. Earth and environmental science in the 1980's: Part 1: Environmental data systems, supercomputer facilities and networks

    Science.gov (United States)

    1986-01-01

    Overview descriptions of on-line environmental data systems, supercomputer facilities, and networks are presented. Each description addresses the concepts of content, capability, and user access relevant to the point of view of potential utilization by the Earth and environmental science community. The information on similar systems or facilities is presented in parallel fashion to encourage and facilitate intercomparison. In addition, summary sheets are given for each description, and a summary table precedes each section.

  16. 2009 Community College Futures Assembly Focus: Leading Change--Leading in an Uncertain Environment

    Science.gov (United States)

    Campbell, Dale F.; Morris, Phillip A.

    2009-01-01

    The Community College Futures Assembly has served as a national, independent policy thinktank since 1995. Its purpose is to articulate the critical issues facing American community colleges and recognize innovative programs. Convening annually in January in Orlando, Florida, the Assembly offers a learning environment where tough questions are…

  17. MEDIA ENVIRONMENT AS FACTOR OF REALIZATION OF CREATIVE POTENTIAL OF FUTURE TEACHERS` IN THE MOUNTAIN SCHOOLS OF THE UKRAINIAN CARPATHIANS

    Directory of Open Access Journals (Sweden)

    Alla Lebedieva

    2015-04-01

    Full Text Available The article examines the "media environment" as a factor in realizing the creative potential of future teachers in the mountain schools of the Ukrainian Carpathians. The main focus of the research is the problem of using the media environment as a factor of future teachers' creative potential in these schools and the ways of optimizing it. It highlights ways to modernize the social and professional orientation of student training, situating the creative process within the informational and educational environment of higher education. We consider the causal link between the use of the media environment as a factor of future teachers' creative potential and the complexity of the teacher's work in the mountain schools of the Ukrainian Carpathians. The basic functions of the media environment are extensity, instrumentality, communication, interactivity and multimedia. Revealing some aspects of training students for a creatively active teaching process, we describe subjects that offer objective possibilities for forming the professional skills of future teachers and that directly affect the realization of creative potential: "Ukrainian folk art", "Basic recitation and rhetoric" and "The basis of pedagogical creativity". Creating a full-fledged media environment in higher education is an important condition for successful education and a factor that enables the realization of the creative potential of future teachers in the mountain schools of the Ukrainian Carpathians.

  18. Efficient development of memory bounded geo-applications to scale on modern supercomputers

    Science.gov (United States)

    Räss, Ludovic; Omlin, Samuel; Licul, Aleksandar; Podladchikov, Yuri; Herman, Frédéric

    2016-04-01

    Numerical modeling is a key tool in the geosciences. The current challenge is to solve problems that are multi-physics and for which the length scale and the place of occurrence might not be known in advance. Also, the spatial extent of the investigated domain might strongly vary in size, ranging from millimeters for reactive transport to kilometers for glacier erosion dynamics. An efficient way to proceed is to develop simple but robust algorithms that perform well and scale on modern supercomputers and therefore permit very high-resolution simulations. We propose an efficient approach to solve memory-bounded real-world applications on modern supercomputer architectures. We optimize the software to run on our newly acquired state-of-the-art GPU cluster "octopus". Our approach shows promising preliminary results on important geodynamical and geomechanical problems: we have developed a Stokes solver for glacier flow and a poromechanical solver including complex rheologies for nonlinear waves in stressed porous rocks. We solve the system of partial differential equations on a regular Cartesian grid and use an iterative finite difference scheme with preconditioning of the residuals. The MPI communication happens only locally (point-to-point); this method is known to scale linearly by construction. The "octopus" GPU cluster, which we use for the computations, has been designed to achieve maximal data transfer throughput at minimal hardware cost. It is composed of twenty compute nodes, each hosting four Nvidia Titan X GPU accelerators. These high-density nodes are interconnected with a parallel (dual-rail) FDR InfiniBand network. Our efforts show promising preliminary results for the different physics investigated. The glacier flow solver achieves good accuracy in the relevant benchmarks and the coupled poromechanical solver makes it possible to explain previously unresolvable focused fluid flow as a natural outcome of the porosity setup. In both cases
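
    The iterative finite-difference scheme on a regular Cartesian grid described here boils down to repeated pointwise stencil updates, the memory-bounded pattern that maps well onto GPUs. The sketch below is a minimal serial Jacobi-style iteration for the 2-D Laplace equation, offered only as an illustration of that pattern, not the glacier-flow or poromechanical solvers themselves.

```python
# Minimal pseudo-transient iteration for the 2-D Laplace equation on a regular
# Cartesian grid -- the memory-bounded stencil-update pattern discussed above.
import numpy as np

nx, ny = 256, 256
u = np.zeros((nx, ny))
u[0, :] = 1.0                                    # fixed boundary value on one edge

for it in range(20000):
    # Jacobi-style update of the interior points from the 5-point stencil.
    u_new = u.copy()
    u_new[1:-1, 1:-1] = 0.25 * (u[2:, 1:-1] + u[:-2, 1:-1]
                                + u[1:-1, 2:] + u[1:-1, :-2])
    residual = np.max(np.abs(u_new - u))
    u = u_new
    if residual < 1.0e-6:
        break

print("iterations:", it + 1, "max update:", residual)
```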

  19. Research center Juelich to install Germany's most powerful supercomputer new IBM System for science and research will achieve 5.8 trillion computations per second

    CERN Multimedia

    2002-01-01

    "The Research Center Juelich, Germany, and IBM today announced that they have signed a contract for the delivery and installation of a new IBM supercomputer at the Central Institute for Applied Mathematics" (1/2 page).

  20. Anaesthesia in austere environments: literature review and considerations for future space exploration missions.

    Science.gov (United States)

    Komorowski, Matthieu; Fleming, Sarah; Mawkin, Mala; Hinkelbein, Jochen

    2018-01-01

    Future space exploration missions will take humans far beyond low Earth orbit and require complete crew autonomy. The ability to provide anaesthesia will be important given the expected risk of severe medical events requiring surgery. Knowledge and experience of such procedures during space missions is currently extremely limited. Austere and isolated environments (such as polar bases or submarines) have been used extensively as test beds for spaceflight to probe hazards, train crews, develop clinical protocols and countermeasures for prospective space missions. We have conducted a literature review on anaesthesia in austere environments relevant to distant space missions. In each setting, we assessed how the problems related to the provision of anaesthesia (e.g., medical kit and skills) are dealt with or prepared for. We analysed how these factors could be applied to the unique environment of a space exploration mission. The delivery of anaesthesia will be complicated by many factors including space-induced physiological changes and limitations in skills and equipment. The basic principles of a safe anaesthesia in an austere environment (appropriate training, presence of minimal safety and monitoring equipment, etc.) can be extended to the context of a space exploration mission. Skills redundancy is an important safety factor, and basic competency in anaesthesia should be part of the skillset of several crewmembers. The literature suggests that safe and effective anaesthesia could be achieved by a physician during future space exploration missions. In a life-or-limb situation, non-physicians may be able to conduct anaesthetic procedures, including simplified general anaesthesia.

  1. MEGADOCK 4.0: an ultra-high-performance protein-protein docking software for heterogeneous supercomputers.

    Science.gov (United States)

    Ohue, Masahito; Shimoda, Takehiro; Suzuki, Shuji; Matsuzaki, Yuri; Ishida, Takashi; Akiyama, Yutaka

    2014-11-15

    The application of protein-protein docking in large-scale interactome analysis is a major challenge in structural bioinformatics and requires huge computing resources. In this work, we present MEGADOCK 4.0, an FFT-based docking software that makes extensive use of recent heterogeneous supercomputers and shows powerful, scalable performance of >97% strong scaling. MEGADOCK 4.0 is written in C++ with OpenMPI and NVIDIA CUDA 5.0 (or later) and is freely available to all academic and non-profit users at: http://www.bi.cs.titech.ac.jp/megadock. akiyama@cs.titech.ac.jp Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press.
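
    FFT-based docking codes in the Katchalski-Katzir tradition, which MEGADOCK builds on, discretize receptor and ligand on 3-D grids and obtain the score of every relative translation from a single forward/inverse FFT pair. The sketch below shows that scoring step with toy real-valued grids; the actual MEGADOCK score combines shape complementarity with electrostatics, and ligand rotations are enumerated around this kernel, none of which is reproduced here.

```python
# Katchalski-Katzir-style FFT scoring: the correlation of receptor and ligand
# grids gives the score of every relative translation in one FFT pair.
# Toy real-valued blob "molecules" only.
import numpy as np

def fft_translation_scores(receptor_grid, ligand_grid):
    """Score of every cyclic translation of the ligand against the receptor."""
    rec_f = np.fft.fftn(receptor_grid)
    lig_f = np.fft.fftn(ligand_grid)
    return np.real(np.fft.ifftn(rec_f * np.conj(lig_f)))

# Toy example: 32^3 grids with simple spherical shapes.
n = 32
x, y, z = np.mgrid[0:n, 0:n, 0:n]
receptor = ((x - 10)**2 + (y - 16)**2 + (z - 16)**2 < 36).astype(float)
ligand = ((x - 16)**2 + (y - 16)**2 + (z - 16)**2 < 36).astype(float)

scores = fft_translation_scores(receptor, ligand)
best = np.unravel_index(np.argmax(scores), scores.shape)
print("best (cyclic) translation of the ligand:", best)
```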

  2. Future directions in shielding methods and analysis

    International Nuclear Information System (INIS)

    Goldstein, H.

    1987-01-01

    Over the nearly half century history of shielding against reactor radiation, there has been a see-saw battle between theory and measurement. During that period the capability and accuracy of calculational methods have been enormously improved. The microscopic cross sections needed as input to the theoretical computations are now also known to adequate accuracy (with certain exceptions). Nonetheless, there remain substantial classes of shielding problems not yet accessible to satisfactory computational methods, particularly where three-dimensional geometries are involved. This paper discusses promising avenues to approach such problems, especially in the light of recent and expected advances in supercomputers. In particular, it seems that Monte Carlo methods should be much more advantageous in the new computer environment than they have been in the past
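
    A textbook illustration of the Monte Carlo approach advocated here is the estimate of particle transmission through a one-dimensional slab shield with absorption and isotropic scattering. The sketch below is exactly that toy problem; the cross-section and scattering values are arbitrary illustrative numbers, not data for any real shielding material.

```python
# Textbook Monte Carlo estimate of particle transmission through a 1-D slab
# shield with absorption and isotropic scattering.  Illustrative numbers only.
import numpy as np

rng = np.random.default_rng(42)
sigma_t = 1.0          # total macroscopic cross section (1/cm)
p_scatter = 0.3        # scattering probability per collision
thickness = 5.0        # slab thickness (cm)
n_particles = 100_000

transmitted = 0
for _ in range(n_particles):
    x, mu = 0.0, 1.0                     # start at the left face, moving right
    while True:
        x += mu * (-np.log(1.0 - rng.random()) / sigma_t)   # sample free flight
        if x >= thickness:
            transmitted += 1             # escaped through the far face
            break
        if x < 0.0:                      # scattered back out of the slab
            break
        if rng.random() > p_scatter:     # absorbed at this collision
            break
        mu = 2.0 * rng.random() - 1.0    # isotropic scatter: new direction cosine

print("transmission probability:", transmitted / n_particles)
```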

  3. Towards a future robotic home environment: a survey.

    Science.gov (United States)

    Güttler, Jörg; Georgoulas, Christos; Linner, Thomas; Bock, Thomas

    2015-01-01

    Demographic change has resulted in an increase of elderly people, while at the same time the number of active working people is falling. In the future, there will be less caretaking, which is necessary to support the aging population. In order to enable the aged population to live in dignity, they should be able to perform activities of daily living (ADLs) as independently as possible. The aim of this paper is to describe several solutions and concepts that can support elderly people in their ADLs in a way that allows them to stay self-sufficient for as long as possible. To reach this goal, the Building Realization and Robotics Lab is researching in the field of ambient assisted living. The idea is to implement robots and sensors in the home environment so as to efficiently support the inhabitants in their ADLs and eventually increase their independence. Through embedding vital sensors into furniture and using ICT technologies, the health status of elderly people can be remotely evaluated by a physician or family members. By investigating ergonomic aspects specific to elderly people (e.g. via an age-simulation suit), it is possible to develop and test new concepts and novel applications, which will offer innovative solutions. Via the introduction of mechatronics and robotics, the home environment can be made able to seamlessly interact with the inhabitant through gestures, vocal commands, and visual recognition algorithms. Meanwhile, several solutions have been developed that address how to build a smart home environment in order to create an ambient assisted environment. This article describes how these concepts were developed. The approach for each concept, proposed in this article, was performed as follows: (1) research of needs, (2) creating definitions of requirements, (3) identification of necessary technology and processes, (4) building initial concepts, (5) experiments in a real environment, and (6) development of the final concepts. To keep these concepts

  4. Exploration and production environment. Preserving the future our responsibility; Exploration et production environnement. Preserver l'avenir: notre responsabilite

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2004-07-01

    This document presents the Total Group commitments to manage natural resources in a rational way, to preserve biodiversity for future generations and protect the environment. It contains the health, safety, environment and quality charter of Total, the 12 exploration and production health, safety and environment rules and the exploration and production environmental policy. (A.L.B.)

  5. Wavelet transform-vector quantization compression of supercomputer ocean model simulation output

    Energy Technology Data Exchange (ETDEWEB)

    Bradley, J N; Brislawn, C M

    1992-11-12

    We describe a new procedure for efficient compression of digital information for storage and transmission purposes. The algorithm involves a discrete wavelet transform subband decomposition of the data set, followed by vector quantization of the wavelet transform coefficients using application-specific vector quantizers. The new vector quantizer design procedure optimizes the assignment of both memory resources and vector dimensions to the transform subbands by minimizing an exponential rate-distortion functional subject to constraints on both overall bit-rate and encoder complexity. The wavelet-vector quantization method, which originates in digital image compression, is applicable to the compression of other multidimensional data sets possessing some degree of smoothness. In this paper we discuss the use of this technique for compressing the output of supercomputer simulations of global climate models. The data presented here comes from Semtner-Chervin global ocean models run at the National Center for Atmospheric Research and at the Los Alamos Advanced Computing Laboratory.
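
    The structure of the scheme (wavelet subband decomposition, then quantization of the coefficients) can be sketched with PyWavelets. In the example below the trained, application-specific vector quantizers of the paper are replaced by crude uniform scalar quantization of each subband to keep the example short, and the input is a synthetic smooth field standing in for a model output slice.

```python
# Wavelet subband decomposition followed by coefficient quantization -- the
# overall structure of the compression scheme described above.  PyWavelets
# supplies the transform; uniform scalar quantization stands in for the
# paper's vector quantizers.
import numpy as np
import pywt

def compress_decompress(field, wavelet="db4", levels=3, step=0.05):
    coeffs = pywt.wavedec2(field, wavelet, level=levels)
    quantized = [np.round(coeffs[0] / step) * step]          # approximation band
    for detail in coeffs[1:]:
        quantized.append(tuple(np.round(d / step) * step for d in detail))
    return pywt.waverec2(quantized, wavelet)

# Synthetic smooth 2-D field standing in for a model output slice.
x = np.linspace(0, 4 * np.pi, 256)
field = np.outer(np.sin(x), np.cos(x))
recon = compress_decompress(field)
print("max reconstruction error:", np.max(np.abs(recon[:256, :256] - field)))
```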

  6. Algorithms for supercomputers

    International Nuclear Information System (INIS)

    Alder, B.J.

    1986-01-01

    Better numerical procedures, improved computational power and additional physical insights have contributed significantly to progress in dealing with classical and quantum statistical mechanics problems. Past developments are discussed and future possibilities outlined

  7. Algorithms for supercomputers

    International Nuclear Information System (INIS)

    Alder, B.J.

    1985-12-01

    Better numerical procedures, improved computational power and additional physical insights have contributed significantly to progress in dealing with classical and quantum statistical mechanics problems. Past developments are discussed and future possibilities outlined

  8. Debating the future of comfort: environmental sustainability, energy consumption and the indoor environment

    Energy Technology Data Exchange (ETDEWEB)

    Chappells, H.; Shove, E.

    2005-02-01

    Vast quantities of energy are consumed in heating and cooling to provide what are now regarded as acceptable standards of thermal comfort. In the UK as in a number of other countries, there is a real danger that responses in anticipation of global warming and climate change - including growing reliance on air-conditioning - will increase energy demand and CO2 emissions even further. This is an appropriate moment to reflect on the history and future of comfort, both as an idea and as a material reality. Based on interviews and discussions with UK policy makers and building practitioners involved in specifying and constructing what will become the indoor environments of the future, four possible scenarios are identified each with different implications for energy and resource consumption. By actively promoting debate about the indoor environment and associated ways of life, it may yet be possible to avoid becoming locked into social and technical trajectories that are ultimately unsustainable. The aim of this paper is to inspire and initiate just such a discussion through demonstrating that comfort is a highly negotiable socio-cultural construct. (author)

  9. Computational fluid dynamics: complex flows requiring supercomputers. January 1975-July 1988 (Citations from the INSPEC: Information Services for the Physics and Engineering Communities data base). Report for January 1975-July 1988

    International Nuclear Information System (INIS)

    1988-08-01

    This bibliography contains citations concerning computational fluid dynamics (CFD), a new method in computational science to perform complex flow simulations in three dimensions. Applications include aerodynamic design and analysis for aircraft, rockets, and missiles, and automobiles; heat-transfer studies; and combustion processes. Included are references to supercomputers, array processors, and parallel processors where needed for complete, integrated design. Also included are software packages and grid-generation techniques required to apply CFD numerical solutions. Numerical methods for fluid dynamics, not requiring supercomputers, are found in a separate published search. (Contains 83 citations fully indexed and including a title list.)

  10. Performance Characteristics of Hybrid MPI/OpenMP Scientific Applications on a Large-Scale Multithreaded BlueGene/Q Supercomputer

    KAUST Repository

    Wu, Xingfu; Taylor, Valerie

    2013-01-01

    In this paper, we investigate the performance characteristics of five hybrid MPI/OpenMP scientific applications (two NAS Parallel benchmarks Multi-Zone SP-MZ and BT-MZ, an earthquake simulation PEQdyna, an aerospace application PMLB and a 3D particle-in-cell application GTC) on a large-scale multithreaded Blue Gene/Q supercomputer at Argonne National Laboratory, and quantify the performance gap resulting from using different numbers of threads per node. We use performance tools and MPI profile and trace libraries available on the supercomputer to analyze and compare the performance of these hybrid scientific applications as the number of OpenMP threads per node increases, and find that increasing the number of threads eventually saturates or worsens the performance of these hybrid applications. For the strong-scaling hybrid scientific applications such as SP-MZ, BT-MZ, PEQdyna and PMLB, using 32 threads per node results in much better application efficiency than using 64 threads per node, and as the number of threads per node increases, the FPU (Floating Point Unit) percentage decreases, and the MPI percentage (except PMLB) and IPC (Instructions per cycle) per core (except BT-MZ) increase. For the weak-scaling hybrid scientific application such as GTC, the performance trend (relative speedup) is very similar as the number of threads per node increases, no matter how many nodes (32, 128, 512) are used. © 2013 IEEE.

  11. Performance Characteristics of Hybrid MPI/OpenMP Scientific Applications on a Large-Scale Multithreaded BlueGene/Q Supercomputer

    KAUST Repository

    Wu, Xingfu

    2013-07-01

    In this paper, we investigate the performance characteristics of five hybrid MPI/OpenMP scientific applications (two NAS Parallel benchmarks Multi-Zone SP-MZ and BT-MZ, an earthquake simulation PEQdyna, an aerospace application PMLB and a 3D particle-in-cell application GTC) on a large-scale multithreaded Blue Gene/Q supercomputer at Argonne National Laboratory, and quantify the performance gap resulting from using different numbers of threads per node. We use performance tools and MPI profile and trace libraries available on the supercomputer to analyze and compare the performance of these hybrid scientific applications as the number of OpenMP threads per node increases, and find that increasing the number of threads eventually saturates or worsens the performance of these hybrid applications. For the strong-scaling hybrid scientific applications such as SP-MZ, BT-MZ, PEQdyna and PMLB, using 32 threads per node results in much better application efficiency than using 64 threads per node, and as the number of threads per node increases, the FPU (Floating Point Unit) percentage decreases, and the MPI percentage (except PMLB) and IPC (Instructions per cycle) per core (except BT-MZ) increase. For the weak-scaling hybrid scientific application such as GTC, the performance trend (relative speedup) is very similar as the number of threads per node increases, no matter how many nodes (32, 128, 512) are used. © 2013 IEEE.

  12. NORTH-EAST ROMANIA AS A FUTURE SOURCE OF TREES FOR URBAN PAVED ENVIRONMENTS IN NORTH-WEST EUROPE

    Directory of Open Access Journals (Sweden)

    SJÖMAN HENRIK

    2009-12-01

    Full Text Available Trees are an important feature of the urban environment. The problem today lies not in finding a wide range of well-adapted tree species for park environments, but in finding species suitable for urban paved sites. In north-west Europe, it is unlikely that the limited native dendroflora will provide a large variety of tree species with high tolerance to the environmental stresses characterising urban paved sites in the region. However, other regions with a comparable climate but a richer dendroflora can potentially provide new tree species and genera well suited to the growing conditions at urban sites in north-west Europe. This paper examines the potential of a geographical area extending over north-east Romania and the Republic of Moldavia to supply suitable tree species for urban paved sites in Central and Northern Europe (CNE). The study compared the temperature, precipitation, evapotranspiration and water runoff in the woodland area of Iasi, Romania, with those of the current inner-city climate of Copenhagen, Denmark, and with those predicted for Copenhagen in 2100. The latter included urban heat island effects and predicted global climate change. The results revealed a similar pattern in summer water deficit and temperature between the natural woodlands of Iasi and the inner-city environment of Copenhagen today. On the other hand, there is a weak match between Iasi and the future Copenhagen. Matching the future Copenhagen scenario with the present situation in Iasi requires an early understanding that the solution depends not only on suitable tree species but also on technical solutions developed to sustain trees in paved environments in the future. On the basis of precipitation and temperature data, natural woodlands in north-east Romania have the potential to be a source of suitable trees for urban paved environments in the CNE region, even for a future climate if other aspects in the planning of trees

  13. Massive hybrid parallelism for fully implicit multiphysics

    International Nuclear Information System (INIS)

    Gaston, D. R.; Permann, C. J.; Andrs, D.; Peterson, J. W.

    2013-01-01

    As hardware advances continue to modify the supercomputing landscape, traditional scientific software development practices will become more outdated, ineffective, and inefficient. The process of rewriting/retooling existing software for new architectures is a Sisyphean task, and results in a substantial expenditure of development time, effort, and money. Software libraries which provide an abstraction of the resources provided by such architectures are therefore essential if the computational engineering and science communities are to continue to flourish in this modern computing environment. The Multiphysics Object Oriented Simulation Environment (MOOSE) framework enables complex multiphysics analysis tools to be built rapidly by scientists, engineers, and domain specialists, while also allowing them both to take advantage of current HPC architectures and to prepare efficiently for future supercomputer designs. MOOSE employs a hybrid shared-memory and distributed-memory parallel model and provides a complete and consistent interface for creating multiphysics analysis tools. In this paper, a brief discussion of the mathematical algorithms underlying the framework and of the internal object-oriented hybrid parallel design is given. Representative massively parallel results from several application areas are presented, and a brief discussion of future areas of research for the framework is provided. (authors)

  14. Massive hybrid parallelism for fully implicit multiphysics

    Energy Technology Data Exchange (ETDEWEB)

    Gaston, D. R.; Permann, C. J.; Andrs, D.; Peterson, J. W. [Idaho National Laboratory, 2525 N. Fremont Ave., Idaho Falls, ID 83415 (United States)]

    2013-07-01

    As hardware advances continue to modify the supercomputing landscape, traditional scientific software development practices will become more outdated, ineffective, and inefficient. The process of rewriting/retooling existing software for new architectures is a Sisyphean task, and results in a substantial expenditure of development time, effort, and money. Software libraries which provide an abstraction of the resources provided by such architectures are therefore essential if the computational engineering and science communities are to continue to flourish in this modern computing environment. The Multiphysics Object Oriented Simulation Environment (MOOSE) framework enables complex multiphysics analysis tools to be built rapidly by scientists, engineers, and domain specialists, while also allowing them both to take advantage of current HPC architectures and to prepare efficiently for future supercomputer designs. MOOSE employs a hybrid shared-memory and distributed-memory parallel model and provides a complete and consistent interface for creating multiphysics analysis tools. In this paper, a brief discussion of the mathematical algorithms underlying the framework and of the internal object-oriented hybrid parallel design is given. Representative massively parallel results from several application areas are presented, and a brief discussion of future areas of research for the framework is provided. (authors)

  15. MASSIVE HYBRID PARALLELISM FOR FULLY IMPLICIT MULTIPHYSICS

    Energy Technology Data Exchange (ETDEWEB)

    Cody J. Permann; David Andrs; John W. Peterson; Derek R. Gaston

    2013-05-01

    As hardware advances continue to modify the supercomputing landscape, traditional scientific software development practices will become more outdated, ineffective, and inefficient. The process of rewriting/retooling existing software for new architectures is a Sisyphean task, and results in a substantial expenditure of development time, effort, and money. Software libraries which provide an abstraction of the resources provided by such architectures are therefore essential if the computational engineering and science communities are to continue to flourish in this modern computing environment. The Multiphysics Object Oriented Simulation Environment (MOOSE) framework enables complex multiphysics analysis tools to be built rapidly by scientists, engineers, and domain specialists, while also allowing them both to take advantage of current HPC architectures and to prepare efficiently for future supercomputer designs. MOOSE employs a hybrid shared-memory and distributed-memory parallel model and provides a complete and consistent interface for creating multiphysics analysis tools. In this paper, a brief discussion of the mathematical algorithms underlying the framework and of the internal object-oriented hybrid parallel design is given. Representative massively parallel results from several application areas are presented, and a brief discussion of future areas of research for the framework is provided.

  16. High Performance Simulation of Large-Scale Red Sea Ocean Bottom Seismic Data on the Supercomputer Shaheen II

    KAUST Repository

    Tonellot, Thierry

    2017-02-27

    than three days. After careful optimization of the finite difference kernel, each gather was computed at 184 gigaflops, on average. Up to 6,103 nodes could be used during the computation, resulting in a peak computation speed greater than 1.11 petaflops. The synthetic seismic data using the planned survey geometry was available one month before the actual acquisition, allowing for early real scale validation of our processing and imaging workflows. Moreover, the availability of a massive supercomputer such as Shaheen II enables fast reverse time migration (RTM) and full waveform inversion, and therefore, a more accurate velocity model estimation for future work.

  17. High Performance Simulation of Large-Scale Red Sea Ocean Bottom Seismic Data on the Supercomputer Shaheen II

    KAUST Repository

    Tonellot, Thierry; Etienne, Vincent; Gashawbeza, Ewenet; Curiel, Emesto Sandoval; Khan, Azizur; Feki, Saber; Kortas, Samuel

    2017-01-01

    three days. After careful optimization of the finite difference kernel, each gather was computed at 184 gigaflops, on average. Up to 6,103 nodes could be used during the computation, resulting in a peak computation speed greater than 1.11 petaflops. The synthetic seismic data using the planned survey geometry was available one month before the actual acquisition, allowing for early real scale validation of our processing and imaging workflows. Moreover, the availability of a massive supercomputer such as Shaheen II enables fast reverse time migration (RTM) and full waveform inversion, and therefore, a more accurate velocity model estimation for future work.

  18. Strategic planning for future learning environments: an exploration of interpersonal, interprofessional and political factors.

    Science.gov (United States)

    Schmidt, Cathrine

    2013-09-01

    This article, written from the stance of a public planner and a policy maker, explores the challenges and potential in creating future learning environments through the concept of a new learning landscape. It is based on the belief that physical planning can support the strategic goals of universities. In Denmark, a political focus on education as a means to improve national capacity for innovation and growth is redefining the universities' role in society. This is in turn changing the circumstances for physical planning. Drawing on examples of physical initiatives at three different scales (city, building and room), the paper highlights how space and place matter on an interpersonal, an interprofessional and a political level. The article suggests that a wider understanding of how new learning landscapes are created, both as a material reality and as a political discourse, can help frame an emerging community of practice. This involves university leaders, faculty and students, architects, designers and urban planners, citizens and policy makers with the common goal of creating future learning environments today.

  19. A 2-layer and P2P-based architecture on resource location in future grid environment

    International Nuclear Information System (INIS)

    Pei Erming; Sun Gongxin; Zhang Weiyi; Pang Yangguang; Gu Ming; Ma Nan

    2004-01-01

    Grid and Peer-to-Peer computing are two distributed resource sharing environments that have developed rapidly in recent years. The final objective of Grid, as well as that of P2P technology, is to pool large sets of resources effectively so that they can be used in a more convenient, fast and transparent way. We can speculate that, though many differences exist, Grid and P2P environments will converge into a large-scale resource sharing environment that combines the characteristics of the two: large diversity, high heterogeneity (of resources), dynamism, and lack of central control. Resource discovery in this future Grid environment is a basic but important problem. In this article, we propose a two-layer and P2P-based architecture for resource discovery and design a detailed algorithm for resource request propagation in the computing environment discussed above. (authors)

  20. Robots, multi-user virtual environments and healthcare: synergies for future directions.

    Science.gov (United States)

    Moon, Ajung; Grajales, Francisco J; Van der Loos, H F Machiel

    2011-01-01

    The adoption of technology in healthcare over the last twenty years has steadily increased, particularly as it relates to medical robotics and Multi-User Virtual Environments (MUVEs) such as Second Life. Both disciplines have been shown to improve the quality of care and have evolved, for the most part, in isolation from each other. In this paper, we present four synergies between medical robotics and MUVEs that have the potential to decrease resource utilization and improve the quality of healthcare delivery. We conclude with some foreseeable barriers and future research directions for researchers in these fields.

  1. Car2x with software defined networks, network functions virtualization and supercomputers technical and scientific preparations for the Amsterdam Arena telecoms fieldlab

    NARCIS (Netherlands)

    Meijer R.J.; Cushing R.; De Laat C.; Jackson P.; Klous S.; Koning R.; Makkes M.X.; Meerwijk A.

    2015-01-01

    In the invited talk 'Car2x with SDN, NFV and supercomputers' we report on how our past work with SDN [1, 2] allows the design of a smart mobility fieldlab in the huge parking lot of the Amsterdam Arena. We explain how we can engineer and test software that handles the complex conditions of the Car2X

  2. Exploration and production environment. Preserving the future our responsibility; Exploration et production environnement. Preserver l'avenir: notre responsabilite

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2004-07-01

    This document presents the Total Group commitments to manage natural resources in a rational way, to preserve biodiversity for future generations and protect the environment. It contains the health, safety, environment and quality charter of Total, the 12 exploration and production health, safety and environment rules and the exploration and production environmental policy. (A.L.B.)

  3. Cielo Computational Environment Usage Model With Mappings to ACE Requirements for the General Availability User Environment Capabilities Release Version 1.1

    Energy Technology Data Exchange (ETDEWEB)

    Vigil, Benny Manuel [Los Alamos National Laboratory]; Ballance, Robert [SNL]; Haskell, Karen [SNL]

    2012-08-09

    Cielo is a massively parallel supercomputer funded by the DOE/NNSA Advanced Simulation and Computing (ASC) program, and operated by the Alliance for Computing at Extreme Scale (ACES), a partnership between Los Alamos National Laboratory (LANL) and Sandia National Laboratories (SNL). The primary Cielo compute platform is physically located at Los Alamos National Laboratory. This Cielo Computational Environment Usage Model documents the capabilities and the environment to be provided for the Q1 FY12 Level 2 Cielo Capability Computing (CCC) Platform Production Readiness Milestone. This document describes specific capabilities, tools, and procedures to support both local and remote users. The model is focused on the needs of the ASC user working in the secure computing environments at Lawrence Livermore National Laboratory (LLNL), Los Alamos National Laboratory, or Sandia National Laboratories, but also addresses the needs of users working in the unclassified environment. The Cielo Computational Environment Usage Model maps the provided capabilities to the tri-Lab ASC Computing Environment (ACE) Version 8.0 requirements. The ACE requirements reflect the high performance computing requirements for the Production Readiness Milestone user environment capabilities of the ASC community. A description of ACE requirements met, and those requirements that are not met, are included in each section of this document. The Cielo Computing Environment, along with the ACE mappings, has been issued and reviewed throughout the tri-Lab community.

  4. Design Rework Prediction in Concurrent Design Environment: Current Trends and Future Research Directions

    OpenAIRE

    Arundachawat, Panumas; Roy, Rajkumar; Al-Ashaab, Ahmed; Shehab, Essam

    2009-01-01

    Organised by: Cranfield University. This paper aims to present the state of the art and to formulate future research areas on design rework in a concurrent design environment. The related literature is analysed to extract the key factors which impact design rework. Design rework occurs due to changes from upstream design activities and/or by feedback from downstream design activities. Design rework is considered as negative iteration; therefore, value in design activities will be increase...

  5. Behaviour and control of radionuclides in the environment: present state of knowledge and future needs

    International Nuclear Information System (INIS)

    Myttenaere, C.

    1983-01-01

    The Radiation Protection Programme of the European Communities is discussed in the context of the behaviour and control of radionuclides in the environment with reference to the aims of the programme, the results of current research activities and requirements for future studies. The summarised results of the radioecological research activities for 1976-1980 include the behaviour of α-emitters (Pu, Am, Cm), 99Tc, 137Cs, 144Ce, 106Ru and 125Sb in marine environments; atmospheric dispersion of radionuclides; and the transport of radionuclides in components of freshwater and terrestrial ecosystems. (U.K.)

  6. The present and future of microplastic pollution in the marine environment

    International Nuclear Information System (INIS)

    Ivar do Sul, Juliana A.; Costa, Monica F.

    2014-01-01

    Recently, research examining the occurrence of microplastics in the marine environment has substantially increased. Field and laboratory work regularly provide new evidence on the fate of microplastic debris. This debris has been observed within every marine habitat. In this study, at least 101 peer-reviewed papers investigating microplastic pollution were critically analysed (Supplementary material). Microplastics are commonly studied in relation to (1) plankton samples, (2) sandy and muddy sediments, (3) vertebrate and invertebrate ingestion, and (4) chemical pollutant interactions. All of the marine organism groups are at an eminent risk of interacting with microplastics according to the available literature. Dozens of works on other relevant issues (i.e., polymer decay at sea, new sampling and laboratory methods, emerging sources, externalities) were also analysed and discussed. This paper provides the first in-depth exploration of the effects of microplastics on the marine environment and biota. The number of scientific publications will increase in response to present and projected plastic uses and discard patterns. Therefore, new themes and important approaches for future work are proposed. Highlights: • >100 works on microplastic marine pollution were reviewed and discussed. • Microplastics (fibres, fragments, pellets) are widespread in oceans and sediments. • Microplastics interact with POPs and contaminate the marine biota when ingested. • The marine food web might be affected by microplastic biomagnification. • Urgently needed integrated approaches are suggested to different stakeholders. -- Microplastics, which are ubiquitous in marine habitats, affect all facets of the environment and continuously cause unexpected consequences for the environment and its biota

  7. Portable implementation model for CFD simulations. Application to hybrid CPU/GPU supercomputers

    Science.gov (United States)

    Oyarzun, Guillermo; Borrell, Ricard; Gorobets, Andrey; Oliva, Assensi

    2017-10-01

    Nowadays, high performance computing (HPC) systems experience a disruptive moment with a variety of novel architectures and frameworks, without any clarity of which one is going to prevail. In this context, the portability of codes across different architectures is of major importance. This paper presents a portable implementation model based on an algebraic operational approach for direct numerical simulation (DNS) and large eddy simulation (LES) of incompressible turbulent flows using unstructured hybrid meshes. The strategy proposed consists in representing the whole time-integration algorithm using only three basic algebraic operations: sparse matrix-vector product, a linear combination of vectors and dot product. The main idea is based on decomposing the nonlinear operators into a concatenation of two SpMV operations. This provides high modularity and portability. An exhaustive analysis of the proposed implementation for hybrid CPU/GPU supercomputers has been conducted with tests using up to 128 GPUs. The main objective consists in understanding the challenges of implementing CFD codes on new architectures.
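
    The algebraic approach described in this record reduces the time integrator to three kernels. As a rough illustration (a sketch assuming CSR storage and invented data; it is not the authors' code), the kernels and a single explicit Euler step expressed only through them could look like this:

        /* Illustrative kernel set: CSR sparse matrix-vector product (SpMV),
           linear combination of vectors (axpy) and dot product.  A time
           integrator written only in terms of these calls is comparatively
           easy to port across CPU and GPU back-ends. */
        #include <stddef.h>
        #include <stdio.h>

        typedef struct {               /* CSR storage: row offsets, columns, values */
            size_t n;
            const size_t *rowptr;
            const size_t *col;
            const double *val;
        } csr_matrix;

        static void spmv(const csr_matrix *A, const double *x, double *y)
        {
            for (size_t i = 0; i < A->n; ++i) {
                double s = 0.0;
                for (size_t k = A->rowptr[i]; k < A->rowptr[i + 1]; ++k)
                    s += A->val[k] * x[A->col[k]];
                y[i] = s;
            }
        }

        static void axpy(size_t n, double a, const double *x, double *y)  /* y += a*x */
        {
            for (size_t i = 0; i < n; ++i)
                y[i] += a * x[i];
        }

        static double dot(size_t n, const double *x, const double *y)
        {
            double s = 0.0;
            for (size_t i = 0; i < n; ++i)
                s += x[i] * y[i];
            return s;
        }

        int main(void)
        {
            /* tiny 2x2 example matrix [[2,1],[0,3]] in CSR form (invented data) */
            const size_t rowptr[] = {0, 2, 3};
            const size_t col[]    = {0, 1, 1};
            const double val[]    = {2.0, 1.0, 3.0};
            const csr_matrix A    = {2, rowptr, col, val};

            double y[2] = {1.0, 1.0}, Ay[2];
            const double dt = 0.01;

            spmv(&A, y, Ay);          /* one explicit Euler step: y += dt * A*y */
            axpy(2, dt, Ay, y);
            printf("||y||^2 = %f\n", dot(2, y, y));
            return 0;
        }

    Swapping the bodies of spmv, axpy and dot for GPU library back-ends (for example cuSPARSE/cuBLAS) leaves the integrator untouched, which is the portability argument made in the record.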

  8. Parallel adaptation of a vectorised quantumchemical program system

    International Nuclear Information System (INIS)

    Van Corler, L.C.H.; Van Lenthe, J.H.

    1987-01-01

    Supercomputers, like the CRAY 1 or the Cyber 205, have had, and still have, a marked influence on Quantum Chemistry. Vectorization has led to a considerable increase in the performance of Quantum Chemistry programs. However, clock-cycle times more than a factor of 10 smaller than those of the present supercomputers are not to be expected. Therefore future supercomputers will have to depend on parallel structures. Recently, the first examples of such supercomputers have been installed. To be prepared for this new generation of (parallel) supercomputers one should consider the concepts one wants to use and the kind of problems one will encounter during implementation of existing vectorized programs on those parallel systems. The authors implemented four important parts of a large quantumchemical program system (ATMOL), i.e. integrals, SCF, 4-index and Direct-CI, in the parallel environment at ECSEC (Rome, Italy). This system offers simulated parallelism on the host computer (IBM 4381) and real parallelism on at most 10 attached processors (FPS-164). Quantumchemical programs usually handle large amounts of data and very large, often sparse matrices. The transfer of that many data can cause problems concerning communication and overhead, in view of which shared memory and shared disks must be considered. The strategy and the tools that were used to parallelise the programs are shown. Also, some examples are presented to illustrate the effectiveness and performance of the system in Rome for these types of calculations

  9. The Future of the Brigade Combat Team: Air-Ground Integration and the Operating Environment

    Science.gov (United States)

    2017-06-09

    coordinate, and control joint and multinational aircraft during CAS situations in combat and training. The current system which the CAS mission falls... current system, experiences from Vietnam, Operation Desert Storm, Afghanistan and Iraq help to identify future challenges to the operating environment... multinational partners. Subject terms: Air Ground Integration, Theater Air Ground System, Theater Air Control System, Army Air Ground System, Joint

  10. A future data environment - reusability vs. citability and synchronisation vs. ingestion

    Science.gov (United States)

    Fleischer, D.

    2012-04-01

    During the last decades data managers dedicated their work to the pursuit of importable data. In recent years this chase seems to be coming to an end, as funding organisations assume that the approach of data publications with citable data sets will eliminate the reluctance of scientists to commit their data. But is this true for all the problems we are facing at the edge of a data avalanche and data-intensive science? The concept of citable data is a logical consequence of the connection of points. Potential data providers in the past usually complained about the missing credit assignment for data providers, and they still do. The selected way of DOI-captured data sets fits perfectly into the credit system of publisher-driven publications with countable citations. This system has been well known to scientists for approximately 400 years now. Unfortunately, there is a double-bind situation between citability and reusability. While cooperations between publishers and data archives are coming into existence, it is necessary to get one question clear: "Is it really worthwhile in the twenty-first century to force data into the publication process of the seventeenth century?" Data publications enable easy citability, but do not support easy data reusability for future users. Additional problems occur in such an environment when taking into account the chances of collaborative data corrections in the institutional repository. The future, with huge amounts of data connected with publications, makes reconsideration towards a more integrated approach reasonable. In the past, data archives were the only infrastructures taking care of long-term data retrievability and availability. Nevertheless, they were never a part of the scientific process of data creation, analysis, interpretation and publication. Data archives were regarded as isolated islands in the sea of scientific data. Accordingly scientists considered data publications like a stumbling stone in their daily routines and

  11. The present and future of microplastic pollution in the marine environment.

    Science.gov (United States)

    Ivar do Sul, Juliana A; Costa, Monica F

    2014-02-01

    Recently, research examining the occurrence of microplastics in the marine environment has substantially increased. Field and laboratory work regularly provide new evidence on the fate of microplastic debris. This debris has been observed within every marine habitat. In this study, at least 101 peer-reviewed papers investigating microplastic pollution were critically analysed (Supplementary material). Microplastics are commonly studied in relation to (1) plankton samples, (2) sandy and muddy sediments, (3) vertebrate and invertebrate ingestion, and (4) chemical pollutant interactions. All of the marine organism groups are at an eminent risk of interacting with microplastics according to the available literature. Dozens of works on other relevant issues (i.e., polymer decay at sea, new sampling and laboratory methods, emerging sources, externalities) were also analysed and discussed. This paper provides the first in-depth exploration of the effects of microplastics on the marine environment and biota. The number of scientific publications will increase in response to present and projected plastic uses and discard patterns. Therefore, new themes and important approaches for future work are proposed. Copyright © 2013 Elsevier Ltd. All rights reserved.

  12. Emerging and Future Computing Paradigms and Their Impact on the Research, Training, and Design Environments of the Aerospace Workforce

    Science.gov (United States)

    Noor, Ahmed K. (Compiler)

    2003-01-01

    The document contains the proceedings of the training workshop on Emerging and Future Computing Paradigms and their impact on the Research, Training and Design Environments of the Aerospace Workforce. The workshop was held at NASA Langley Research Center, Hampton, Virginia, March 18 and 19, 2003. The workshop was jointly sponsored by Old Dominion University and NASA. Workshop attendees came from NASA, other government agencies, industry and universities. The objectives of the workshop were to a) provide broad overviews of the diverse activities related to new computing paradigms, including grid computing, pervasive computing, high-productivity computing, and the IBM-led autonomic computing; and b) identify future directions for research that have high potential for future aerospace workforce environments. The format of the workshop included twenty-one half-hour overview-type presentations and three exhibits by vendors.

  13. ASPECTS OF PROFESSIONAL DEVELOPMENT OF FUTURE TEACHER OF PHYSICAL CULTURE ARE IN INFORMATIVELY-EDUCATIONAL ENVIRONMENT OF HIGHER EDUCATIONAL ESTABLISHMENT

    OpenAIRE

    Yuriy V. Dragnev

    2011-01-01

    In the article, aspects of the professional development of future teachers of physical culture in the informatively-educational environment of a higher educational establishment are examined. The importance of introducing information and telecommunication technologies in the sphere of higher education is shown; the components of the informatively-educational environment are given; and the concepts of "professional development" and "informatively-educational environment" are explained. It is specified that informative su...

  14. STEPS OF THE DESIGN OF CLOUD ORIENTED LEARNING ENVIRONMENT IN THE STUDY OF DATABASES FOR FUTURE TEACHERS OF INFORMATICS

    Directory of Open Access Journals (Sweden)

    Oleksandr M. Kryvonos

    2018-02-01

    Full Text Available The article describes the introduction of cloud services into the educational process of the discipline «Databases» for future teachers of informatics and the design of a cloud oriented learning environment on their basis. An analysis of the domestic experience of forming a cloud oriented learning environment in educational institutions is carried out, and interpretations are given of the concepts «cloud oriented distance learning system», «cloud oriented learning environment in the study of databases», and «the design of the cloud oriented learning environment in the study of databases for future teachers of informatics». The following stages of designing the COLE are selected and described: targeted, conceptual, meaningful, component, introductory, and appraisal-generalization. The structure of the educational interaction of subjects in the study of databases in the conditions of the COLE is developed by means of the cloud oriented distance learning system Canvas, consisting of tools for communication, joint work, planning of educational events, and cloud storage.

  15. The situation and future deployment of the simulation technology relevant to dry type re-processing methods, to an argument sake

    International Nuclear Information System (INIS)

    Kobayashi, Hiroaki

    2004-01-01

    The arithmetic calculation ability of computers has recently made remarkable progress, and this power will be further accelerated by the use of distributed computing methods. Based on this situation, the application of simulation technology to dry type re-processing is first presented, the main purpose of which is the reduction of experimental cost. Then, what simulation technologies should look like in a future age, when powerful computers can be used easily, is discussed. The discussion also covers the present situation and the transition period before such a powerful computer age arrives. The concept of future computer simulation is argued from the points of view of its purpose, advantages, methods, and applicable technical fields. The arithmetic calculation ability expected of future supercomputers and the distributed computing methods that have recently come into the limelight are reviewed with concrete examples. (A. Hishinuma)

  16. SOFTWARE FOR SUPERCOMPUTER SKIF “ProLit-lC” and “ProNRS-lC” FOR FOUNDRY AND METALLURGICAL PRODUCTIONS

    Directory of Open Access Journals (Sweden)

    A. N. Chichko

    2008-01-01

    Full Text Available Data from modeling the technological process of mold filling on the supercomputer system SKIF by means of the computer system 'ProLIT-lc', as well as data from modeling the steel pouring process by means of 'ProNRS-lc', are presented. The influence of the number of processors of the multinuclear computer system SKIF on the acceleration and time of modeling of technological processes connected with the production of castings and slugs is shown.

  17. Research on biomass energy and environment from the past to the future: A bibliometric analysis.

    Science.gov (United States)

    Mao, Guozhu; Huang, Ning; Chen, Lu; Wang, Hongmei

    2018-09-01

    The development and utilization of biomass energy can help to change the ways of energy production and consumption and establish a sustainable energy system that can effectively promote the development of the national economy and strengthen the protection of the environment. Here, we perform a bibliometric analysis of 9514 literature reports in the Web of Science Core Collection, retrieved with the key words "Biomass energy" and "Environment*" and dating from 1998 to 2017; hot topics in the research and development of biomass energy utilization, as well as the status and development trends of biomass energy utilization and the environment, were analyzed based on content analysis and bibliometrics. The interaction between biomass energy and the environment began to become a major concern as the research progressively deepened. This work is of great significance for putting forward specific suggestions and strategies for the development and utilization of biomass energy, based on the analysis and demonstration of the relationships and interactions between biomass energy utilization and the environment. It is also useful to researchers for selecting future research topics. Copyright © 2018 Elsevier B.V. All rights reserved.

  18. Passive BCI in Operational Environments: Insights, Recent Advances, and Future Trends.

    Science.gov (United States)

    Arico, Pietro; Borghini, Gianluca; Di Flumeri, Gianluca; Sciaraffa, Nicolina; Colosimo, Alfredo; Babiloni, Fabio

    2017-07-01

    This minireview aims to highlight recent important aspects to consider and evaluate when passive brain-computer interface (pBCI) systems are developed and used in operational environments, and outlines future directions for their applications. Electroencephalography (EEG) based pBCI has become an important tool for real-time analysis of brain activity since it could potentially provide, covertly (without distracting the user from the main task) and objectively (not affected by the subjective judgment of an observer or of the user itself), information about the operator's cognitive state. Different examples of pBCI applications in operational environments and new adaptive interface solutions have been presented and described. In addition, a general overview regarding the correct use of machine learning techniques (e.g., which algorithm to use, common pitfalls to avoid, etc.) in the pBCI field has been provided. Despite recent innovations in algorithms and neurotechnology, pBCI systems are not completely ready to enter the market yet, mainly due to limitations of EEG electrode technology and of the reliability and capability of the algorithms in real settings. High-complexity and safety-critical systems (e.g., airplanes, ATM interfaces) should adapt their behaviors and functionality according to the user's actual mental state. Thus, technologies (i.e., pBCIs) able to measure the user's mental states in real time would be very useful in such "high risk" environments to enhance human-machine interaction, and so increase overall safety.

  19. Operating Environment of the Future

    National Research Council Canada - National Science Library

    Hanson, Matthew

    1997-01-01

    ...), the Smart Surgical System (SSS), and the Intelligent Virtual Patient Environment (IVPE). The project is one of several targeting reduction in mortality and morbidity of the wounded soldier through improved far-forward combat casualty care...

  20. Assessment techniques for a learning-centered curriculum: evaluation design for adventures in supercomputing

    Energy Technology Data Exchange (ETDEWEB)

    Helland, B. [Ames Lab., IA (United States)]; Summers, B.G. [Oak Ridge National Lab., TN (United States)]

    1996-09-01

    As the classroom paradigm shifts from being teacher-centered to being learner-centered, student assessments are evolving from typical paper and pencil testing to other methods of evaluation. Students should be probed for understanding, reasoning, and critical thinking abilities rather than their ability to return memorized facts. The assessment of the Department of Energy's pilot program, Adventures in Supercomputing (AiS), offers one example of assessment techniques developed for learner-centered curricula. This assessment has employed a variety of methods to collect student data. Methods of assessment used were traditional testing, performance testing, interviews, short questionnaires via email, and student presentations of projects. The data obtained from these sources have been analyzed by a professional assessment team at the Center for Children and Technology. The results have been used to improve the AiS curriculum and establish the quality of the overall AiS program. This paper will discuss the various methods of assessment used and the results.

  1. The BlueGene/L Supercomputer and Quantum ChromoDynamics

    International Nuclear Information System (INIS)

    Vranas, P; Soltz, R

    2006-01-01

    In summary our update contains: (1) Perfect speedup sustaining 19.3% of peak for the Wilson D-slash Dirac operator. (2) Measurements of the full Conjugate Gradient (CG) inverter that inverts the Dirac operator. The CG inverter contains two global sums over the entire machine. Nevertheless, our measurements retain perfect speedup scaling, demonstrating the robustness of our methods. (3) We ran on the largest BG/L system, the LLNL 64-rack BG/L supercomputer, and obtained a sustained speed of 59.1 TFlops. Furthermore, the speedup scaling of the Dirac operator and of the CG inverter is perfect all the way up to the full size of the machine, 131,072 cores (please see Figure II). The local lattice is rather small (4 x 4 x 4 x 16) while the total lattice has long been a lattice QCD vision for thermodynamic studies (a total of 128 x 128 x 256 x 32 lattice sites). This speed is about five times the speed we quoted in our submission. As we pointed out in our paper, QCD is notoriously sensitive to network and memory latencies, has a relatively high communication-to-computation ratio which cannot be overlapped in BGL in virtual node mode, and as an application is in a class of its own. The above results are thrilling to us and realize a 30-year-long dream for lattice QCD

  2. Modeling radiative transport in ICF plasmas on an IBM SP2 supercomputer

    International Nuclear Information System (INIS)

    Johansen, J.A.; MacFarlane, J.J.; Moses, G.A.

    1995-01-01

    At the University of Wisconsin-Madison the authors have integrated a collisional-radiative-equilibrium model into their CONRAD radiation-hydrodynamics code. This integrated package allows them to accurately simulate the transport processes involved in ICF plasmas; including the important effects of self-absorption of line-radiation. However, as they increase the amount of atomic structure utilized in their transport models, the computational demands increase nonlinearly. In an attempt to meet this increased computational demand, they have recently embarked on a mission to parallelize the CONRAD program. The parallel CONRAD development is being performed on an IBM SP2 supercomputer. The parallelism is based on a message passing paradigm, and is being implemented using PVM. At the present time they have determined that approximately 70% of the sequential program can be executed in parallel. Accordingly, they expect that the parallel version will yield a speedup on the order of three times that of the sequential version. This translates into only 10 hours of execution time for the parallel version, whereas the sequential version required 30 hours
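
    The quoted expectation of a roughly threefold speedup follows from Amdahl's law (a standard relation, applied here as an illustration rather than taken from the record): with a parallel fraction p of about 0.7,

        S(N) = \frac{1}{(1 - p) + p/N},
        \qquad
        \lim_{N \to \infty} S(N) = \frac{1}{1 - p} = \frac{1}{0.3} \approx 3.3,

    so even with many processors the speedup is capped near a factor of three, consistent with the reported drop from 30 hours of sequential execution to about 10 hours in parallel.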

  3. IQ, the Urban Environment, and Their Impact on Future Schizophrenia Risk in Men.

    Science.gov (United States)

    Toulopoulou, Timothea; Picchioni, Marco; Mortensen, Preben Bo; Petersen, Liselotte

    2017-09-01

    Exposure to an urban environment during early life and low IQ are 2 well-established risk factors for schizophrenia. It is not known, however, how these factors might relate to one another. Data were pooled from the North Jutland regional draft board IQ assessments and the Danish Conscription Registry for men born between 1955 and 1993. Excluding those who were followed up for less than 1 year after the assessment yielded a final cohort of 153170 men of whom 578 later developed a schizophrenia spectrum disorder. We found significant effects of having an urban birth, and also experiencing an increase in urbanicity before the age of 10 years, on adult schizophrenia risk. The effect of urban birth was independent of IQ. However, there was a significant interaction between childhood changes in urbanization in the first 10 years and IQ level on the future adult schizophrenia risk. In short, those subjects who moved to more or less urban areas before their 10th birthday lost the protective effect of IQ. When thinking about adult schizophrenia risk, the critical time window of childhood sensitivity to changes in urbanization seems to be linked to IQ. Given the prediction that by 2050, over 80% of the developed world's population will live in an urban environment, this represents a major future public health issue. © The Author 2017. Published by Oxford University Press on behalf of the Maryland Psychiatric Research Center. All rights reserved. For permissions, please email: journals.permissions@oup.com.

  4. Solving sparse linear least squares problems on some supercomputers by using large dense blocks

    DEFF Research Database (Denmark)

    Hansen, Per Christian; Ostromsky, T; Sameh, A

    1997-01-01

    Efficient subroutines for dense matrix computations have recently been developed and are available on many high-speed computers. On some computers the speed of many dense matrix operations is near to the peak performance. For sparse matrices, storage and operations can be saved by operating on and storing only nonzero elements. However, the price is a great degradation of the speed of computations on supercomputers (due to the use of indirect addresses, to the need to insert new nonzeros in the sparse storage scheme, to the lack of data locality, etc.). On many high-speed computers a dense matrix technique is preferable to sparse matrix technique when the matrices are not large, because the high computational speed compensates fully the disadvantages of using more arithmetic operations and more storage. For very large matrices the computations must be organized as a sequence of tasks in each...

  5. Accelerating Science Impact through Big Data Workflow Management and Supercomputing

    Directory of Open Access Journals (Sweden)

    De K.

    2016-01-01

    Full Text Available The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. ATLAS, one of the largest collaborations ever assembled in the history of science, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. To manage the workflow for all data processing on hundreds of data centers the PanDA (Production and Distributed Analysis) Workload Management System is used. An ambitious program to expand PanDA to all available computing resources, including opportunistic use of commercial and academic clouds and Leadership Computing Facilities (LCF), is being realized within the BigPanDA and megaPanDA projects. These projects are now exploring how PanDA might be used for managing computing jobs that run on supercomputers including OLCF's Titan and NRC-KI HPC2. The main idea is to reuse, as much as possible, existing components of the PanDA system that are already deployed on the LHC Grid for analysis of physics data. The next generation of PanDA will allow many data-intensive sciences employing a variety of computing platforms to benefit from ATLAS experience and proven tools in highly scalable processing.

  6. The genesis of neurosurgery and the evolution of the neurosurgical operative environment: part II--concepts for future development, 2003 and beyond.

    Science.gov (United States)

    Liu, Charles Y; Spicer, Mark; Apuzzo, Michael L J

    2003-01-01

    The future development of the neurosurgical operative environment is driven principally by concurrent development in science and technology. In the new millennium, these developments are taking on a Jules Verne quality, with the ability to construct and manipulate the human organism and its surroundings at the level of atoms and molecules seemingly at hand. Thus, an examination of currents in technology advancement from the neurosurgical perspective can provide insight into the evolution of the neurosurgical operative environment. In the future, the optimal design solution for the operative environment requirements of specialized neurosurgery may take the form of composites of venues that are currently mutually distinct. Advances in microfabrication technology and laser optical manipulators are expanding the scope and role of robotics, with novel opportunities for bionic integration. Assimilation of biosensor technology into the operative environment promises to provide neurosurgeons of the future with a vastly expanded set of physiological data, which will require concurrent simplification and optimization of analysis and presentation schemes to facilitate practical usefulness. Nanotechnology derivatives are shattering the maximum limits of resolution and magnification allowed by conventional microscopes. Furthermore, quantum computing and molecular electronics promise to greatly enhance computational power, allowing the emerging reality of simulation and virtual neurosurgery for rehearsal and training purposes. Progressive minimalism is evident throughout, leading ultimately to a paradigm shift as the nanoscale is approached. At the interface between the old and new technological paradigms, issues related to integration may dictate the ultimate emergence of the products of the new paradigm. Once initiated, however, history suggests that the process of change will proceed rapidly and dramatically, with the ultimate neurosurgical operative environment of the future

  7. The radioactive risk - the future of radionuclides in the environment and their impacts on health

    International Nuclear Information System (INIS)

    Amiard, Jean-Claude

    2013-01-01

    This document contains a brief presentation and the table of contents of a book in which the author proposes a large synthesis of present knowledge on main radioactive pollutants (uranium, transuranic elements, caesium, strontium, iodine, tritium, carbon radioactive isotopes, and so on), their behaviour and their future in the various physical components of the environment and living organisms (including mankind). He presents the fundamentals of nuclear physics and chemistry, as well as their applications in different fields (military, energy, medicine, industry, etc.). He also addresses the important ecological and genetic notions, and recalls the anthropogenic origins of radionuclides in the environment: principles of radio-ecology, main radioactive risks, main drawbacks of the use of nuclear energy (wastes and their management), and nuclear accidents and their impact

  8. Performance Evaluation of an Intel Haswell- and Ivy Bridge-Based Supercomputer Using Scientific and Engineering Applications

    Science.gov (United States)

    Saini, Subhash; Hood, Robert T.; Chang, Johnny; Baron, John

    2016-01-01

    We present a performance evaluation conducted on a production supercomputer of the Intel Xeon Processor E5-2680v3, a twelve-core implementation of the fourth-generation Haswell architecture, and compare it with the Intel Xeon Processor E5-2680v2, an Ivy Bridge implementation of the third-generation Sandy Bridge architecture. Several new architectural features have been incorporated in Haswell, including improvements in all levels of the memory hierarchy as well as improvements to vector instructions and power management. We critically evaluate these new features of Haswell and compare with Ivy Bridge using several low-level benchmarks, including a subset of HPCC and HPCG, and four full-scale scientific and engineering applications. We also present a model that predicts the performance of HPCG and Cart3D within 5%, and of Overflow within 10% accuracy.

  9. Computational mechanics - Advances and trends; Proceedings of the Session - Future directions of Computational Mechanics of the ASME Winter Annual Meeting, Anaheim, CA, Dec. 7-12, 1986

    Science.gov (United States)

    Noor, Ahmed K. (Editor)

    1986-01-01

    The papers contained in this volume provide an overview of the advances made in a number of aspects of computational mechanics, identify some of the anticipated industry needs in this area, discuss the opportunities provided by new hardware and parallel algorithms, and outline some of the current government programs in computational mechanics. Papers are included on advances and trends in parallel algorithms, supercomputers for engineering analysis, material modeling in nonlinear finite-element analysis, the Navier-Stokes computer, and future finite-element software systems.

  10. Planning, Implementation and Optimization of Future space Missions using an Immersive Visualization Environement (IVE) Machine

    Science.gov (United States)

    Harris, E.

    Planning, Implementation and Optimization of Future Space Missions using an Immersive Visualization Environment (IVE) Machine E. N. Harris, Lockheed Martin Space Systems, Denver, CO and George.W. Morgenthaler, U. of Colorado at Boulder History: A team of 3-D engineering visualization experts at the Lockheed Martin Space Systems Company have developed innovative virtual prototyping simulation solutions for ground processing and real-time visualization of design and planning of aerospace missions over the past 6 years. At the University of Colorado, a team of 3-D visualization experts are developing the science of 3-D visualization and immersive visualization at the newly founded BP Center for Visualization, which began operations in October, 2001. (See IAF/IAA-01-13.2.09, "The Use of 3-D Immersive Visualization Environments (IVEs) to Plan Space Missions," G. A. Dorn and G. W. Morgenthaler.) Progressing from Today's 3-D Engineering Simulations to Tomorrow's 3-D IVE Mission Planning, Simulation and Optimization Techniques: 3-D (IVEs) and visualization simulation tools can be combined for efficient planning and design engineering of future aerospace exploration and commercial missions. This technology is currently being developed and will be demonstrated by Lockheed Martin in the (IVE) at the BP Center using virtual simulation for clearance checks, collision detection, ergonomics and reach-ability analyses to develop fabrication and processing flows for spacecraft and launch vehicle ground support operations and to optimize mission architecture and vehicle design subject to realistic constraints. Demonstrations: Immediate aerospace applications to be demonstrated include developing streamlined processing flows for Reusable Space Transportation Systems and Atlas Launch Vehicle operations and Mars Polar Lander visual work instructions. Long-range goals include future international human and robotic space exploration missions such as the development of a Mars

  11. 369 TFlop/s molecular dynamics simulations on the Roadrunner general-purpose heterogeneous supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Swaminarayan, Sriram [Los Alamos National Laboratory]; Germann, Timothy C [Los Alamos National Laboratory]; Kadau, Kai [Los Alamos National Laboratory]; Fossum, Gordon C [IBM Corporation]

    2008-01-01

    The authors present timing and performance numbers for a short-range parallel molecular dynamics (MD) code, SPaSM, that has been rewritten for the heterogeneous Roadrunner supercomputer. Each Roadrunner compute node consists of two AMD Opteron dual-core microprocessors and four PowerXCell 8i enhanced Cell microprocessors, so that there are four MPI ranks per node, each with one Opteron and one Cell. The interatomic forces are computed on the Cells (each with one PPU and eight SPU cores), while the Opterons are used to direct inter-rank communication and perform I/O-heavy periodic analysis, visualization, and checkpointing tasks. The performance measured for our initial implementation of a standard Lennard-Jones pair potential benchmark reached a peak of 369 Tflop/s double-precision floating-point performance on the full Roadrunner system (27.7% of peak), corresponding to 124 MFlop/Watt/s at a price of approximately 3.69 MFlops/dollar. They demonstrate an initial target application, the jetting and ejection of material from a shocked surface.
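
    For context on the Lennard-Jones pair-potential benchmark mentioned above, a scalar reference version of the pair interaction is sketched below (illustrative only; the SPaSM benchmark itself uses heavily optimized Cell kernels, and the function names here are invented).

        /* Lennard-Jones pair energy U(r) = 4*eps*((sigma/r)^12 - (sigma/r)^6)
           and the corresponding force magnitude F(r) = -dU/dr. */
        #include <stdio.h>
        #include <math.h>

        static double lj_energy(double r, double eps, double sigma)
        {
            double sr6 = pow(sigma / r, 6.0);
            return 4.0 * eps * (sr6 * sr6 - sr6);
        }

        static double lj_force(double r, double eps, double sigma)
        {
            double sr6 = pow(sigma / r, 6.0);
            return 24.0 * eps * (2.0 * sr6 * sr6 - sr6) / r;
        }

        int main(void)
        {
            const double eps = 1.0, sigma = 1.0;
            /* at r = 2^(1/6)*sigma the force vanishes and the energy is -eps */
            const double r = pow(2.0, 1.0 / 6.0) * sigma;
            printf("U(r_min) = %f  F(r_min) = %f\n",
                   lj_energy(r, eps, sigma), lj_force(r, eps, sigma));
            return 0;
        }

    In a production MD code this pair function is evaluated inside neighbor-list loops; on Roadrunner those loops ran on the Cell processors while the Opterons handled communication and I/O, as described in the record.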

  12. Environment, energy, economy. A sustainable future

    International Nuclear Information System (INIS)

    Luise, A.; Borrello, L.; Calef, D.; Cialani, C.; Di Majo, V.; Federio, A.; Lovisolo, G.; Musmeci, F.

    1998-01-01

    This paper is organized in five parts: 1. sustainable development from global point of view; 2. global problems and international instruments; 3. sustainable management of resources in economic systems; 4. forecasting and methods: models and index; 5. future urban areas [it

  13. Identification of glacial meltwater runoff in a karstic environment and its implication for present and future water availability

    Directory of Open Access Journals (Sweden)

    D. Finger

    2013-08-01

    Full Text Available Glaciers all over the world are expected to continue to retreat due to global warming throughout the 21st century. Consequently, future seasonal water availability might become scarce once glacier areas have declined below a certain threshold, affecting future water management strategies. Particular attention should be paid to glaciers located in a karstic environment, as parts of the meltwater can be drained by underlying karst systems, making it difficult to assess water availability. In this study tracer experiments, karst modeling and glacier melt modeling are combined in order to identify flow paths in a high alpine, glacierized, karstic environment (Glacier de la Plaine Morte, Switzerland) and to investigate current and predict future downstream water availability. Flow paths through the karst underground were determined with natural and fluorescent tracers. Subsequently, geologic information and the findings from the tracer experiments were assembled in a karst model. Finally, glacier melt projections driven with a climate scenario were performed to discuss future water availability in the area surrounding the glacier. The results suggest that during late summer glacier meltwater is rapidly drained through well-developed channels at the glacier bottom to the north of the glacier, while during the low-flow season meltwater enters the karst and is drained to the south. Climate change projections with the glacier melt model reveal that by the end of the century glacier melt will be significantly reduced in the summer, jeopardizing water availability in glacier-fed karst springs.

  14. What are the factors that could influence the future of work with regard to energy systems and the built environment?

    International Nuclear Information System (INIS)

    Pratt, Andy C.

    2008-01-01

    The aim of this paper is to examine which factors in energy systems and the built environment could influence the future of work. In addition, it looks at trends in relation to corporate demands for space and its specifications, and considers what the scope is for integrating business and industry within the dwelling landscape. It seeks to consider these questions on a 50-year time horizon. The paper begins by discussing the challenge of prediction of future trends, especially in a field apparently so reliant upon technological change and innovation. Because of these problems, the paper concerns itself not with picking technologies but rather with questions about the social adoption of technologies and their applications. It highlights a spectrum of coordinating mechanisms in society that are likely to be critical in shaping the future implications of built environment forms and the consequential use of energy. The scenarios discussed arise from the intersection of two tendencies: concentration versus dispersal, and local versus globally focused growth of city regions. The challenges identified in this report are associated with 'lock-in' to past governance modes of the built environment, exacerbated by rapidly changing demand structures. Demand is not simply changing in volume but also in character. The shifts that will need to be dealt with concern a fundamental issue: how activities are coordinated in society

  15. A criticality safety analysis code using a vectorized Monte Carlo method on the HITAC S-810 supercomputer

    International Nuclear Information System (INIS)

    Morimoto, Y.; Maruyama, H.

    1987-01-01

    A vectorized Monte Carlo criticality safety analysis code has been developed on the vector supercomputer HITAC S-810. In this code, a multi-particle tracking algorithm was adopted for effective utilization of the vector processor. A flight analysis with pseudo-scattering was developed to reduce the computational time needed for flight analysis, which represents the bulk of the computational time. This new algorithm realized a speed-up by a factor of 1.5 over the conventional flight analysis. The code also adopted a multigroup cross section constants library of the Bodarenko type with 190 groups, of which 132 groups are for the fast and epithermal regions and 58 groups for the thermal region. Evaluation work showed that this code reproduces the experimental results to an accuracy of about 1% for the effective neutron multiplication factor. (author)
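
    A minimal sketch of the multi-particle tracking idea mentioned in this record is given below (illustrative only; the one-group flight kernel and all names are assumptions, not the code described above). Instead of following one history at a time, a whole bank of particles is advanced together, so the innermost loop runs over particles and contains no function calls or branches, which is what makes it attractive for a vector processor.

        /* Advance a bank of particles by one sampled flight (illustrative). */
        #include <stdio.h>
        #include <stdlib.h>
        #include <math.h>

        static void advance_bank(int n, double *x, const double *mu,
                                 const double *sigma_t, const double *xi)
        {
            for (int i = 0; i < n; ++i) {
                double s = -log(xi[i]) / sigma_t[i]; /* sampled flight distance */
                x[i] += mu[i] * s;                   /* move along direction cosine */
            }
        }

        int main(void)
        {
            enum { NBANK = 1024 };                   /* illustrative bank size */
            static double x[NBANK], mu[NBANK], sigma_t[NBANK], xi[NBANK];

            for (int i = 0; i < NBANK; ++i) {
                x[i] = 0.0;
                mu[i] = 2.0 * rand() / (double)RAND_MAX - 1.0;     /* cosine in [-1,1] */
                sigma_t[i] = 0.5;                                  /* total cross section */
                xi[i] = (rand() + 1.0) / ((double)RAND_MAX + 2.0); /* uniform in (0,1) */
            }

            advance_bank(NBANK, x, mu, sigma_t, xi);
            printf("first particle moved to x = %f\n", x[0]);
            return 0;
        }

    The pseudo-scattering mentioned in the record commonly refers to delta (Woodcock) tracking, which keeps such loops branch-free by sampling every particle against a majorant cross section and rejecting some collisions as virtual, rather than branching on region boundaries.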

  16. Benchmarking Further Single Board Computers for Building a Mini Supercomputer for Simulation of Telecommunication Systems

    Directory of Open Access Journals (Sweden)

    Gábor Lencse

    2016-01-01

    Full Text Available Parallel Discrete Event Simulation (PDES with the conservative synchronization method can be efficiently used for the performance analysis of telecommunication systems because of their good lookahead properties. For PDES, a cost effective execution platform may be built by using single board computers (SBCs, which offer relatively high computation capacity compared to their price or power consumption and especially to the space they take up. A benchmarking method is proposed and its operation is demonstrated by benchmarking ten different SBCs, namely Banana Pi, Beaglebone Black, Cubieboard2, Odroid-C1+, Odroid-U3+, Odroid-XU3 Lite, Orange Pi Plus, Radxa Rock Lite, Raspberry Pi Model B+, and Raspberry Pi 2 Model B+. Their benchmarking results are compared to find out which one should be used for building a mini supercomputer for parallel discrete-event simulation of telecommunication systems. The SBCs are also used to build a heterogeneous cluster and the performance of the cluster is tested, too.
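
    A hedged sketch of the kind of benchmarking harness such a comparison needs: run an identical CPU-bound kernel on every board and report a throughput score. The kernel below is a stand-in; the cited paper benchmarks the boards with an actual discrete-event simulation workload.

```python
# Sketch of a benchmark harness in the spirit of comparing single board
# computers: run the same fixed CPU-bound kernel on each board and report a
# throughput score. The kernel below is a stand-in, not the PDES workload
# used in the cited paper.
import time

def kernel(n=200_000):
    """Toy event-processing loop standing in for a simulation workload."""
    acc = 0
    for i in range(1, n + 1):
        acc = (acc + i * i) % 1_000_003
    return acc

def benchmark(repetitions=5):
    best = float("inf")
    for _ in range(repetitions):
        t0 = time.perf_counter()
        kernel()
        best = min(best, time.perf_counter() - t0)
    return 1.0 / best          # higher score = faster board

if __name__ == "__main__":
    print(f"score on this host: {benchmark():.2f}")
```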

  17. Securing a better future for all: Nuclear techniques for global development and environmental protection. NA factsheet on environment laboratories: Protecting the environment

    International Nuclear Information System (INIS)

    2012-01-01

    According to the Millennium Development Goals, managing the environment is considered an integral part of the global development process. The main purpose of the IAEA's environment laboratories is to provide Member States with reliable information on environmental issues and facilitate decision making on protection of the environment. An increasingly important feature of this work is to assess the impact of climate change on environmental sustainability and natural resources. The IAEA's environment laboratories use nuclear techniques, radionuclides, isotopic tracers and stable isotopes to gain a better understanding of the various marine processes, including locating the sources of pollutants and their fate, their transport pathways and their ultimate accumulation in sediments. Radioisotopes are also used to study bioaccumulation in organisms and the food chain, as well as to track signals of climate change throughout history. Natural and artificial radionuclides are used to track ocean currents in key regions. They are also used to validate models designed to predict the future impact of climate change and ocean acidification. The laboratories study the fate and impact of contamination on a variety of ecosystems in order to provide effective preventative diagnostic and remediation strategies. They enhance the capability of Member States to use nuclear techniques to understand and assess changes in their own terrestrial and atmospheric environments, and adopt suitable and sustainable remediation measures when needed. Since 1995, the IAEA environment laboratories have coordinated the international network of Analytical Laboratories for the Measurement of Environmental Radioactivity, providing accurate analysis in the event of an accident or an intentional release of radioactivity. In addition, the laboratories work alongside other organizations, such as UNESCO, the IOC, UNEP and the EC. The laboratories collaborate with Member States through direct involvement with

  18. Fast and Accurate Simulation of the Cray XMT Multithreaded Supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Villa, Oreste; Tumeo, Antonino; Secchi, Simone; Manzano Franco, Joseph B.

    2012-12-31

    Irregular applications, such as data mining and analysis or graph-based computations, show unpredictable memory/network access patterns and control structures. Highly multithreaded architectures with large processor counts, like the Cray MTA-1, MTA-2 and XMT, appear to address their requirements better than commodity clusters. However, the research on highly multithreaded systems is currently limited by the lack of adequate architectural simulation infrastructures due to issues such as size of the machines, memory footprint, simulation speed, accuracy and customization. At the same time, Shared-memory MultiProcessors (SMPs) with multi-core processors have become an attractive platform to simulate large scale machines. In this paper, we introduce a cycle-level simulator of the highly multithreaded Cray XMT supercomputer. The simulator runs unmodified XMT applications. We discuss how we tackled the challenges posed by its development, detailing the techniques introduced to make the simulation as fast as possible while maintaining a high accuracy. By mapping XMT processors (ThreadStorm with 128 hardware threads) to host computing cores, the simulation speed remains constant as the number of simulated processors increases, up to the number of available host cores. The simulator supports zero-overhead switching among different accuracy levels at run-time and includes a network model that takes into account contention. On a modern 48-core SMP host, our infrastructure simulates a large set of irregular applications 500 to 2000 times slower than real time when compared to a 128-processor XMT, while remaining within 10% of accuracy. Emulation is only from 25 to 200 times slower than real time.

  19. PFLOTRAN: Reactive Flow & Transport Code for Use on Laptops to Leadership-Class Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Hammond, Glenn E.; Lichtner, Peter C.; Lu, Chuan; Mills, Richard T.

    2012-04-18

    PFLOTRAN, a next-generation reactive flow and transport code for modeling subsurface processes, has been designed from the ground up to run efficiently on machines ranging from leadership-class supercomputers to laptops. Based on an object-oriented design, the code is easily extensible to incorporate additional processes. It can interface seamlessly with Fortran 9X, C and C++ codes. Domain decomposition parallelism is employed, with the PETSc parallel framework used to manage parallel solvers, data structures and communication. Features of the code include a modular input file, implementation of high-performance I/O using parallel HDF5, ability to perform multiple realization simulations with multiple processors per realization in a seamless manner, and multiple modes for multiphase flow and multicomponent geochemical transport. Chemical reactions currently implemented in the code include homogeneous aqueous complexing reactions and heterogeneous mineral precipitation/dissolution, ion exchange, surface complexation and a multirate kinetic sorption model. PFLOTRAN has demonstrated petascale performance using 2^17 processor cores with over 2 billion degrees of freedom. Accomplishments achieved to date include applications to the Hanford 300 Area and modeling CO2 sequestration in deep geologic formations.
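
    The reactive transport that PFLOTRAN couples can be pictured, in heavily simplified form, by an operator-splitting step on a 1-D grid: explicit upwind advection followed by first-order decay. This is only a toy of the physics; PFLOTRAN itself uses implicit PETSc-based solvers, multiphase flow and full geochemistry.

```python
# Toy operator-splitting step for reactive transport on a 1-D grid:
# upwind advection followed by first-order decay. This only illustrates the
# kind of physics PFLOTRAN couples; the real code uses implicit PETSc-based
# solvers, multiphase flow and full geochemistry.
import numpy as np

nx, dx, dt = 200, 1.0, 0.5       # grid cells, cell size (m), time step (d)
v, k = 1.0, 0.05                 # pore velocity (m/d), decay rate (1/d)
c = np.zeros(nx)
c[:10] = 1.0                     # initial solute pulse near the inlet

def step(c):
    # explicit upwind advection (CFL = v*dt/dx = 0.5, stable)
    adv = c.copy()
    adv[1:] -= v * dt / dx * (c[1:] - c[:-1])
    adv[0] -= v * dt / dx * c[0]          # clean-water inflow boundary
    # first-order reaction handled in a separate (split) sub-step
    return adv * np.exp(-k * dt)

for _ in range(100):
    c = step(c)

print(f"peak concentration after 50 days: {c.max():.3f} at cell {c.argmax()}")
```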

  20. Energy-water-environment nexus underpinning future desalination sustainability

    KAUST Repository

    Shahzad, Muhammad Wakil; Burhan, Muhammad; Ang, Li; Ng, Kim Choon

    2017-01-01

    The energy-water-environment nexus is very important for attaining the COP21 goal of keeping the global temperature increase below 2°C, but unfortunately two thirds of the permissible CO2 emission budget has already been used and the remainder will be exhausted by 2050. A

  1. High-resolution RCMs as pioneers for future GCMs

    Science.gov (United States)

    Schar, C.; Ban, N.; Arteaga, A.; Charpilloz, C.; Di Girolamo, S.; Fuhrer, O.; Hoefler, T.; Leutwyler, D.; Lüthi, D.; Piaget, N.; Ruedisuehli, S.; Schlemmer, L.; Schulthess, T. C.; Wernli, H.

    2017-12-01

    Currently large efforts are underway to refine the horizontal resolution of global and regional climate models to O(1 km), with the intent to represent convective clouds explicitly rather than using semi-empirical parameterizations. This refinement will move the governing equations closer to first principles and is expected to reduce the uncertainties of climate models. High resolution is particularly attractive in order to better represent critical cloud feedback processes (e.g. related to global climate sensitivity and extratropical summer convection) and extreme events (such as heavy precipitation events, floods, and hurricanes). The presentation will be illustrated using decade-long simulations at 2 km horizontal grid spacing, some of these covering the European continent on a computational mesh with 1536x1536x60 grid points. To accomplish such simulations, use is made of emerging heterogeneous supercomputing architectures, using a version of the COSMO limited-area weather and climate model that is able to run entirely on GPUs. Results show that kilometer-scale resolution dramatically improves the simulation of precipitation in terms of the diurnal cycle and short-term extremes. The modeling framework is used to address changes of precipitation scaling with climate change. It is argued that already today, modern supercomputers would in principle enable global atmospheric convection-resolving climate simulations, provided appropriately refactored codes were available, and provided solutions were found to cope with the rapidly growing output volume. A discussion will be provided of key challenges affecting the design of future high-resolution climate models. It is suggested that km-scale RCMs should be exploited to pioneer this terrain, at a time when GCMs are not yet available at such resolutions. Areas of interest include the development of new parameterization schemes adequate for km-scale resolution, the exploration of new validation methodologies and data
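
    A back-of-envelope estimate shows why output volume becomes a central concern at the stated 1536x1536x60 mesh; the number of output fields, the output interval and the 4-byte values below are assumptions for illustration, not the COSMO configuration.

```python
# Back-of-envelope estimate of the data volume behind a 1536 x 1536 x 60
# km-scale mesh. The number of output fields, output interval and 4-byte
# values are assumptions for illustration, not COSMO's actual configuration.
nx, ny, nz = 1536, 1536, 60
points = nx * ny * nz                          # ~1.4e8 grid points
bytes_per_value = 4                            # single precision (assumed)
fields_3d = 10                                 # 3-D fields written out (assumed)
outputs_per_day = 24                           # hourly output (assumed)

gb_per_output = points * bytes_per_value * fields_3d / 1e9
tb_per_decade = gb_per_output * outputs_per_day * 365 * 10 / 1e3

print(f"grid points: {points:.2e}")
print(f"one output step: {gb_per_output:.1f} GB")
print(f"decade-long simulation: {tb_per_decade:.0f} TB")
```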

  2. Information Environment is an Integral Element of Informational Space in the Process of Professional Development of Future Teacher of Physical Culture

    Directory of Open Access Journals (Sweden)

    Yuri V. Dragnev

    2012-04-01

    Full Text Available The article examines information environment as an integral element of information space in the process of professional development of future teacher of physical culture, notes that the strategic objective of the system of higher education is training of competent future teacher of physical culture in the field of information technologies, when information competence and information culture are major components of professionalism in modern information-oriented society

  3. Robotizing workforce in future built environments

    NARCIS (Netherlands)

    Maas, G.J.; Gassel, van F.J.M.; Lee, Junbok; Han, Chang-Soo

    2011-01-01

    The aim of this paper is to define challenges for Automation and Robotics in construction (A+R) to enhance client and social value. Construction contributes to a positive living environment for society and is the largest sector of Europe’s economy with a size of around 2,500 billion Euros. Ten

  4. The Future Security Environment: Why the U.S. Army Must Differentiate and Grow Millennial Officer Talent

    Science.gov (United States)

    2015-09-01

    Report documentation fragments only; no abstract was extracted. Recoverable details: cited reference "... and M. Epstein, 'Millennials and the World of Work: An Organizational and Management Perspective,' Journal of Business and Psychology, Vol. 25, 2010"; publications available at http://www.carlisle.army.mil.

  5. Predicting the future impact of droughts on ungulate populations in arid and semi-arid environments.

    Directory of Open Access Journals (Sweden)

    Clare Duncan

    Full Text Available Droughts can have a severe impact on the dynamics of animal populations, particularly in semi-arid and arid environments where herbivore populations are strongly limited by resource availability. Increased drought intensity under projected climate change scenarios can be expected to reduce the viability of such populations, yet this impact has seldom been quantified. In this study, we aim to fill this gap and assess how the predicted worsening of droughts over the 21st century is likely to impact the population dynamics of twelve ungulate species occurring in arid and semi-arid habitats. Our results provide support for the hypotheses that more sedentary, grazing and mixed feeding species will be put at high risk from future increases in drought intensity, suggesting that management intervention under these conditions should be targeted towards species possessing these traits. Predictive population models for all sedentary, grazing or mixed feeding species in our study show that their probability of extinction dramatically increases under future emissions scenarios, and that this extinction risk is greater for smaller populations than larger ones. Our study highlights the importance of quantifying the current and future impacts of increasing extreme natural events on populations and species in order to improve our ability to mitigate predicted biodiversity loss under climate change.
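
    A minimal stochastic projection illustrates why extinction risk rises with drought frequency and falls with population size; the growth rates, drought penalty and quasi-extinction threshold below are invented for illustration, whereas the cited study fits species-specific models.

```python
# Minimal stochastic projection of an ungulate population exposed to droughts.
# Growth rates, drought mortality and thresholds are invented for illustration;
# the cited study fits species-specific models instead.
import numpy as np

rng = np.random.default_rng(0)

def extinction_probability(n0, p_drought, years=100, runs=2000):
    extinct = 0
    for _ in range(runs):
        n = float(n0)
        for _ in range(years):
            growth = rng.normal(1.05, 0.10)        # good-year multiplier
            if rng.random() < p_drought:
                growth *= 0.6                       # drought-year survival penalty
            n *= growth
            n = min(n, 10 * n0)                     # crude carrying capacity
            if n < 20:                              # quasi-extinction threshold
                extinct += 1
                break
    return extinct / runs

for n0 in (100, 500, 2000):
    print(n0,
          round(extinction_probability(n0, p_drought=0.1), 3),
          round(extinction_probability(n0, p_drought=0.3), 3))
```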

  6. China Debates the Future Security Environment

    Science.gov (United States)

    2000-01-01

    Bike, Zhongguo da qushi (China megatrends) (Beijing: Hualing chubanshe, 1996). For warnings on the need to conceal increasing national power, see Ma...became Japan’s prime minister in 1957. Troop 731, which had engaged in biological warfare experiments, was exempted from trial. In March 1950, all...gongye chubanshe, 1998. Lu Hui. He hua sheng wuqi de lishi yu weilai (The history and future of nuclear, chemical, and biological weapons). Beijing

  7. Radioactivity in the aquatic environment. A review of UK research 1994-1997 and recommendations for future work

    International Nuclear Information System (INIS)

    1998-07-01

    The national Radioactivity Research and Environmental Monitoring Committee (RADREM) provides a forum for liaison on UK research and monitoring in the radioactive substances and radioactive waste management fields. The committee aims to ensure that there is no unnecessary overlap between, or significant omission from, the research programmes of the various parts of Government, the regulatory bodies or industry. This report has been produced by the Aquatic Environment Sub-Committee (AESC) of RADREM. AESC is responsible for providing RADREM with scientific advice in the field of research relating to radionuclides in the aquatic environment, for reporting on the progress of research in this field and on future research requirements. The objectives of this report are presented in Section 2, and the membership of AESC given in Section 3. This report describes a review of research undertaken in the field of radioactivity in aquatic systems over the last three years (Section 4). The review updates previous reviews, the most recent of which was in 1993 (AESC, 1994). Future research requirements have been identified by AESC, considering past work and work in progress, and are presented in Section 5. Specific research requirements are discussed in Section 5, whilst Section 6 summarises the main areas where future research is identified as a priority. These areas are as follows: the movement and uptake of 99Tc and 14C in aquatic systems and biota; geochemical processes; off-shore sediments; non-equilibrium systems; radiation exposure during civil engineering works; further work on movement of radionuclides in salt marshes; development and validation of models. The specific objectives of this report are as follows: 1. To provide a summary of research undertaken in this field over the last three years. 2. To identify future research requirements. 3. To attach priorities to the future research requirements. It should be noted that the purpose of the report is to identify

  8. Study on the climate system and mass transport by a climate model

    International Nuclear Information System (INIS)

    Numaguti, A.; Sugata, S.; Takahashi, M.; Nakajima, T.; Sumi, A.

    1997-01-01

    The Center for Global Environmental Research (CGER), an organ of the National Institute for Environmental Studies of the Environment Agency of Japan, was established in October 1990 to contribute broadly to the scientific understanding of global change, and to the elucidation of and solution for our pressing environmental problems. CGER conducts environmental research from an interdisciplinary, multiagency, and international perspective, provides research support facilities such as a supercomputer and databases, and offers its own data from long-term monitoring of the global environment. In March 1992, CGER installed a supercomputer system (NEC SX-3, Model 14) to facilitate research on global change. The system is open to environmental researchers worldwide. Proposed research programs are evaluated by the Supercomputer Steering Committee which consists of leading scientists in climate modeling, atmospheric chemistry, oceanic circulation, and computer science. After project approval, authorization for system usage is provided. In 1995 and 1996, several research proposals were designated as priority research and allocated larger shares of computer resources. The CGER supercomputer monograph report Vol. 3 is a report of priority research conducted on CGER's supercomputer. The report covers the description of the CCSR-NIES atmospheric general circulation model, Lagrangian general circulation based on the time-scale of particle motion, and the ability of the CCSR-NIES atmospheric general circulation model to represent the stratosphere. The results obtained from these three studies are described in three chapters. We hope this report provides you with useful information on the global environmental research conducted on our supercomputer

  9. Gene x environment interactions in conduct disorder: Implications for future treatments.

    Science.gov (United States)

    Holz, Nathalie E; Zohsel, Katrin; Laucht, Manfred; Banaschewski, Tobias; Hohmann, Sarah; Brandeis, Daniel

    2016-08-18

    Conduct disorder (CD) causes high financial and social costs, not only in affected families but across society, with only moderately effective treatments so far. There is consensus that CD is likely caused by the convergence of many different factors, including genetic and adverse environmental factors. There is ample evidence of gene-environment interactions in the etiology of CD on a behavioral level regarding genetically sensitive designs and candidate gene-driven approaches, most prominently and consistently represented by MAOA. However, conclusive indications of causal GxE patterns are largely lacking. Inconsistent findings, lack of replication and methodological limitations remain a major challenge. Likewise, research addressing the identification of affected brain pathways which reflect plausible biological mechanisms underlying GxE is still very sparse. Future research will have to take multilevel approaches into account, which combine genetic, environmental, epigenetic, personality, neural and hormone perspectives. A better understanding of relevant GxE patterns in the etiology of CD might enable researchers to design customized treatment options (e.g. biofeedback interventions) for specific subgroups of patients. Copyright © 2016 Elsevier Ltd. All rights reserved.

  10. New computing systems, future computing environment, and their implications on structural analysis and design

    Science.gov (United States)

    Noor, Ahmed K.; Housner, Jerrold M.

    1993-01-01

    Recent advances in computer technology that are likely to impact structural analysis and design of flight vehicles are reviewed. A brief summary is given of the advances in microelectronics, networking technologies, and in the user-interface hardware and software. The major features of new and projected computing systems, including high performance computers, parallel processing machines, and small systems, are described. Advances in programming environments, numerical algorithms, and computational strategies for new computing systems are reviewed. The impact of the advances in computer technology on structural analysis and the design of flight vehicles is described. A scenario for future computing paradigms is presented, and the near-term needs in the computational structures area are outlined.

  11. Japan Environment and Children's Study: backgrounds, activities, and future directions in global perspectives.

    Science.gov (United States)

    Ishitsuka, Kazue; Nakayama, Shoji F; Kishi, Reiko; Mori, Chisato; Yamagata, Zentaro; Ohya, Yukihiro; Kawamoto, Toshihiro; Kamijima, Michihiro

    2017-07-14

    There is worldwide concern about the effects of environmental factors on children's health and development. The Miami Declaration was signed at the G8 Environment Ministers Meeting in 1997 to promote children's environmental health research. The following ministerial meetings continued to emphasize the need to foster children's research. In response to such a worldwide movement, the Ministry of the Environment, Japan (MOE), launched a nationwide birth cohort study with 100,000 pairs of mothers and children, namely, the Japan Environment and Children's Study (JECS), in 2010. Other countries have also started or planned large-scale studies focusing on children's environmental health issues. The MOE initiated dialogue among those countries and groups to discuss and share the various processes, protocols, knowledge, and techniques for future harmonization and data pooling among such studies. The MOE formed the JECS International Liaison Committee in 2011, which plays a primary role in promoting the international collaboration between JECS and the other children's environmental health research projects and partnership with other countries. This review article aims to present activities that JECS has developed. As one of the committee's activities, a workshop and four international symposia were held between 2011 and 2015 in Japan. In these conferences, international researchers and government officials, including those from the World Health Organization, have made presentations on their own birth cohort studies and health policies. In 2015, the MOE hosted the International Advisory Board meeting and received constructive comments and recommendations from the board. JECS is a founding member of the Environment and Child Health International Birth Cohort Group, and has discussed harmonization of exposure and outcome measurements with member parties, which will make it possible to compare and further combine data from different studies, considering the diversity in the

  12. Evaluating the networking characteristics of the Cray XC-40 Intel Knights Landing-based Cori supercomputer at NERSC

    Energy Technology Data Exchange (ETDEWEB)

    Doerfler, Douglas [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Austin, Brian [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Cook, Brandon [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Deslippe, Jack [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Kandalla, Krishna [Cray Inc, Bloomington, MN (United States); Mendygral, Peter [Cray Inc, Bloomington, MN (United States)

    2017-09-12

    There are many potential issues associated with deploying the Intel Xeon Phi™ (code named Knights Landing [KNL]) manycore processor in a large-scale supercomputer. One in particular is the ability to fully utilize the high-speed communications network, given that the serial performance of a Xeon Phi core is a fraction of a Xeon core. In this paper, we take a look at the trade-offs associated with allocating enough cores to fully utilize the Aries high-speed network versus cores dedicated to computation, e.g., the trade-off between MPI and OpenMP. In addition, we evaluate new features of Cray MPI in support of KNL, such as internode optimizations. We also evaluate one-sided programming models such as Unified Parallel C. We quantify the impact of the above trade-offs and features using a suite of National Energy Research Scientific Computing Center applications.
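
    The core-allocation trade-off can be pictured with a toy analytic model: cores assigned to drive the Aries network are cores removed from computation. All constants below are assumptions for illustration (including the 68-core KNL part), not measurements from Cori; the paper quantifies the trade-off with real NERSC applications.

```python
# Toy model of the KNL trade-off discussed above: cores spent driving the
# network (MPI progress) are cores not spent computing. All constants are
# assumptions for illustration, not measurements from Cori.
KNL_CORES = 68
COMPUTE_WORK = 100.0      # seconds of work if one core did everything
COMM_VOLUME = 20.0        # seconds of communication if one core drove Aries

def step_time(comm_cores):
    compute_cores = KNL_CORES - comm_cores
    if compute_cores <= 0 or comm_cores <= 0:
        return float("inf")
    compute = COMPUTE_WORK / compute_cores
    comm = COMM_VOLUME / comm_cores          # network driving parallelises too
    return max(compute, comm)                # assume comm/compute overlap

best = min(range(1, KNL_CORES), key=step_time)
print(f"best split: {best} communication cores, "
      f"{KNL_CORES - best} compute cores, "
      f"step time {step_time(best):.2f} s")
```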

  13. Clinical experimentation with aerosol antibiotics: current and future methods of administration

    Directory of Open Access Journals (Sweden)

    Zarogoulidis P

    2013-10-01

    Full Text Available Paul Zarogoulidis,1,2 Ioannis Kioumis,1 Konstantinos Porpodis,1 Dionysios Spyratos,1 Kosmas Tsakiridis,3 Haidong Huang,4 Qiang Li,4 J Francis Turner,5 Robert Browning,6 Wolfgang Hohenforst-Schmidt,7 Konstantinos Zarogoulidis1 1Pulmonary Department, G Papanikolaou General Hospital, Aristotle University of Thessaloniki, Thessaloniki, Greece; 2Department of Interventional Pneumology, Ruhrlandklinik, West German Lung Center, University Hospital, University Duisburg-Essen, Essen, Germany; 3Cardiothoracic Surgery Department, Saint Luke Private Hospital of Health Excellence, Thessaloniki, Greece; 4Department of Respiratory Diseases, Shanghai Hospital/First Affiliated Hospital of the Second Military Medical University, Shanghai, People’s Republic of China; 5Pulmonary Medicine, University of Nevada School of Medicine, National Supercomputing Center for Energy and the Environment University of Nevada, Las Vegas, NV, USA; 6Pulmonary and Critical Care Medicine, Interventional Pulmonology, National Naval Medical Center, Walter Reed Army Medical Center, Bethesda, MD, USA; 7II Medical Department, Regional Clinic of Coburg, University of Wuerzburg, Coburg, Germany Abstract: Currently almost all antibiotics are administered by the intravenous route. Since several systems and situations require more efficient methods of administration, investigation and experimentation in drug design has produced local treatment modalities. Administration of antibiotics in aerosol form is one of the treatment methods of increasing interest. As the field of drug nanotechnology grows, new molecules have been produced and combined with aerosol production systems. In the current review, we discuss the efficiency of aerosol antibiotic studies along with aerosol production systems. The different parts of the aerosol antibiotic methodology are presented. Additionally, information regarding the drug molecules used is presented and future applications of this method are discussed

  14. PSYCHOLOGICAL STRATEGY OF COOPERATION, MOTIVATIONAL, INFORMATION AND TECHNOLOGICAL COMPONENTS OF FUTURE HUMANITARIAN TEACHER READINESS FOR PROFESSIONAL ACTIVITY IN POLYSUBJECTIVE LEARNING ENVIRONMENT

    Directory of Open Access Journals (Sweden)

    Y. Spivakovska

    2014-04-01

    Full Text Available The redefinition of modern information and communication technologies (ICT) from teaching aids to subjects of the teaching process, and the continuous growth of their subjectivity, demand appropriate knowledge and skills, an appropriate attitude to the didactic capabilities of ICT, and the ability to cooperate with them and to build pupils' learning activity aimed at forming and developing self-organization and self-development skills and at promoting their subjective position in education; together these constitute the readiness of the modern teacher to organize effective professional activity in a polysubjective learning environment (PLE). The new tasks of the humanitarian teacher, related to the selection and design of educational content as well as the modeling of the learning process under the conditions of choice among virtualized PLE alternatives, impose special requirements on professionally important personality qualities of the teacher, and above all on his or her readiness to carry out effective professional work in such conditions. This article substantiates the essence of the concept of future humanitarian teacher readiness for professional activity in a polysubjective educational environment. The structure of this readiness is analyzed. The psychological strategy of cooperation and the reflective, motivational and informational components are substantiated and characterized as components of the future humanitarian teacher's readiness for professional activity in a polysubjective educational environment.

  15. Plastics, the environment and human health: current consensus and future trends.

    Science.gov (United States)

    Thompson, Richard C; Moore, Charles J; vom Saal, Frederick S; Swan, Shanna H

    2009-07-27

    Plastics have transformed everyday life; usage is increasing and annual production is likely to exceed 300 million tonnes by 2010. In this concluding paper to the Theme Issue on Plastics, the Environment and Human Health, we synthesize current understanding of the benefits and concerns surrounding the use of plastics and look to future priorities, challenges and opportunities. It is evident that plastics bring many societal benefits and offer future technological and medical advances. However, concerns about usage and disposal are diverse and include accumulation of waste in landfills and in natural habitats, physical problems for wildlife resulting from ingestion or entanglement in plastic, the leaching of chemicals from plastic products and the potential for plastics to transfer chemicals to wildlife and humans. However, perhaps the most important overriding concern, which is implicit throughout this volume, is that our current usage is not sustainable. Around 4 per cent of world oil production is used as a feedstock to make plastics and a similar amount is used as energy in the process. Yet over a third of current production is used to make items of packaging, which are then rapidly discarded. Given our declining reserves of fossil fuels, and finite capacity for disposal of waste to landfill, this linear use of hydrocarbons, via packaging and other short-lived applications of plastic, is simply not sustainable. There are solutions, including material reduction, design for end-of-life recyclability, increased recycling capacity, development of bio-based feedstocks, strategies to reduce littering, the application of green chemistry life-cycle analyses and revised risk assessment approaches. Such measures will be most effective through the combined actions of the public, industry, scientists and policymakers. There is some urgency, as the quantity of plastics produced in the first 10 years of the current century is likely to approach the quantity produced in the

  17. Protecting the environment for future generations. Principles and actors in international environmental law

    Energy Technology Data Exchange (ETDEWEB)

    Proelss, Alexander (ed.) [Trier Univ. (Germany). Inst. of Environmental and Technology Law

    2017-08-01

    This book compiles the written versions of presentations held on the occasion of an international symposium entitled "Protecting the Environment for Future Generations - Principles and Actors in International Environmental Law". The symposium was organized by the Institute of Environmental and Technology Law of Trier University (IUTR) on the basis of a cooperation scheme with the Environmental Law Institute of the Johannes Kepler University Linz, Austria, and took place in Trier on 29-30 October 2015. It brought together a distinguished group of experts from Europe and abroad to address current issues of international and European environmental law. The main objective of the symposium was to take stock of the actors and principles of international and European environmental law, and to analyze how and to what extent these principles have been implemented on the supranational and domestic legal levels.

  18. Using the LANSCE irradiation facility to predict the number of fatal soft errors in one of the world's fastest supercomputers

    International Nuclear Information System (INIS)

    Michalak, S.E.; Harris, K.W.; Hengartner, N.W.; Takala, B.E.; Wender, S.A.

    2005-01-01

    Los Alamos National Laboratory (LANL) is home to the Los Alamos Neutron Science Center (LANSCE). LANSCE is a unique facility because its neutron spectrum closely mimics the neutron spectrum at terrestrial and aircraft altitudes, but is many times more intense. Thus, LANSCE provides an ideal setting for accelerated testing of semiconductor and other devices that are susceptible to cosmic ray induced neutrons. Many industrial companies use LANSCE to estimate device susceptibility to cosmic ray induced neutrons, and it has also been used to test parts from one of LANL's supercomputers, the ASC (Advanced Simulation and Computing Program) Q. This paper discusses our use of the LANSCE facility to study components in Q including a comparison with failure data from Q

  19. Transportation Energy Futures Series: Effects of the Built Environment on Transportation: Energy Use, Greenhouse Gas Emissions, and Other Factors

    Energy Technology Data Exchange (ETDEWEB)

    Porter, C. D.; Brown, A.; Dunphy, R. T.; Vimmerstedt, L.

    2013-03-01

    Planning initiatives in many regions and communities aim to reduce transportation energy use, decrease emissions, and achieve related environmental benefits by changing land use. This report reviews and summarizes findings from existing literature on the relationship between the built environment and transportation energy use and greenhouse gas emissions, identifying results trends as well as potential future actions. The indirect influence of federal transportation and housing policies, as well as the direct impact of municipal regulation on land use are examined for their effect on transportation patterns and energy use. Special attention is given to the 'four D' factors of density, diversity, design and accessibility. The report concludes that policy-driven changes to the built environment could reduce transportation energy and GHG emissions from less than 1% to as much as 10% by 2050, the equivalent of 16%-18% of present-day urban light-duty-vehicle travel. This is one of a series of reports produced as a result of the Transportation Energy Futures (TEF) project, a Department of Energy-sponsored multi-agency project initiated to pinpoint underexplored strategies for abating GHGs and reducing petroleum dependence related to transportation.

  1. Predicting multiprocessing efficiency on the Cray multiprocessors in a (CTSS) time-sharing environment/application to a 3-D magnetohydrodynamics code

    International Nuclear Information System (INIS)

    Mirin, A.A.

    1988-01-01

    A formula is derived for predicting multiprocessing efficiency on Cray supercomputers equipped with the Cray Time-Sharing System (CTSS). The model is applicable to an intensive time-sharing environment. The actual efficiency estimate depends on three factors: the code size, task length, and job mix. The implementation of multitasking in a three-dimensional plasma magnetohydrodynamics (MHD) code, TEMCO, is discussed. TEMCO solves the primitive one-fluid compressible MHD equations and includes resistive and Hall effects in Ohm's law. Virtually all segments of the main time-integration loop are multitasked. The multiprocessing efficiency model is applied to TEMCO. Excellent agreement is obtained between the actual multiprocessing efficiency and the theoretical prediction
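
    The abstract does not reproduce the derived formula, so the sketch below is only a toy stand-in: it combines an Amdahl-type speedup, a per-task overhead relative to task length, and the chance that extra processors are free in a busy job mix. The functional forms and constants are assumptions, not Mirin's model.

```python
# Toy stand-in for a multiprocessing-efficiency estimate in a time-shared
# setting. The real formula in the paper depends on code size, task length
# and job mix; the functional forms below are assumptions for illustration.
def efficiency(parallel_fraction, task_length, overhead=0.005, busy=0.5, procs=4):
    """Estimated multitasking efficiency on `procs` processors.

    parallel_fraction -- fraction of the run that is multitasked (Amdahl-like)
    task_length       -- average task length in seconds
    overhead          -- per-task spawn/sync cost in seconds (assumed)
    busy              -- probability a needed processor is held by other jobs
    """
    # Amdahl speedup, degraded by per-task overhead ...
    speedup = 1.0 / ((1.0 - parallel_fraction)
                     + parallel_fraction / procs * (1.0 + overhead / task_length))
    # ... and by the chance that the extra processors are actually free.
    available = (1.0 - busy) ** (procs - 1)
    effective = 1.0 + (speedup - 1.0) * available
    return effective / procs

for load in (0.0, 0.3, 0.6):
    print(f"busy={load:.1f}  efficiency={efficiency(0.95, 0.5, busy=load):.2f}")
```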

  2. Comparative assessment for future prediction of urban water environment using WEAP model: A case study of Kathmandu, Manila and Jakarta

    Science.gov (United States)

    Kumar, Pankaj; Yoshifumi, Masago; Ammar, Rafieiemam; Mishra, Binaya; Fukushi, Ken

    2017-04-01

    Uncontrolled release of pollutants, increasingly extreme weather conditions, rapid urbanization and poor governance pose a serious threat to sustainable water resource management in developing urban spaces. Considering that half of the world's mega-cities are in Asia and the Pacific, where 1.7 billion people do not have access to improved water and sanitation, water security through proper management is both an increasing concern and a critical need. This research work gives a brief glimpse of the predicted future water environment in the Bagmati, Pasig and Ciliwung rivers in three different cities, viz. Kathmandu, Manila and Jakarta respectively. A hydrological model is used here to foresee the collective impacts of rapid population growth due to urbanization, as well as climate change, on unmet demand and water quality in the near future, by 2030. All three rivers are major sources of water for different uses, viz. domestic, industrial, agricultural and recreational, but uncontrolled withdrawal and sewage disposal have caused deterioration of the water environment in the recent past. The Water Evaluation and Planning (WEAP) model was used to model future river water quality scenarios using four indicators, i.e. Dissolved Oxygen (DO), Biochemical Oxygen Demand (BOD), Chemical Oxygen Demand (COD) and Nitrate (NO3). The simulated water quality as well as unmet demand for the year 2030, when compared with the reference year, clearly indicates that not only does water quality deteriorate but unmet demand also increases over time. This also suggests that current initiatives and policies for water resource management are not sufficient, and hence immediate and inclusive action through transdisciplinary research is needed.
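
    The DO/BOD behaviour that such scenarios track can be illustrated with the classic Streeter-Phelps oxygen-sag equation; the rate constants and loads below are illustrative, not calibrated values for the Bagmati, Pasig or Ciliwung, and WEAP's internal water-quality routines are more general.

```python
# Classic Streeter-Phelps oxygen-sag calculation, used here only to illustrate
# the DO/BOD indicators that the WEAP scenarios track; coefficients and loads
# are illustrative, not calibrated values for the Bagmati, Pasig or Ciliwung.
import numpy as np

k_d, k_a = 0.35, 0.60        # BOD decay and reaeration rates (1/day), assumed
L0, D0 = 25.0, 1.0           # initial BOD (mg/L) and initial DO deficit (mg/L)
DO_SAT = 8.0                 # saturation DO (mg/L), assumed

t = np.linspace(0.0, 10.0, 101)                     # days of travel downstream
deficit = ((k_d * L0 / (k_a - k_d)) * (np.exp(-k_d * t) - np.exp(-k_a * t))
           + D0 * np.exp(-k_a * t))
do = DO_SAT - deficit

worst = t[np.argmax(deficit)]
print(f"minimum DO {do.min():.2f} mg/L after {worst:.1f} days of travel")
```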

  3. Parallel supercomputing: Advanced methods, algorithms, and software for large-scale linear and nonlinear problems

    Energy Technology Data Exchange (ETDEWEB)

    Carey, G.F.; Young, D.M.

    1993-12-31

    The program outlined here is directed to research on methods, algorithms, and software for distributed parallel supercomputers. Of particular interest are finite element methods and finite difference methods together with sparse iterative solution schemes for scientific and engineering computations of very large-scale systems. Both linear and nonlinear problems will be investigated. In the nonlinear case, applications with bifurcation to multiple solutions will be considered using continuation strategies. The parallelizable numerical methods of particular interest are a family of partitioning schemes embracing domain decomposition, element-by-element strategies, and multi-level techniques. The methods will be further developed incorporating parallel iterative solution algorithms with associated preconditioners in parallel computer software. The schemes will be implemented on distributed memory parallel architectures such as the CRAY MPP, Intel Paragon, the NCUBE3, and the Connection Machine. We will also consider other new architectures such as the Kendall-Square (KSQ) and proposed machines such as the TERA. The applications will focus on large-scale three-dimensional nonlinear flow and reservoir problems with strong convective transport contributions. These are legitimate grand challenge class computational fluid dynamics (CFD) problems of significant practical interest to DOE. The methods developed and algorithms will, however, be of wider interest.
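
    A serial toy version of the partitioning idea, alternating (overlapping) Schwarz on a 1-D Poisson problem, is sketched below; the program described above targets distributed-memory machines, preconditioned iterative solvers and 3-D nonlinear flow, so this is only the skeleton of the approach.

```python
# Serial sketch of overlapping domain decomposition (alternating Schwarz) for
# -u'' = 1 on (0,1) with u(0)=u(1)=0. It only illustrates the partitioning idea
# behind the project; the project itself targets parallel 3-D flow problems.
import numpy as np

n = 101                          # global grid points including boundaries
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
u = np.zeros(n)
f = np.ones(n)

def solve_subdomain(u, lo, hi):
    """Exact solve of -u'' = f on interior points lo..hi, using the current
    values u[lo-1] and u[hi+1] as Dirichlet data."""
    m = hi - lo + 1
    A = (np.diag(np.full(m, 2.0)) - np.diag(np.ones(m - 1), 1)
         - np.diag(np.ones(m - 1), -1)) / h**2
    b = f[lo:hi + 1].copy()
    b[0] += u[lo - 1] / h**2
    b[-1] += u[hi + 1] / h**2
    u[lo:hi + 1] = np.linalg.solve(A, b)

for _ in range(30):                        # Schwarz sweeps
    solve_subdomain(u, 1, 60)              # left subdomain (overlap is 41..60)
    solve_subdomain(u, 41, n - 2)          # right subdomain

exact = x * (1.0 - x) / 2.0
print(f"max error after 30 sweeps: {np.abs(u - exact).max():.2e}")
```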

  4. Visualization at supercomputing centers: the tale of little big iron and the three skinny guys.

    Science.gov (United States)

    Bethel, E W; van Rosendale, J; Southard, D; Gaither, K; Childs, H; Brugger, E; Ahern, S

    2011-01-01

    Supercomputing centers are unique resources that aim to enable scientific knowledge discovery by employing large computational resources-the "Big Iron." Design, acquisition, installation, and management of the Big Iron are carefully planned and monitored. Because these Big Iron systems produce a tsunami of data, it's natural to colocate the visualization and analysis infrastructure. This infrastructure consists of hardware (Little Iron) and staff (Skinny Guys). Our collective experience suggests that design, acquisition, installation, and management of the Little Iron and Skinny Guys doesn't receive the same level of treatment as that of the Big Iron. This article explores the following questions about the Little Iron: How should we size the Little Iron to adequately support visualization and analysis of data coming off the Big Iron? What sort of capabilities must it have? Related questions concern the size of visualization support staff: How big should a visualization program be-that is, how many Skinny Guys should it have? What should the staff do? How much of the visualization should be provided as a support service, and how much should applications scientists be expected to do on their own?

  5. Ubiquitous Green Computing Techniques for High Demand Applications in Smart Environments

    Directory of Open Access Journals (Sweden)

    Jose M. Moya

    2012-08-01

    Full Text Available Ubiquitous sensor network deployments, such as the ones found in Smart cities and Ambient intelligence applications, require constantly increasing high computational demands in order to process data and offer services to users. The nature of these applications implies the usage of data centers. Research has paid much attention to the energy consumption of the sensor nodes in WSN infrastructures. However, supercomputing facilities are the ones presenting a higher economic and environmental impact due to their very high power consumption. The latter problem, however, has been disregarded in the field of smart environment services. This paper proposes an energy-minimization workload assignment technique, based on heterogeneity and application-awareness, that redistributes low-demand computational tasks from high-performance facilities to idle nodes with low and medium resources in the WSN infrastructure. These non-optimal allocation policies reduce the energy consumed by the whole infrastructure and the total execution time.
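
    A greedy sketch of the assignment idea: place a low-demand task on an idle low-power node whenever that is cheaper than keeping it in the data centre. The power figures and capacities below are invented, and the paper's policy is richer (heterogeneity- and application-aware), so this is illustrative only.

```python
# Greedy sketch of energy-aware task placement: send a low-demand task to an
# idle low-power node whenever it fits, otherwise keep it in the data centre.
# Power figures and capacities are invented; the paper's policy is richer
# (heterogeneity- and application-aware).
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    capacity: float          # work units the node can absorb
    joules_per_unit: float   # marginal energy per work unit
    load: float = 0.0

def assign(tasks, edge_nodes, dc_joules_per_unit=6.0):
    placement, energy = {}, 0.0
    for task_id, demand in tasks:
        # cheapest edge node that still has room for this task
        options = [n for n in edge_nodes if n.capacity - n.load >= demand]
        best = min(options, key=lambda n: n.joules_per_unit, default=None)
        if best is not None and best.joules_per_unit < dc_joules_per_unit:
            best.load += demand
            placement[task_id] = best.name
            energy += demand * best.joules_per_unit
        else:
            placement[task_id] = "datacenter"
            energy += demand * dc_joules_per_unit
    return placement, energy

edge = [Node("gateway-1", 4.0, 2.0), Node("gateway-2", 2.0, 1.5)]
tasks = [("t1", 1.0), ("t2", 3.0), ("t3", 2.5), ("t4", 0.5)]
placement, joules = assign(tasks, edge)
print(placement, f"total energy {joules:.1f} J")
```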

  7. Value of the future: Discounting in random environments

    Science.gov (United States)

    Farmer, J. Doyne; Geanakoplos, John; Masoliver, Jaume; Montero, Miquel; Perelló, Josep

    2015-05-01

    We analyze how to value future costs and benefits when they must be discounted relative to the present. We introduce the subject for the nonspecialist and take into account the randomness of the economic evolution by studying the discount function of three widely used processes for the dynamics of interest rates: Ornstein-Uhlenbeck, Feller, and log-normal. Besides obtaining exact expressions for the discount function and simple asymptotic approximations, we show that historical average interest rates overestimate long-run discount rates and that this effect can be large. In other words, long-run discount rates should be substantially less than the average rate observed in the past, otherwise any cost-benefit calculation would be biased in favor of the present and against interventions that may protect the future.
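
    The central point, that averaging realized discount factors yields a lower long-run rate than discounting at the historical average rate, can be reproduced with a short Monte Carlo experiment for the Ornstein-Uhlenbeck case; the parameters below are illustrative, not estimates from data.

```python
# Monte Carlo illustration of the paper's point: with a stochastic
# (Ornstein-Uhlenbeck) interest rate, the proper discount function
# D(t) = E[exp(-integral of r)] implies a long-run rate below the average
# short rate. Parameters are illustrative, not estimated from data.
import numpy as np

rng = np.random.default_rng(3)
kappa, mean_r, sigma = 0.1, 0.04, 0.02     # reversion speed, mean, volatility
dt, years, paths = 0.1, 100, 20_000
steps = int(years / dt)

r = np.full(paths, mean_r)
integral = np.zeros(paths)
for _ in range(steps):
    integral += r * dt
    r += kappa * (mean_r - r) * dt + sigma * np.sqrt(dt) * rng.standard_normal(paths)

discount = np.exp(-integral).mean()         # D(t) averaged over paths
effective_rate = -np.log(discount) / years

print(f"average short rate:      {mean_r:.3%}")
print(f"effective 100-year rate: {effective_rate:.3%}")   # lower than average
```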

  8. Greening Internet of Things for Smart Everythings with A Green-Environment Life: A Survey and Future Prospects

    OpenAIRE

    Alsamhi, S. H.; Ma, Ou; Ansari, M. Samar; Meng, Qingliang

    2018-01-01

    Tremendous technological development in the field of the Internet of Things (IoT) has changed the way we work and live. Although the numerous advantages of IoT are enriching our society, it should be remembered that IoT also consumes energy and contributes to toxic pollution and e-waste. These place new stresses on the environment and the smart world. In order to increase the benefits and reduce the harm of IoT, there is an increasing desire to move toward green IoT. Green IoT is seen as the future of IoT that ...

  9. The design and implementation of cost-effective algorithms for direct solution of banded linear systems on the vector processor system 32 supercomputer

    Science.gov (United States)

    Samba, A. S.

    1985-01-01

    The problem of solving banded linear systems by direct (non-iterative) techniques on the Vector Processor System (VPS) 32 supercomputer is considered. Two efficient direct methods for solving banded linear systems on the VPS 32 are described. The vector cyclic reduction (VCR) algorithm is discussed in detail. The performance of the VCR on a three parameter model problem is also illustrated. The VCR is an adaptation of the conventional point cyclic reduction algorithm. The second direct method is the 'Customized Reduction of Augmented Triangles' (CRAT). CRAT has the dominant characteristics of an efficient VPS 32 algorithm. CRAT is tailored to the pipeline architecture of the VPS 32 and as a consequence the algorithm is implicitly vectorizable.
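
    For reference, the sketch below shows standard odd-even cyclic reduction for a single tridiagonal system, the point algorithm that the VCR adapts to banded systems and to the VPS 32 pipelines; it is written in NumPy for clarity rather than as vectorized Fortran, and is not the paper's implementation.

```python
# Odd-even cyclic reduction for a tridiagonal system (the point algorithm that
# the VCR adapts); NumPy sketch for clarity, not the VPS 32 implementation.
import numpy as np

def cyclic_reduction(a, b, c, d):
    """Solve a tridiagonal system by odd-even cyclic reduction.

    a, b, c hold the sub-, main- and super-diagonal (all length n, with
    a[0] = c[-1] = 0); n must be 2**k - 1 for some k.
    """
    n = len(b)
    a, b, c, d = (np.asarray(v, dtype=float).copy() for v in (a, b, c, d))
    x = np.zeros(n)
    levels = int(np.log2(n + 1))

    # forward reduction: repeatedly eliminate every other unknown
    stride = 1
    for _ in range(levels - 1):
        idx = np.arange(2 * stride - 1, n, 2 * stride)
        lo, hi = idx - stride, idx + stride
        alpha = -a[idx] / b[lo]
        gamma = -c[idx] / b[hi]
        b[idx] += alpha * c[lo] + gamma * a[hi]
        d[idx] += alpha * d[lo] + gamma * d[hi]
        a[idx] = alpha * a[lo]
        c[idx] = gamma * c[hi]
        stride *= 2

    # only the middle unknown is left
    mid = n // 2
    x[mid] = d[mid] / b[mid]

    # back substitution, filling in the unknowns eliminated at each level
    stride //= 2
    while stride >= 1:
        idx = np.arange(stride - 1, n, 2 * stride)
        lo, hi = idx - stride, idx + stride
        xlo = np.where(lo >= 0, x[np.clip(lo, 0, n - 1)], 0.0)
        xhi = np.where(hi < n, x[np.clip(hi, 0, n - 1)], 0.0)
        x[idx] = (d[idx] - a[idx] * xlo - c[idx] * xhi) / b[idx]
        stride //= 2
    return x

# check against a dense solve on a small second-difference system
n = 15
sub = np.full(n, -1.0); sub[0] = 0.0
sup = np.full(n, -1.0); sup[-1] = 0.0
diag = np.full(n, 2.0)
rhs = np.arange(1.0, n + 1.0)

x = cyclic_reduction(sub, diag, sup, rhs)
dense = np.diag(diag) + np.diag(sup[:-1], 1) + np.diag(sub[1:], -1)
print("max difference vs. dense solve:",
      np.abs(x - np.linalg.solve(dense, rhs)).max())
```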

  10. Radioactivity in the terrestrial environment; review of UK research 1993-1996 and recommendations for future work

    International Nuclear Information System (INIS)

    1997-03-01

    The national Radioactivity Research and Environmental Monitoring Committee (RADREM) provides a forum for liaison on UK research and monitoring in the radioactive substances and radioactive waste management fields. It is subscribed to by Government departments, national regulatory bodies, the UK nuclear industry and other bodies with relevant research sponsorship and monitoring interests. A key function of the RADREM committee is to ensure that there is no unnecessary overlap between or significant omission from the research sponsored by the organisations represented upon it. To this end periodic reviews of research sector programmes are carried out. This report covers a review which was carried out by the Terrestrial Environment Sub-Committee (TESC) of RADREM for the period 1993-1996. In particular possible future research requirements are considered and evaluated. Such omissions as are identified do not reflect Sub-Committee views on the adequacy of any individual organisation's research programme. Rather they should be seen as areas where gaps in knowledge may exist, which all organisations are free to consider and prioritise in the formulation of their future research requirements. (author)

  11. Are Cloud Environments Ready for Scientific Applications?

    Science.gov (United States)

    Mehrotra, P.; Shackleford, K.

    2011-12-01

    Cloud computing environments are becoming widely available both in the commercial and government sectors. They provide flexibility to rapidly provision resources in order to meet dynamic and changing computational needs without the customers incurring capital expenses and/or requiring technical expertise. Clouds also provide reliable access to resources even though the end-user may not have in-house expertise for acquiring or operating such resources. Consolidation and pooling in a cloud environment allow organizations to achieve economies of scale in provisioning or procuring computing resources and services. Because of these and other benefits, many businesses and organizations are migrating their business applications (e.g., websites, social media, and business processes) to cloud environments-evidenced by the commercial success of offerings such as the Amazon EC2. In this paper, we focus on the feasibility of utilizing cloud environments for scientific workloads and workflows particularly of interest to NASA scientists and engineers. There is a wide spectrum of such technical computations. These applications range from small workstation-level computations to mid-range computing requiring small clusters to high-performance simulations requiring supercomputing systems with high bandwidth/low latency interconnects. Data-centric applications manage and manipulate large data sets such as satellite observational data and/or data previously produced by high-fidelity modeling and simulation computations. Most of the applications are run in batch mode with static resource requirements. However, there do exist situations that have dynamic demands, particularly ones with public-facing interfaces providing information to the general public, collaborators and partners, as well as to internal NASA users. In the last few months we have been studying the suitability of cloud environments for NASA's technical and scientific workloads. We have ported several applications to

  12. Sandia's network for Supercomputing '94: Linking the Los Alamos, Lawrence Livermore, and Sandia National Laboratories using switched multimegabit data service

    Energy Technology Data Exchange (ETDEWEB)

    Vahle, M.O.; Gossage, S.A.; Brenkosh, J.P. [Sandia National Labs., Albuquerque, NM (United States). Advanced Networking Integration Dept.

    1995-01-01

    Supercomputing '94, a high-performance computing and communications conference, was held November 14th through 18th, 1994 in Washington DC. For the past four years, Sandia National Laboratories has used this conference to showcase and focus its communications and networking endeavors. At the 1994 conference, Sandia built a Switched Multimegabit Data Service (SMDS) network running at 44.736 megabits per second linking its private SMDS network between its facilities in Albuquerque, New Mexico and Livermore, California to the convention center in Washington, D.C. For the show, the network was also extended from Sandia, New Mexico to Los Alamos National Laboratory and from Sandia, California to Lawrence Livermore National Laboratory. This paper documents and describes this network and how it was used at the conference.

  13. Parallel processor programs in the Federal Government

    Science.gov (United States)

    Schneck, P. B.; Austin, D.; Squires, S. L.; Lehmann, J.; Mizell, D.; Wallgren, K.

    1985-01-01

    In 1982, a report dealing with the nation's research needs in high-speed computing called for increased access to supercomputing resources for the research community, research in computational mathematics, and increased research in the technology base needed for the next generation of supercomputers. Since that time a number of programs addressing future generations of computers, particularly parallel processors, have been started by U.S. government agencies. The present paper provides a description of the largest government programs in parallel processing. Established in fiscal year 1985 by the Institute for Defense Analyses for the National Security Agency, the Supercomputing Research Center will pursue research to advance the state of the art in supercomputing. Attention is also given to the DOE applied mathematical sciences research program, the NYU Ultracomputer project, the DARPA multiprocessor system architectures program, NSF research on multiprocessor systems, ONR activities in parallel computing, and NASA parallel processor projects.

  14. Review of the ASDEX upgrade data acquisition environment - present operation and future requirements

    International Nuclear Information System (INIS)

    Behler, K.; Blank, H.; Buhler, A.; Drube, R.; Friedrich, H.; Foerster, K.; Hallatschek, K.; Heimann, P.; Hertweck, F.; Maier, J.; Heimann, R.; Hertweck, F.; Maier, J.; Merkel, R.; Pacco-Duechs, M.-G.; Raupp, G.; Reuter, H.; Schneider-Maxon, U.; Tisma, R.; Zilker, M.

    1999-01-01

    The data acquisition environment of the ASDEX upgrade fusion experiment was designed in the late 1980s to handle a predicted quantity of 8 Mbytes of data per discharge. After 7 years of operation a review of the whole data acquisition and analysis environment shows what remains of the original design ideas. Comparing the original 15 diagnostics with the present set of 250 diagnostic datasets generated per shot shows how the system has grown. Although now a vast accumulation of functional parts, the system still works in a stable manner and is maintainable. The underlying concepts affirming these qualities are modularity and compatibility. Modularity ensures that most parts of the system can be modified without affecting others. Standards for data structures and interfaces between components and methods are the prerequisites which make modularity work. The experience of the last few years shows that, besides the standards achieved, new, mainly real-time, features are needed: real-time event recognition allowing reaction to complex changing conditions; real-time wavelet analysis allowing adapted sampling rates; real-time data exchange between diagnostics and control; real-time networks allowing flexible computer coupling to permit interplay between different components; object-oriented programming concepts and databases are required for readily adaptable software modules. A final assessment of our present data processing situation and future requirements shows that modern information technology methods have to be applied more intensively to provide the most flexible means to improve the interaction of all components on a large fusion device. (orig.)

  15. Summary of multi-core hardware and programming model investigations

    Energy Technology Data Exchange (ETDEWEB)

    Kelly, Suzanne Marie; Pedretti, Kevin Thomas Tauke; Levenhagen, Michael J.

    2008-05-01

    This report summarizes our investigations into multi-core processors and programming models for parallel scientific applications. The motivation for this study was to better understand the landscape of multi-core hardware, future trends, and the implications on system software for capability supercomputers. The results of this study are being used as input into the design of a new open-source light-weight kernel operating system being targeted at future capability supercomputers made up of multi-core processors. A goal of this effort is to create an agile system that is able to adapt to and efficiently support whatever multi-core hardware and programming models gain acceptance by the community.

  16. Energy, society and environment. Technology for a sustainable future

    International Nuclear Information System (INIS)

    Elliott, D.

    1997-04-01

    Energy, Society and Environment examines energy and energy use, and the interactions between technology, society and the environment. The book is clearly structured to examine: key environmental issues and the harmful impacts of energy use; new technological solutions to environmental problems; implementation of possible solutions; and implications for society in developing a sustainable approach to energy use. Social processes and strategic solutions to problems are located within a clear, technological context with topical case studies. (UK)

  17. A Parallel Supercomputer Implementation of a Biological Inspired Neural Network and its use for Pattern Recognition

    International Nuclear Information System (INIS)

    De Ladurantaye, Vincent; Lavoie, Jean; Bergeron, Jocelyn; Parenteau, Maxime; Lu Huizhong; Pichevar, Ramin; Rouat, Jean

    2012-01-01

    A parallel implementation of a large spiking neural network is proposed and evaluated. The neural network implements the binding by synchrony process using the Oscillatory Dynamic Link Matcher (ODLM). Scalability, speed and performance are compared for two implementations: Message Passing Interface (MPI) and Compute Unified Device Architecture (CUDA), running on clusters of multicore supercomputers and NVIDIA graphical processing units respectively. A global spiking list that represents at each instant the state of the neural network is described. This list indexes each neuron that fires during the current simulation time so that the influence of their spikes is simultaneously processed on all computing units. Our implementation shows good scalability for very large networks. A complex and large spiking neural network has been implemented in parallel with success, thus paving the road towards real-life applications based on networks of spiking neurons. MPI offers better scalability than CUDA, while the CUDA implementation on a GeForce GTX 285 gives the best cost-to-performance ratio. When running the neural network on the GTX 285, the processing speed is comparable to that of the MPI implementation on RQCHP's Mammouth parallel cluster with 64 nodes (128 cores).
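
    A minimal sketch of the global-spiking-list exchange described above, written with mpi4py rather than the authors' ODLM code; the neuron count, threshold and variable names are illustrative assumptions.

      from mpi4py import MPI
      import numpy as np

      comm = MPI.COMM_WORLD
      rank = comm.Get_rank()

      n_local = 1000                       # neurons owned by this rank (assumed)
      offset = rank * n_local              # global index of the first local neuron
      potentials = np.random.rand(n_local)
      threshold = 0.95                     # illustrative firing threshold

      # Local spike list: global indices of local neurons that fired this step.
      local_spikes = (np.nonzero(potentials > threshold)[0] + offset).tolist()

      # Build the global spiking list on every rank in one collective call,
      # so each computing unit can apply the influence of all spikes at once.
      global_spikes = [i for block in comm.allgather(local_spikes) for i in block]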

  18. Establishment of sustainable health science for future generations: from a hundred years ago to a hundred years in the future.

    Science.gov (United States)

    Mori, Chisato; Todaka, Emiko

    2009-01-01

    Recently, we have investigated the relationship between environment and health from a scientific perspective and developed a new academic field, "Sustainable Health Science", that will contribute to creating a healthy environment for future generations. There are three key points in Sustainable Health Science. The first key point is "focusing on future generations": society should improve the environment and prevent possible adverse health effects on future generations (Environmental Preventive Medicine). The second key point is the "precautionary principle". The third key point is "transdisciplinary science", which means that not only medical science but also other scientific fields, such as architectural and engineering science, should be involved. Here, we introduce our recent challenging project, the "Chemiless Town Project", in which a model town is under construction with fewer chemicals. In the project, a trial of an education program and a health-examination system for chemical exposure is going to be conducted. In the future, we are aiming to establish health examinations of chemical exposure for women of reproductive age so that the risk of adverse health effects to future generations will decrease and they can enjoy a better quality of life. We hope that society will accept the importance of forming a sustainable society for future generations not only with regard to chemicals but also to the whole surrounding environment. As the proverb of Native American people tells us, we should live considering the effects on seven generations in the future.

  19. Easy Access to HPC Resources through the Application GUI

    KAUST Repository

    van Waveren, Matthijs

    2016-11-01

    The computing environment at the King Abdullah University of Science and Technology (KAUST) is growing in size and complexity. KAUST hosts the tenth fastest supercomputer in the world (Shaheen II) and several HPC clusters. Researchers can be inhibited by the complexity, as they need to learn new languages and execute many tasks in order to access the HPC clusters and the supercomputer. In order to simplify the access, we have developed an interface between the applications and the clusters and supercomputer that automates the transfer of input data and job submission and also the retrieval of results to the researcher’s local workstation. The innovation is that the user now submits his jobs from within the application GUI on his workstation, and does not have to directly log into the clusters or supercomputer anymore. This article details the solution and its benefits to the researchers.
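
    A minimal sketch of the kind of automation such an interface performs, assuming a SLURM-managed cluster reachable over SSH; the host name, paths and job script are hypothetical, not KAUST's actual configuration.

      import subprocess

      HOST = "user@hpc.example.org"        # hypothetical login node
      REMOTE_DIR = "/scratch/user/run01"   # hypothetical working directory

      def run(cmd):
          subprocess.run(cmd, check=True)

      # 1. Stage input data from the local workstation to the cluster.
      run(["ssh", HOST, f"mkdir -p {REMOTE_DIR}"])
      run(["scp", "input.dat", "job.slurm", f"{HOST}:{REMOTE_DIR}/"])

      # 2. Submit the job from the GUI's back end instead of a manual login.
      run(["ssh", HOST, f"cd {REMOTE_DIR} && sbatch job.slurm"])

      # 3. Later, retrieve the results back to the local workstation.
      run(["scp", f"{HOST}:{REMOTE_DIR}/results.out", "."])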

  20. Creating Food Futures. Trade, Ethics and the Environment

    NARCIS (Netherlands)

    Farnworth, C.R.; Jiggins, J.L.S.; Thomas, E.V.

    2008-01-01

    A global transformation in food supply and consumption is placing our food security at risk. What changes need to be made to the ways we trade, process and purchase our food if everyone in the world is going to have enough wholesome food to eat? Is there genuine scope for creating food futures that

  1. Exascale Data Analysis

    CERN Multimedia

    CERN. Geneva; Fitch, Blake

    2011-01-01

    Traditionally, the primary role of supercomputers was to create data, primarily for simulation applications. Due to usage and technology trends, supercomputers are increasingly also used for data analysis. Some of this data is from simulations, but there is also a rapidly increasing amount of real-world science and business data to be analyzed. We briefly overview Blue Gene and other current supercomputer architectures. We outline future architectures, up to the Exascale supercomputers expected in the 2020 time frame. We focus on the data analysis challenges and opportunities, especially those concerning Flash and other up-and-coming storage class memory. About the speakers Blake G. Fitch has been with IBM Research, Yorktown Heights, NY since 1987, mainly pursuing interests in parallel systems. He joined the Scalable Parallel Systems Group in 1990, contributing to research and development that culminated in the IBM scalable parallel system (SP*) product. His research interests have focused on applicatio...

  2. The Future of Nonproliferation in a Changed and Changing Environment: A Workshop Summary

    International Nuclear Information System (INIS)

    Dreicer, M.

    2016-01-01

    The Center for Global Security Research and Global Security Principal Directorate at Lawrence Livermore National Laboratory convened a workshop in July 2016 to consider ''The Future of Nonproliferation in a Changed and Changing Security Environment.'' We took a broad view of nonproliferation, encompassing not just the treaty regime but also arms control, threat reduction, counter-proliferation, and countering nuclear terrorism. We gathered a group of approximately 60 experts from the technical, academic, political, defense and think tank communities and asked them what and how much can reasonably be accomplished in each of these areas in the 5 to 10 years ahead. Discussion was on a not-for-attribution basis. This document provides a summary of key insights and lessons learned, and is provided to help stimulate broader public discussion of these issues. It is a collection of ideas as informally discussed and debated among a group of experts. The ideas reported here are the personal views of individual experts and should not be attributed to Lawrence Livermore National Laboratory.

  3. The Future of Nonproliferation in a Changed and Changing Environment: A Workshop Summary

    Energy Technology Data Exchange (ETDEWEB)

    Dreicer, M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-08-30

    The Center for Global Security Research and Global Security Principal Directorate at Lawrence Livermore National Laboratory convened a workshop in July 2016 to consider “The Future of Nonproliferation in a Changed and Changing Security Environment.” We took a broad view of nonproliferation, encompassing not just the treaty regime but also arms control, threat reduction, counter-proliferation, and countering nuclear terrorism. We gathered a group of approximately 60 experts from the technical, academic, political, defense and think tank communities and asked them what—and how much—can reasonably be accomplished in each of these areas in the 5 to 10 years ahead. Discussion was on a not-for-attribution basis. This document provides a summary of key insights and lessons learned, and is provided to help stimulate broader public discussion of these issues. It is a collection of ideas as informally discussed and debated among a group of experts. The ideas reported here are the personal views of individual experts and should not be attributed to Lawrence Livermore National Laboratory.

  4. The Future of the Global Environment: A Model-based Analysis Supporting UNEP's First Global Environment Outlook

    NARCIS (Netherlands)

    Bakkes JA; Woerden JW van; Alcamo J; Berk MM; Bol P; Born GJ van den; Brink BJE ten; Hettelingh JP; Langeweg F; Niessen LW; Swart RJ; United Nations Environment; MNV

    1997-01-01

    This report documents the scenario analysis in UNEP's first Global Environment Outlook, published at the same time as the scenario analysis. This Outlook provides a pilot assessment of developments in the environment, both global and regional, between now and 2015, with a further projection to 2050.

  5. Heavy-tailed distribution of the SSH Brute-force attack duration in a multi-user environment

    Science.gov (United States)

    Lee, Jae-Kook; Kim, Sung-Jun; Park, Chan Yeol; Hong, Taeyoung; Chae, Huiseung

    2016-07-01

    Quite a number of cyber-attacks take place against supercomputers that provide high-performance computing (HPC) services to public researchers. In particular, although the secure shell protocol (SSH) brute-force attack is one of the traditional attack methods, it is still in use. Because stealth attacks that feign regular access may occur, they are even harder to detect. In this paper, we introduce methods to detect SSH brute-force attacks by analyzing the server's unsuccessful access logs and the firewall's drop events in a multi-user environment. Then, we analyze the durations of the SSH brute-force attacks that are detected by applying these methods. The results of an analysis of about 10 thousand attack source IP addresses show that the behaviors of abnormal users using SSH brute-force attacks follow the human-dynamics characteristics of a typical heavy-tailed distribution.
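
    A minimal sketch of the duration analysis described above: group unsuccessful SSH logins by source IP and take the span between the first and last failure as the attack duration. The log file name, line format and field positions are illustrative assumptions, not the authors' actual pipeline.

      import re
      from datetime import datetime

      # Assumed syslog-style line, e.g.
      # "2016-07-01T12:00:01 sshd[123]: Failed password for root from 10.0.0.5 port 22 ssh2"
      PATTERN = re.compile(r"^(\S+) .*Failed password .* from (\d+\.\d+\.\d+\.\d+)")

      first_seen, last_seen = {}, {}
      with open("auth.log") as log:
          for line in log:
              m = PATTERN.match(line)
              if not m:
                  continue
              ts = datetime.fromisoformat(m.group(1))
              ip = m.group(2)
              first_seen.setdefault(ip, ts)   # keep the earliest failure per IP
              last_seen[ip] = ts              # keep the latest failure per IP

      # Attack duration per source IP, in seconds; a heavy-tailed
      # distribution would then be fitted to these values.
      durations = {ip: (last_seen[ip] - first_seen[ip]).total_seconds() for ip in first_seen}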

  6. Nuclear Futures Analysis and Scenario Building

    International Nuclear Information System (INIS)

    Arthur, E.D.; Beller, D.; Canavan, G.H.; Krakowski, R.A.; Peterson, P.; Wagner, R.L.

    1999-01-01

    This LDRD project created and used advanced analysis capabilities to postulate scenarios and identify issues, externalities, and technologies associated with future ''things nuclear''. ''Things nuclear'' include areas pertaining to nuclear weapons, nuclear materials, and nuclear energy, examined in the context of future domestic and international environments. Analysis tools development included adaptation and expansion of energy, environmental, and economics (E3) models to incorporate a robust description of the nuclear fuel cycle (both current and future technology pathways), creation of a beginning proliferation risk model (coupled to the (E3) model), and extension of traditional first strike stability models to conditions expected to exist in the future (smaller force sizes, multipolar engagement environments, inclusion of actual and latent nuclear weapons (capability)). Accomplishments include scenario development for regional and global nuclear energy, the creation of a beginning nuclear architecture designed to improve the proliferation resistance and environmental performance of the nuclear fuel cycle, and numerous results for future nuclear weapons scenarios

  7. Gone in eight seconds: Canadian data-transfer record points to the future of the Internet

    CERN Multimedia

    Tam, P

    2002-01-01

    "When completed in 2007, a new grid network will harness the processing power of many machines across Canada to create a communal supercomputer. It will be tailor-made for researchers with high-performance computing needs" (1 page).

  8. Energy and environment

    International Nuclear Information System (INIS)

    Barrere, M.

    1978-01-01

    Energy problems will play a fundamental role in the near future and researchers, engineers, economists and ecologists must work together to increase existing non-fossil energy sources and to develop new sources or techniques using less energy without pollution of the environment. Four aspects of future activities in this field are considered. First, energy sources, i.e. solar, fossil, nuclear, geothermal, and others such as wind energy or wave energy, are considered in relation to the environment. Secondly, the use of these sources by industry and by the transportation, domestic and agricultural sectors is examined. The problem of energy conservation in all fields is then considered. Finally the overall optimisation is analysed. This is the search for a compromise between the cost of usable energy and that of a degradation function taking into account the effect on the environment. (U.K.)

  9. Cyber warfare and electronic warfare integration in the operational environment of the future: cyber electronic warfare

    Science.gov (United States)

    Askin, Osman; Irmak, Riza; Avsever, Mustafa

    2015-05-01

    For states with advanced technology, the effective use of electronic warfare and cyber warfare will be the main determining factor in winning a war in the operational environment of the future. Thanks to high technology, developed states will be able to conclude the conflicts they enter with minimal human casualties and minimal cost. Considering the increasing number of world economic problems and the development of human rights and humanitarian law, it is easy to understand the importance of minimal cost and minimal loss of human life. In this paper, cyber warfare and electronic warfare concepts are examined in conjunction with their historical development, and the relationship between them is explained. Finally, assessments were carried out about the use of cyber electronic warfare in the coming years.

  10. Challenges in scaling NLO generators to leadership computers

    Science.gov (United States)

    Benjamin, D.; Childers, JT; Hoeche, S.; LeCompte, T.; Uram, T.

    2017-10-01

    Exascale computing resources are roughly a decade away and will be capable of 100 times more computing than current supercomputers. In the last year, Energy Frontier experiments crossed a milestone of 100 million core-hours used at the Argonne Leadership Computing Facility, Oak Ridge Leadership Computing Facility, and NERSC. The Fortran-based leading-order parton generator called Alpgen was successfully scaled to millions of threads to achieve this level of usage on Mira. Sherpa and MadGraph are next-to-leading order generators used heavily by LHC experiments for simulation. Integration times for high-multiplicity or rare processes can take a week or more on standard Grid machines, even when using all 16 cores. We will describe our ongoing work to scale the Sherpa generator to thousands of threads on leadership-class machines and reduce run-times to less than a day. This work allows the experiments to leverage large-scale parallel supercomputers for event generation today, freeing tens of millions of grid hours for other work, and paving the way for future applications (simulation, reconstruction) on these and future supercomputers.

  11. Superconductivity and the environment: a Roadmap

    International Nuclear Information System (INIS)

    Nishijima, Shigehiro; Eckroad, Steven; Marian, Adela; Choi, Kyeongdal; Kim, Woo Seok; Terai, Motoaki; Deng, Zigang; Zheng, Jun; Wang, Jiasu; Umemoto, Katsuya; Du, Jia; Keenan, Shane; Foley, Cathy P; Febvre, Pascal; Mukhanov, Oleg; Cooley, Lance D; Hassenzahl, William V; Izumi, Mitsuru

    2013-01-01

    disasters will be helped by future supercomputer technologies that support huge amounts of data and sophisticated modeling, and with the aid of superconductivity these systems might not require the energy of a large city. We present different sections on applications that could address (or are addressing) a range of environmental issues. The Roadmap covers water purification, power distribution and storage, low-environmental impact transport, environmental sensing (particularly for the removal of unexploded munitions), monitoring the Earth’s magnetic fields for earthquakes and major solar activity, and, finally, developing a petaflop supercomputer that only requires 3% of the current supercomputer power provision while being 50 times faster. Access to fresh water. With only 2.5% of the water on Earth being fresh and climate change modeling forecasting that many areas will become drier, the ability to recycle water and achieve compact water recycling systems for sewage or ground water treatment is critical. The first section (by Nishijima) points to the potential of superconducting magnetic separation to enable water recycling and reuse. Energy. The Equinox Summit held in Waterloo Canada 2011 (2011 Equinox Summit: Energy 2030 http://wgsi.org/publications-resources) identified electricity use as humanity’s largest contributor to greenhouse gas emissions. Our appetite for electricity is growing faster than for any other form of energy. The communiqué from the summit said ‘Transforming the ways we generate, distribute and store electricity is among the most pressing challenges facing society today…. If we want to stabilize CO2 levels in our atmosphere at 550 parts per million, all of that growth needs to be met by non-carbon forms of energy’ (2011 Equinox Summit: Energy 2030 http://wgsi.org/publications-resources). Superconducting technologies can provide the energy efficiencies to achieve, in the European Union alone, 33–65% of the required reduction in

  12. Superconductivity and the environment: a Roadmap

    Science.gov (United States)

    Nishijima, Shigehiro; Eckroad, Steven; Marian, Adela; Choi, Kyeongdal; Kim, Woo Seok; Terai, Motoaki; Deng, Zigang; Zheng, Jun; Wang, Jiasu; Umemoto, Katsuya; Du, Jia; Febvre, Pascal; Keenan, Shane; Mukhanov, Oleg; Cooley, Lance D.; Foley, Cathy P.; Hassenzahl, William V.; Izumi, Mitsuru

    2013-11-01

    disasters will be helped by future supercomputer technologies that support huge amounts of data and sophisticated modeling, and with the aid of superconductivity these systems might not require the energy of a large city. We present different sections on applications that could address (or are addressing) a range of environmental issues. The Roadmap covers water purification, power distribution and storage, low-environmental impact transport, environmental sensing (particularly for the removal of unexploded munitions), monitoring the Earth’s magnetic fields for earthquakes and major solar activity, and, finally, developing a petaflop supercomputer that only requires 3% of the current supercomputer power provision while being 50 times faster. Access to fresh water. With only 2.5% of the water on Earth being fresh and climate change modeling forecasting that many areas will become drier, the ability to recycle water and achieve compact water recycling systems for sewage or ground water treatment is critical. The first section (by Nishijima) points to the potential of superconducting magnetic separation to enable water recycling and reuse. Energy. The Equinox Summit held in Waterloo Canada 2011 (2011 Equinox Summit: Energy 2030 http://wgsi.org/publications-resources) identified electricity use as humanity’s largest contributor to greenhouse gas emissions. Our appetite for electricity is growing faster than for any other form of energy. The communiqué from the summit said ‘Transforming the ways we generate, distribute and store electricity is among the most pressing challenges facing society today…. If we want to stabilize CO2 levels in our atmosphere at 550 parts per million, all of that growth needs to be met by non-carbon forms of energy’ (2011 Equinox Summit: Energy 2030 http://wgsi.org/publications-resources). Superconducting technologies can provide the energy efficiencies to achieve, in the European Union alone, 33-65% of the required reduction in greenhouse

  13. Healthy and sustainable diets: Community concern about the effect of the future food environments and support for government regulating sustainable food supplies in Western Australia.

    Science.gov (United States)

    Harray, Amelia J; Meng, Xingqiong; Kerr, Deborah A; Pollard, Christina M

    2018-06-01

    To determine the level of community concern about future food supplies and perception of the importance placed on government regulation over the supply of environmentally friendly food and identify dietary and other factors associated with these beliefs in Western Australia. Data from the 2009 and 2012 Nutrition Monitoring Survey Series computer-assisted telephone interviews were pooled. Level of concern about the effect of the environment on future food supplies and importance of government regulating the supply of environmentally friendly food were measured. Multivariate regression analysed potential associations with sociodemographic variables, dietary health consciousness, weight status and self-reported intake of eight foods consistent with a sustainable diet. Western Australia. Community-dwelling adults aged 18-64 years (n = 2832). Seventy-nine per cent of Western Australians were 'quite' or 'very' concerned about the effect of the environment on future food supplies. Respondents who paid less attention to the health aspects of their diet were less likely than those who were health conscious to be 'quite' or 'very' concerned (OR = 0.53, 95% CI [0.35, 0.8] and 0.38 [0.17, 0.81] respectively). The majority of respondents (85.3%) thought it was 'quite' or 'very' important that government had regulatory control over an environmentally friendly food supply. Females were more likely than males to rate regulatory control as 'quite' or 'very' important (OR = 1.63, 95% CI [1.09, 2.44], p = .02). Multiple regression modeling found that no other factors predicted concern or importance. There is a high level of community concern about the impact of the environment on future food supplies and most people believe it is important that the government regulates the issue. These attitudes dominate regardless of sociodemographic characteristics, weight status or sustainable dietary behaviours. Copyright © 2018 Elsevier Ltd. All rights reserved.

  14. Decadal analysis of impact of future climate on wheat production in dry Mediterranean environment: A case of Jordan.

    Science.gov (United States)

    Dixit, Prakash N; Telleria, Roberto; Al Khatib, Amal N; Allouzi, Siham F

    2018-01-01

    Different aspects of climate change, such as increased temperature, changed rainfall and higher atmospheric CO2 concentration, all have different effects on crop yields. Process-based crop models are the most widely used tools for estimating future crop yield responses to climate change. We applied the APSIM crop simulation model in a dry Mediterranean climate, with Jordan as a sentinel site, to assess the impact of climate change on wheat production at the decadal level, considering two climate change scenarios of representative concentration pathways (RCP), viz. RCP4.5 and RCP8.5. The impact of climatic variables alone was negative on grain yield, but this adverse effect was negated when elevated atmospheric CO2 concentrations were also considered in the simulations. The crop cycle of wheat was reduced by a fortnight for the RCP4.5 scenario and by a month for the RCP8.5 scenario towards the end of the century. On average, a grain yield increase of 5 to 11% in the near future (2010s-2030s decades), 12 to 16% in the mid future (2040s-2060s decades) and 9 to 16% in the end-of-century period can be expected for the moderate climate change scenario (RCP4.5), and of 6 to 15% in the near future, 13 to 19% in the mid future and 7 to 20% in the end-of-century period for the drastic climate change scenario (RCP8.5), depending on the soil. The positive impact of elevated CO2 is more pronounced in soils with lower water holding capacity under moderate increases in temperature. Elevated CO2 had a greater positive effect on transpiration use efficiency (TUE) than the negative effect of elevated mean temperatures. The change in TUE was in a near perfect direct relationship with elevated CO2 levels (R^2 > 0.99), and every 100-ppm atmospheric CO2 increase resulted in a TUE increase of 2 kg ha^-1 mm^-1. Thereby, in this environment yield gains are expected in the future and farmers can benefit from growing wheat. Copyright © 2017 Elsevier B.V. All rights reserved.
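
    The reported near-linear response of transpiration use efficiency to CO2 can be written compactly as follows; the equation is our restatement of the abstract's numbers, not a formula given by the authors.

      \Delta\mathrm{TUE} \;\approx\; \frac{2\ \mathrm{kg\,ha^{-1}\,mm^{-1}}}{100\ \mathrm{ppm}}\,\Delta\mathrm{CO_2}, \qquad R^2 > 0.99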

  15. Effects of the Extraterrestrial Environment on Plants: Recommendations for Future Space Experiments for the MELiSSA Higher Plant Compartment

    Directory of Open Access Journals (Sweden)

    Silje A. Wolff

    2014-05-01

    Full Text Available Due to logistical challenges, long-term human space exploration missions require a life support system capable of regenerating all the essentials for survival. Higher plants can be utilized to provide a continuous supply of fresh food, atmosphere revitalization, and clean water for humans. Plants can adapt to extreme environments on Earth, and model plants have been shown to grow and develop through a full life cycle in microgravity. However, more knowledge about the long-term effects of the extraterrestrial environment on plant growth and development is necessary. The European Space Agency (ESA) has developed the Micro-Ecological Life Support System Alternative (MELiSSA) program to develop a closed regenerative life support system, based on micro-organisms and higher plant processes, with continuous recycling of resources. In this context, a literature review to analyze the impact of space environments on higher plants, with a focus on gravity levels, magnetic fields and radiation, has been performed. This communication presents a roadmap giving directions for future scientific activities within space plant cultivation. The roadmap aims to identify the research activities required before higher plants can be included in regenerative life support systems in space.

  16. Performance analysis of job scheduling policies in parallel supercomputing environments

    Energy Technology Data Exchange (ETDEWEB)

    Naik, V.K.; Squillante, M.S. [IBM T.J. Watson Research Center, Yorktown Heights, NY (United States); Setia, S.K. [George Mason Univ., Fairfax, VA (United States). Dept. of Computer Science

    1993-12-31

    In this paper the authors analyze three general classes of scheduling policies under a workload typical of large-scale scientific computing. These policies differ in the manner in which processors are partitioned among the jobs as well as the way in which jobs are prioritized for execution on the partitions. Their results indicate that existing static schemes do not perform well under varying workloads. Adaptive policies tend to make better scheduling decisions, but their ability to adjust to workload changes is limited. Dynamic partitioning policies, on the other hand, yield the best performance and can be tuned to provide desired performance differences among jobs with varying resource demands.
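
    A minimal sketch contrasting the three policy classes in the spirit of the paper; the function interfaces and job fields are illustrative assumptions, not the authors' model.

      def static_partitions(total_procs, num_partitions):
          # Static: partition sizes are fixed in advance, independent of the workload.
          return [total_procs // num_partitions] * num_partitions

      def adaptive_allocation(total_procs, queue):
          # Adaptive: the partition size is chosen when a job starts, from the queue
          # length at that moment, but is not changed while the job runs.
          share = max(1, total_procs // max(1, len(queue)))
          return {job["id"]: min(share, job["requested_procs"]) for job in queue}

      def dynamic_reallocation(total_procs, running, queue):
          # Dynamic: processors are re-partitioned among running and waiting jobs
          # at every scheduling event, so allocations track the current workload.
          jobs = running + queue
          share = max(1, total_procs // max(1, len(jobs)))
          return {job["id"]: min(share, job["requested_procs"]) for job in jobs}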

  17. Current state and future direction of computer systems at NASA Langley Research Center

    Science.gov (United States)

    Rogers, James L. (Editor); Tucker, Jerry H. (Editor)

    1992-01-01

    Computer systems have advanced at a rate unmatched by any other area of technology. As performance has dramatically increased, there has been an equally dramatic reduction in cost. This constant cost-performance improvement has precipitated the pervasiveness of computer systems into virtually all areas of technology. This improvement is due primarily to advances in microelectronics. Most people are now convinced that the new generation of supercomputers will be built using a large number (possibly thousands) of high performance microprocessors. Although the spectacular improvements in computer systems have come about because of these hardware advances, there has also been a steady improvement in software techniques. In an effort to understand how these hardware and software advances will affect research at NASA LaRC, the Computer Systems Technical Committee drafted this white paper to examine the current state and possible future directions of computer systems at the Center. This paper discusses selected important areas of computer systems including real-time systems, embedded systems, high performance computing, distributed computing networks, data acquisition systems, artificial intelligence, and visualization.

  18. 'Create the future': an environment for excellence in teaching future-oriented Industrial Design Engineering

    NARCIS (Netherlands)

    Eger, Arthur O.; Lutters, Diederick; van Houten, Frederikus J.A.M.

    2004-01-01

    In 2001, the University of Twente started a new course on Industrial Design Engineering. This paper describes the insights that have been employed in developing the curriculum, and in developing the environment in which the educational activities are facilitated. The University of Twente has a broad

  19. Virtual laboratory for fusion research in Japan

    International Nuclear Information System (INIS)

    Tsuda, K.; Nagayama, Y.; Yamamoto, T.; Horiuchi, R.; Ishiguro, S.; Takami, S.

    2008-01-01

    A virtual laboratory system for nuclear fusion research in Japan has been developed using SuperSINET, which is a super high-speed network operated by the National Institute of Informatics. Sixteen sites, including major Japanese universities, the Japan Atomic Energy Agency and the National Institute for Fusion Science (NIFS), are mutually connected to SuperSINET at a speed of 1 Gbps by the end of the 2006 fiscal year. Collaboration categories in this virtual laboratory are as follows: the large helical device (LHD) remote participation; the remote use of the supercomputer system; and the all Japan ST (Spherical Tokamak) research program. This virtual laboratory is a closed network system, and is connected to the Internet through the NIFS firewall in order to maintain higher security. Collaborators in a remote station can control their diagnostic devices at LHD and analyze the LHD data as if they were in the LHD control room. Researchers in a remote station can use the supercomputer of NIFS in the same environment as at NIFS. In this paper, we describe the technologies in detail and the present status of the virtual laboratory. Furthermore, the items that should be developed in the near future are also described

  20. Parallel Earthquake Simulations on Large-Scale Multicore Supercomputers

    KAUST Repository

    Wu, Xingfu

    2011-01-01

    Earthquakes are one of the most destructive natural hazards on our planet Earth. Huge earthquakes striking offshore may cause devastating tsunamis, as evidenced by the 11 March 2011 Japan (moment magnitude Mw9.0) and the 26 December 2004 Sumatra (Mw9.1) earthquakes. Earthquake prediction (in terms of the precise time, place, and magnitude of a coming earthquake) is arguably unfeasible in the foreseeable future. To mitigate seismic hazards from future earthquakes in earthquake-prone areas, such as California and Japan, scientists have been using numerical simulations to study earthquake rupture propagation along faults and seismic wave propagation in the surrounding media on ever-advancing modern computers over the past several decades. In particular, ground motion simulations for past and future (possible) significant earthquakes have been performed to understand factors that affect ground shaking in populated areas, and to provide ground shaking characteristics and synthetic seismograms for emergency preparation and design of earthquake-resistant structures. These simulation results can guide the development of more rational seismic provisions, leading to safer, more efficient, and economical structures in earthquake-prone regions.

  1. The computational future for climate and Earth system models: on the path to petaflop and beyond.

    Science.gov (United States)

    Washington, Warren M; Buja, Lawrence; Craig, Anthony

    2009-03-13

    The development of the climate and Earth system models has had a long history, starting with the building of individual atmospheric, ocean, sea ice, land vegetation, biogeochemical, glacial and ecological model components. The early researchers were much aware of the long-term goal of building the Earth system models that would go beyond what is usually included in the climate models by adding interactive biogeochemical interactions. In the early days, the progress was limited by computer capability, as well as by our knowledge of the physical and chemical processes. Over the last few decades, there has been much improved knowledge, better observations for validation and more powerful supercomputer systems that are increasingly meeting the new challenges of comprehensive models. Some of the climate model history will be presented, along with some of the successes and difficulties encountered with present-day supercomputer systems.

  2. Computer aided architectural design : futures 2001

    NARCIS (Netherlands)

    Vries, de B.; Leeuwen, van J.P.; Achten, H.H.

    2001-01-01

    CAAD Futures is a bi-annual conference that aims to promote the advancement of computer-aided architectural design in the service of those concerned with the quality of the built environment. The conferences are organized under the auspices of the CAAD Futures Foundation, which has its secretariat

  3. The Future of the Global Environment: A Model-based Analysis Supporting UNEP's First Global Environment Outlook

    OpenAIRE

    Bakkes JA; Woerden JW van; Alcamo J; Berk MM; Bol P; Born GJ van den; Brink BJE ten; Hettelingh JP; Langeweg F; Niessen LW; Swart RJ; United Nations Environment Programme (UNEP), Nairobi, Kenia; MNV

    1997-01-01

    This report documents the scenario analysis in UNEP's first Global Environment Outlook, published at the same time as the scenario analysis. This Outlook provides a pilot assessment of developments in the environment, both global and regional, between now and 2015, with a further projection to 2050. The study was carried out in support of the Agenda 21 interim evaluation, five years after 'Rio' and ten years after 'Brundtland'. The scenario analysis is based on only one scenario, Conventional...

  4. Innovative classification of methods of the Future-oriented Technology Analysis

    OpenAIRE

    HALICKA, Katarzyna

    2016-01-01

    In an era characterized by significant dynamics of the environment, traditional methods of anticipating the future, which assume the immutability of the factors affecting the forecasted phenomenon, may be ineffective in the long term. A modern approach to predicting the future of technology that takes the multidimensionality of the environment into account is, among other things, Future-Oriented Technology Analysis (FTA). Designing the FTA research procedure is a complex process, both in orga...

  5. The operating room of the future: observations and commentary.

    Science.gov (United States)

    Satava, Richard M

    2003-09-01

    The Operating Room of the Future is a construct upon which to develop the next generation of operating environments for the patient, surgeon, and operating team. Analysis of the suite of visions for the Operating Room of the Future reveals a broad set of goals, with a clear overall solution to create a safe environment for high-quality healthcare. The vision, although planned for the future, is based upon iteratively improving and integrating current systems, both technology and process. This must become the Operating Room of Today, which will require the enormous efforts described. An alternative future of the operating room, based upon emergence of disruptive technologies, is also presented.

  6. Cloud based spectrum manager for future wireless regulatory environment

    CSIR Research Space (South Africa)

    Masonta, MT

    2015-12-01

    Full Text Available The regulatory environment in radio frequency spectrum management lags the advancement of wireless technologies, especially in the area of cognitive radio and dynamic spectrum access. In this paper we argue that the solution towards spectrum Pareto...

  7. ASPECTS OF THE MANAGER ACTIVITIES WITHIN THE FUTURE COMPETITIVE ENVIRONMENT

    Directory of Open Access Journals (Sweden)

    GHEORGHE FLORIN BUŞE

    2012-01-01

    Full Text Available The first decade of this century was unsettled with regard to management concepts and instruments. From the points of view of total quality projects, product development time, product power, adaptive management, behaviors and values, teams, networks and alliances, this uncertainty represents a permanent search for ways to deal with significant competitive discontinuities. Although every initiative may contain important elements that go to the essence of things, there has so far been no consensus on the changing nature of management. The sole conclusion from these studies is that managerial work will be different in the future. This paper underlines the most important competitive discontinuities and draws a model of future managerial work.

  8. Parliamentarians and environment

    International Nuclear Information System (INIS)

    Boy, D.

    2004-01-01

    The data presented in this report come from an inquiry carried out by Sofres between March 5 and April 23, 2003, with a sample of 200 parliamentarians (122 deputies and 78 senators) who explained their attitude with respect to the question of the environment. The questionnaire comprises 5 main dimensions dealing with: the relative importance of the environmental stake, attitudes with respect to past, present and future environment policies, the attitude with respect to specific stakes (energy, wastes), the attitude with respect to some problems of conservation of the natural heritage, and the attitude with respect to public participation in some environment-related decisions. (J.S.)

  9. Use of QUADRICS supercomputer as embedded simulator in emergency management systems; Utilizzo del calcolatore QUADRICS come simulatore in linea in un sistema di gestione delle emergenze

    Energy Technology Data Exchange (ETDEWEB)

    Bove, R.; Di Costanzo, G.; Ziparo, A. [ENEA, Centro Ricerche Casaccia, Rome (Italy). Dip. Energia

    1996-07-01

    The experience related to the implementation of MRBT, an atmospheric dispersion model for short-duration releases, is reported. This model was implemented on a QUADRICS-Q1 supercomputer. A description of the MRBT model is given first. It is an analytical model for studying the spreading of light gases released into the atmosphere by accidental releases. The solution of the diffusion equation is Gaussian-like and yields the concentration of the released pollutant substance as a function of space and time. The QUADRICS architecture is then introduced and the implementation of the model on it is described. Finally, the integration of the QUADRICS-based model as an embedded simulator in an emergency management system is considered.
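
    A minimal sketch of a Gaussian puff calculation of the kind described above, for an instantaneous release advected by a uniform wind; the dispersion parameters and the neglect of ground reflection are illustrative assumptions, not the MRBT formulation.

      import numpy as np

      def gaussian_puff(q, x, y, z, t, u, sx, sy, sz):
          """Concentration (kg/m^3) at point (x, y, z) and time t of an instantaneous
          release of mass q (kg), advected downwind at speed u (m/s) with Gaussian
          spreads sx, sy, sz (m)."""
          norm = q / ((2.0 * np.pi) ** 1.5 * sx * sy * sz)
          return norm * np.exp(-((x - u * t) ** 2) / (2.0 * sx ** 2)
                               - (y ** 2) / (2.0 * sy ** 2)
                               - (z ** 2) / (2.0 * sz ** 2))

      # Example: 1 kg released, sampled 100 m downwind after 60 s in a 2 m/s wind.
      c = gaussian_puff(q=1.0, x=100.0, y=0.0, z=0.0, t=60.0, u=2.0, sx=20.0, sy=20.0, sz=10.0)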

  10. Toward sustainable energy futures

    Energy Technology Data Exchange (ETDEWEB)

    Pasztor, J. (United Nations Environment Programme, Nairobi (Kenya))

    1990-01-01

    All energy systems have adverse as well as beneficial impacts on the environment. They vary in quality and quantity, in time and in space. Environmentally sensitive energy management tries to minimize the adverse impacts in an equitable manner between different groups in the most cost-effective ways. Many of the environmental impacts of energy continue to be externalized. Consequently, those energy systems that can externalize their impacts more easily are favoured, while others remain relatively expensive. The lack of full integration of environmental factors into energy policy and planning is the overriding problem to be resolved before a transition towards sustainable energy futures can take place. The most pressing problem in the developing countries relates to the unsustainable and inefficient use of biomass resources, while in the industrialized countries, the major energy-environment problems arise out of the continued intensive use of fossil fuel resources. Both of these resource issues have their role to play in climate change. Although there has been considerable improvement in pollution control in a number of situations, most of the adverse impacts will undoubtedly increase in the future. Population growth will lead to increased demand, and there will also be greater use of lower grade fuels. Climate change and the crisis in the biomass resource base in the developing countries are the most critical energy-environment issues to be resolved in the immediate future. In both cases, international cooperation is an essential requirement for successful resolution. 26 refs.

  11. Past successes and future challenges: Improving the urban environment

    Energy Technology Data Exchange (ETDEWEB)

    Gade, M.

    1994-12-31

    The author discusses issues related to the Chicago urban environment from her perspective in the Illinois Environmental Protection Agency. Understanding of the ozone air pollution problem in the Chicago area has undergone significant changes in the past three years, and there is still more to be understood about the complex factors which contribute to ozone pollution over urban areas such as Chicago. The ability to address these problems so as to meet present clean air standards is not yet in hand. The author asserts that information, and the ability of governmental agencies to ingest and respond to that information in a timely manner, is a key to improving the environment in urban areas within reasonable time spans. In addition, cost and price information on environmental control and protection needs to be more clearly presented to the people so they can understand the difficult choices which must be made in addressing these environmental problems.

  12. Past successes and future challenges: Improving the urban environment

    International Nuclear Information System (INIS)

    Gade, M.

    1994-01-01

    The author discusses issues related to the Chicago urban environment from her perspective in the Illinois Environmental Protection Agency. Understanding of the ozone air pollution problem in the Chicago area has undergone significant changes in the past three years, and there is still more to be understood about the complex factors which contribute to ozone pollution over urban areas such as Chicago. The ability to address these problems so as to meet present clean air standards is not yet in hand. The author asserts that information, and the ability of governmental agencies to ingest and respond to that information in a timely manner, is a key to improving the environment in urban areas within reasonable time spans. In addition, cost and price information on environmental control and protection needs to be more clearly presented to the people so they can understand the difficult choices which must be made in addressing these environmental problems

  13. Magnetic fusion energy and computers. The role of computing in magnetic fusion energy research and development (second edition)

    International Nuclear Information System (INIS)

    1983-01-01

    This report documents the structure and uses of the MFE Network and presents a compilation of future computing requirements. Its primary emphasis is on the role of supercomputers in fusion research. One of its key findings is that with the introduction of each successive class of supercomputer, qualitatively improved understanding of fusion processes has been gained. At the same time, even the current Class VI machines severely limit the attainable realism of computer models. Many important problems will require the introduction of Class VII or even larger machines before they can be successfully attacked

  14. Summaries of research and development activities by using supercomputer system of JAEA in FY2015. April 1, 2015 - March 31, 2016

    International Nuclear Information System (INIS)

    2017-01-01

    Japan Atomic Energy Agency (JAEA) conducts research and development (R and D) in various fields related to nuclear power as a comprehensive institution of nuclear energy R and Ds, and utilizes computational science and technology in many activities. As shown in the fact that about 20 percent of papers published by JAEA are concerned with R and D using computational science, the supercomputer system of JAEA has become an important infrastructure to support computational science and technology. In FY2015, the system was used for R and D aiming to restore Fukushima (nuclear plant decommissioning and environmental restoration) as a priority issue, as well as for JAEA's major projects such as Fast Reactor Cycle System, Fusion R and D and Quantum Beam Science. This report presents a great number of R and D results accomplished by using the system in FY2015, as well as user support, operational records and overviews of the system, and so on. (author)

  15. Summaries of research and development activities by using supercomputer system of JAEA in FY2014. April 1, 2014 - March 31, 2015

    International Nuclear Information System (INIS)

    2016-02-01

    Japan Atomic Energy Agency (JAEA) conducts research and development (R and D) in various fields related to nuclear power as a comprehensive institution of nuclear energy R and Ds, and utilizes computational science and technology in many activities. As shown in the fact that about 20 percent of papers published by JAEA are concerned with R and D using computational science, the supercomputer system of JAEA has become an important infrastructure to support computational science and technology. In FY2014, the system was used for R and D aiming to restore Fukushima (nuclear plant decommissioning and environmental restoration) as a priority issue, as well as for JAEA's major projects such as Fast Reactor Cycle System, Fusion R and D and Quantum Beam Science. This report presents a great number of R and D results accomplished by using the system in FY2014, as well as user support, operational records and overviews of the system, and so on. (author)

  16. Summaries of research and development activities by using supercomputer system of JAEA in FY2013. April 1, 2013 - March 31, 2014

    International Nuclear Information System (INIS)

    2015-02-01

    Japan Atomic Energy Agency (JAEA) conducts research and development (R and D) in various fields related to nuclear power as a comprehensive institution of nuclear energy R and Ds, and utilizes computational science and technology in many activities. About 20 percent of papers published by JAEA are concerned with R and D using computational science, and the supercomputer system of JAEA has become an important infrastructure to support computational science and technology utilization. In FY2013, the system was used not only for JAEA's major projects such as Fast Reactor Cycle System, Fusion R and D and Quantum Beam Science, but also for R and D aiming to restore Fukushima (nuclear plant decommissioning and environmental restoration) as a priority issue. This report presents a great amount of R and D results accomplished by using the system in FY2013, as well as user support, operational records and overviews of the system, and so on. (author)

  17. Summaries of research and development activities by using supercomputer system of JAEA in FY2012. April 1, 2012 - March 31, 2013

    International Nuclear Information System (INIS)

    2014-01-01

    Japan Atomic Energy Agency (JAEA) conducts research and development (R and D) in various fields related to nuclear power as a comprehensive institution of nuclear energy R and Ds, and utilizes computational science and technology in many activities. As more than 20 percent of papers published by JAEA are concerned with R and D using computational science, the supercomputer system of JAEA has become an important infrastructure to support computational science and technology utilization. In FY2012, the system was used not only for JAEA's major projects such as Fast Reactor Cycle System, Fusion R and D and Quantum Beam Science, but also for R and D aiming to restore Fukushima (nuclear plant decommissioning and environmental restoration) as a priority issue. This report presents a great amount of R and D results accomplished by using the system in FY2012, as well as user support, operational records and overviews of the system, and so on. (author)

  18. Summaries of research and development activities by using supercomputer system of JAEA in FY2011. April 1, 2011 - March 31, 2012

    International Nuclear Information System (INIS)

    2013-01-01

    Japan Atomic Energy Agency (JAEA) conducts research and development (R and D) in various fields related to nuclear power as a comprehensive institution of nuclear energy R and Ds, and utilizes computational science and technology in many activities. As more than 20 percent of papers published by JAEA are concerned with R and D using computational science, the supercomputer system of JAEA has become an important infrastructure to support computational science and technology utilization. In FY2011, the system was used for analyses of the accident at the Fukushima Daiichi Nuclear Power Station and establishment of radioactive decontamination plan, as well as the JAEA's major projects such as Fast Reactor Cycle System, Fusion R and D and Quantum Beam Science. This report presents a great amount of R and D results accomplished by using the system in FY2011, as well as user support structure, operational records and overviews of the system, and so on. (author)

  19. Monitoring the Environments We Depend On

    International Nuclear Information System (INIS)

    Madsen, Michael

    2013-01-01

    Our overuse of natural resources, pollution and climate change are weakening natural systems’ ability to adapt to ever more sources of stress. The varied environments of our planet are interconnected and the pollution of one has ramifications across all. It is thus important to monitor the health of our environment to ensure a sustainable future. The IAEA, through its Environment Laboratories, Water Resource Programme, and technical cooperation programme, applies unique, versatile and cost-effective isotopic and nuclear techniques to understand many of the key environmental mechanisms needed to ensure a sustainable future. These monitoring systems help Member States make ecologically-responsible and scientifically-grounded development decisions

  20. The QCDOC Project

    International Nuclear Information System (INIS)

    Boyle, P.; Chen, D.; Christ, N.; Clark, M.; Cohen, S.; Cristian, C.; Dong, Z.; Gara, A.; Joo, B.; Jung, C.; Kim, C.; Levkova, L.; Liao, X.; Liu, G.; Li, S.; Lin, H.; Mawhinney, R.; Ohta, S.; Petrov, K.; Wettig, T.; Yamaguchi, A.

    2005-01-01

    The QCDOC project has developed a supercomputer optimised for the needs of Lattice QCD simulations. It provides a very competitive price-to-sustained-performance ratio of around $1 USD per sustained Megaflop/s in combination with outstanding scalability. Thus very large systems delivering over 5 TFlop/s of performance on the evolution of a single lattice are possible. Large prototypes have been built and are functioning correctly. The software environment raises the state of the art in such custom supercomputers. It is based on a lean custom node operating system that eliminates many unnecessary overheads that plague other systems. Despite the custom nature, the operating system implements a standards-compliant UNIX-like programming environment easing the porting of software from other systems. The SciDAC QMP interface adds internode communication in a fashion that provides a uniform cross-platform programming environment

  1. Getting To Exascale: Applying Novel Parallel Programming Models To Lab Applications For The Next Generation Of Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Dube, Evi [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Shereda, Charles [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Nau, Lee [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Harris, Lance [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2010-09-27

    As supercomputing moves toward exascale, node architectures will change significantly. CPU core counts on nodes will increase by an order of magnitude or more. Heterogeneous architectures will become more commonplace, with GPUs or FPGAs providing additional computational power. Novel programming models may make better use of on-node parallelism in these new architectures than do current models. In this paper we examine several of these novel models – UPC, CUDA, and OpenCL – to determine their suitability to LLNL scientific application codes. Our study consisted of several phases: we conducted interviews with code teams and selected two codes to port; we learned how to program in the new models and ported the codes; we debugged and tuned the ported applications; we measured results, and documented our findings. We conclude that UPC is a challenge for porting code, Berkeley UPC is not very robust, and UPC is not suitable as a general alternative to OpenMP for a number of reasons. CUDA is well supported and robust but is a proprietary NVIDIA standard, while OpenCL is an open standard. Both are well suited to a specific set of application problems that can be run on GPUs, but some problems are not suited to GPUs. Further study of the landscape of novel models is recommended.

  2. Nature, nurture, and capital punishment: How evidence of a genetic-environment interaction, future dangerousness, and deliberation affect sentencing decisions.

    Science.gov (United States)

    Gordon, Natalie; Greene, Edie

    2018-01-01

    Research has shown that the low-activity MAOA genotype in conjunction with a history of childhood maltreatment increases the likelihood of violent behaviors. This genetic-environment (G × E) interaction has been introduced as mitigation during the sentencing phase of capital trials, yet there is scant data on its effectiveness. This study addressed that issue. In a factorial design that varied mitigating evidence offered by the defense [environmental (i.e., childhood maltreatment), genetic, G × E, or none] and the likelihood of the defendant's future dangerousness (low or high), 600 mock jurors read sentencing phase evidence in a capital murder trial, rendered individual verdicts, and half deliberated as members of a jury to decide a sentence of death or life imprisonment. The G × E evidence had little mitigating effect on sentencing preferences: participants who received the G × E evidence were no less likely to sentence the defendant to death than those who received evidence of childhood maltreatment or a control group that received neither genetic nor maltreatment evidence. Participants with evidence of a G × E interaction were more likely to sentence the defendant to death when there was a high risk of future dangerousness than when there was a low risk. Sentencing preferences were more lenient after deliberation than before. We discuss limitations and future directions. Copyright © 2017 John Wiley & Sons, Ltd.

  3. A Scheduling-Based Framework for Efficient Massively Parallel Execution, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — The barrier to entry for creating efficient, scalable applications for heterogeneous supercomputing environments is too high. EM Photonics has found that the majority of...

  4. Games and Entertainment in Ambient Intelligence Environments

    NARCIS (Netherlands)

    Nijholt, Antinus; Reidsma, Dennis; Poppe, Ronald Walter; Aghajan, H.; López-Cózar Delgado, R.; Augusto, J.C.

    2009-01-01

    In future ambient intelligence (AmI) environments we assume intelligence embedded in the environment and its objects (floors, furniture, mobile robots). These environments support their human inhabitants in their activities and interactions by perceiving them through sensors (proximity sensors,

  5. Denmark's energy futures

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-06-01

    The stated aim of the document published by the Danish Ministry of Environment and Energy and the Danish Energy Agency is that it should form the basis for a broad public debate on the country's future energy policy. The report has four main objectives: 1. To describe, with emphasis on the environment and the market, challenges that the energy sector will have to face in the future. 2. To illustrate the potentials for saving energy and for utilising energy sources and supply systems. 3. To present two scenarios of extreme developmental positions; the first where maximum effort is expended on increasing energy efficiency and the utilization of renewable energy, and the second where no new initiative is taken and change occurs only when progress in available technology is exploited; and 4. To raise a number of questions about our future way of living. Following the extensive summary, detailed information is given under the headings of: Challenges of the energy sector, Energy consumption and conservation, Energy consumption in the transport sector, Energy resources, Energy supply and production, Development scenario, and Elements of Strategy. The text is illustrated with maps, graphs and coloured photographs etc. (AB)

  6. All Possible Wars? Toward a Consensus View of the Future Security Environment, 2001-2025

    Science.gov (United States)

    2000-01-01

    technology that the truly unanticipated seems to be crowded out. Predictions from “our future as post-modern cyborgs” to “the future of God,” would... Hables Grey, “Our Future as Post-Modern Cyborgs,” in Didsbury, 20–40, and Robert B. Mellert, “The Future of God,” in Didsbury, 76–82. 305 See discussion in

  7. The ultraviolet environment of Mars: biological implications past, present, and future.

    Science.gov (United States)

    Cockell, C S; Catling, D C; Davis, W L; Snook, K; Kepner, R L; Lee, P; McKay, C P

    2000-08-01

    A radiative transfer model is used to quantitatively investigate aspects of the martian ultraviolet radiation environment, past and present. Biological action spectra for DNA inactivation and chloroplast (photosystem) inhibition are used to estimate biologically effective irradiances for the martian surface under cloudless skies. Over time Mars has probably experienced an increasingly inhospitable photobiological environment, with present instantaneous DNA weighted irradiances 3.5-fold higher than they may have been on early Mars. This is in contrast to the surface of Earth, which experienced an ozone amelioration of the photobiological environment during the Proterozoic and now has DNA weighted irradiances almost three orders of magnitude lower than early Earth. Although the present-day martian UV flux is similar to that of early Earth and thus may not be a critical limitation to life in the evolutionary context, it is a constraint to an unadapted biota and will rapidly kill spacecraft-borne microbes not covered by a martian dust layer. Microbial strategies for protection against UV radiation are considered in the light of martian photobiological calculations, past and present. Data are also presented for the effects of hypothetical planetary atmospheric manipulations on the martian UV radiation environment with estimates of the biological consequences of such manipulations.

  8. The ultraviolet environment of Mars: biological implications past, present, and future

    Science.gov (United States)

    Cockell, C. S.; Catling, D. C.; Davis, W. L.; Snook, K.; Kepner, R. L.; Lee, P.; McKay, C. P.

    2000-01-01

    A radiative transfer model is used to quantitatively investigate aspects of the martian ultraviolet radiation environment, past and present. Biological action spectra for DNA inactivation and chloroplast (photosystem) inhibition are used to estimate biologically effective irradiances for the martian surface under cloudless skies. Over time Mars has probably experienced an increasingly inhospitable photobiological environment, with present instantaneous DNA weighted irradiances 3.5-fold higher than they may have been on early Mars. This is in contrast to the surface of Earth, which experienced an ozone amelioration of the photobiological environment during the Proterozoic and now has DNA weighted irradiances almost three orders of magnitude lower than early Earth. Although the present-day martian UV flux is similar to that of early Earth and thus may not be a critical limitation to life in the evolutionary context, it is a constraint to an unadapted biota and will rapidly kill spacecraft-borne microbes not covered by a martian dust layer. Microbial strategies for protection against UV radiation are considered in the light of martian photobiological calculations, past and present. Data are also presented for the effects of hypothetical planetary atmospheric manipulations on the martian UV radiation environment with estimates of the biological consequences of such manipulations.

  9. Capabilities of Future Training Support Packages

    National Research Council Canada - National Science Library

    Burnside, Billy

    2004-01-01

    .... This report identifies and analyzes five key capabilities needed in future TSPs: rapid tailoring or modification, reach, simulated operating environment, performance measurement, and pretests/selection criteria...

  10. IBM PC enhances the world's future

    Science.gov (United States)

    Cox, Jozelle

    1988-01-01

    Although the purpose of this research is to illustrate the importance of computers to the public, particularly the IBM PC, present examinations will include computers developed before the IBM PC was brought into use. IBM, as well as other computing facilities, began serving the public years ago and is continuing to find ways to enhance the existence of man. With new developments in supercomputers like the Cray-2, and the recent advances in artificial intelligence programming, the human race is gaining knowledge at a rapid pace. All have benefited from the development of computers in the world; not only have they brought new assets to life, but they have made life more and more of a challenge every day.

  11. Cities and environment. Indicators of environmental performance in the 'Cities of the future'; Byer og miljoe : indikatorer for miljoeutviklingen i 'Framtidens byer'

    Energy Technology Data Exchange (ETDEWEB)

    Haagensen, Trine

    2012-07-15

    This report contains selected indicators and statistics that describe the urban environmental status and development in 13 of the largest municipalities in Norway. These cities are part of the program 'Cities of the Future' agreed upon between 13 cities, the private sector and the state, led by the Ministry of the Environment. Cities of the Future had about 1.7 million inhabitants (as of 1 January 2010), equivalent to about 1/3 of the population in Norway. In 2009 the population growth in these municipalities was about 49 per cent of the total population growth. Some of the greatest challenges in combining urban development with environmental considerations are therefore found here. White paper no. 26 (2006-2007), The government's environmental policy and the state of the environment in Norway, has also underlined the importance of the urban environment with a comprehensive description of land use and transport policy. Good land use management contains indicators related to the density of land use and construction activities within urban settlements. Within urban settlements, the area per inhabitant decreased both within the Cities of the Future and in all municipalities in Norway (2000-2009). The coalescing within the urban settlements decreased per inhabitant (2004-2009), which means that new buildings have been built outside already established urban settlements in this period. Too high a density of built-up areas may come at the expense of access to playgrounds, recreational areas or touring grounds; indicators of the population's access to these areas show that there has been a reduction in access in the Cities of the Future as well as in the municipalities in Norway. Within transport, the focus is on the degree to which the inhabitants choose to use environmentally-friendly transportation instead of cars. Only Oslo has more than 50 per cent of daily travel by environmentally-friendly transportation. Among the Cities of the Future, the use of

  12. Preservation of Near-Earth Space for Future Generations

    Science.gov (United States)

    Simpson, John A.

    2007-05-01

    List of contributors; Preface; Part I. Introduction: 1. Introduction J. A. Simpson; Part II. Defining the Problem: 2. The Earth satellite population: official growth and constituents Nicholas L. Johnson; 3. The current and future environment: an overall assessment Donald J. Kessler; 4. The current and future space debris environment as assessed in Europe Dietrich Rex; 5. Human survivability issues in the low Earth orbit space debris environment Bernard Bloom; 6. Protecting the space environment for astronomy Joel R. Primack; 7. Effects of space debris on commercial spacecraft - the RADARSAT example H. Robert Warren and M. J. Yelle; 8. Potential effects of the space debris environment on military space systems Albert E. Reinhardt; Part III. Mitigation of and Adaptation to the Space Environment: Techniques and Practices: 9. Precluding post-launch fragmentation of delta stages Irvin J. Webster and T. Y. Kawamura; 10. US international and interagency cooperation in orbital debris Daniel V. Jacobs; 11. ESA concepts for space debris mitigation and risk reduction Heiner Klinkrad; 12. Space debris: how France handles mitigation and adaptation Jean-Louis Marcé; 13. Facing seriously the issue of protection of the outer space environment Qi Yong Liang; 14. Space debris - mitigation and adaptation U. R. Rao; 15. Near Earth space contamination and counteractions Vladimir F. Utkin and S. V. Chekalin; 16. The current and future space debris environment as assessed in Japan Susumu Toda; 17. Orbital debris minimization and mitigation techniques Joseph P. Loftus Jr, Philip D. Anz-Meador and Robert Reynolds; Part IV. Economic Issues: 18. In pursuit of a sustainable space environment: economic issues in regulating space debris Molly K. Macauley; 19. The economics of space operations: insurance aspects Christopher T. W. Kunstadter; Part V. Legal Issues: 20. Environmental treatymaking: lessons learned for controlling pollution of outer space Winfried Lang; 21. Regulation of orbital

  13. Mining the Home Environment.

    Science.gov (United States)

    Cook, Diane J; Krishnan, Narayanan

    2014-12-01

    Individuals spend a majority of their time in their home or workplace and for many, these places are our sanctuaries. As society and technology advance there is a growing interest in improving the intelligence of the environments in which we live and work. By filling home environments with sensors and collecting data during daily routines, researchers can gain insights on human daily behavior and the impact of behavior on the residents and their environments. In this article we provide an overview of the data mining opportunities and challenges that smart environments provide for researchers and offer some suggestions for future work in this area.

  14. Mining the Home Environment

    Science.gov (United States)

    Cook, Diane J.; Krishnan, Narayanan

    2014-01-01

    Individuals spend a majority of their time in their home or workplace and for many, these places are our sanctuaries. As society and technology advance there is a growing interest in improving the intelligence of the environments in which we live and work. By filling home environments with sensors and collecting data during daily routines, researchers can gain insights on human daily behavior and the impact of behavior on the residents and their environments. In this article we provide an overview of the data mining opportunities and challenges that smart environments provide for researchers and offer some suggestions for future work in this area. PMID:25506128

  15. The future of energy

    CERN Document Server

    Towler, Brian F

    2014-01-01

    Using the principle that extracting energy from the environment always involves some type of impact on the environment, The Future of Energy discusses the sources, technologies, and tradeoffs involved in meeting the world's energy needs. A historical, scientific, and technical background set the stage for discussions on a wide range of energy sources, including conventional fossil fuels like oil, gas, and coal, as well as emerging renewable sources like solar, wind, geothermal, and biofuels. Readers will learn that there are no truly ""green"" energy sources-all energy usage involves some trad

  16. The Millennial Generation: Developing Leaders for the Future Security Environment

    Science.gov (United States)

    2011-02-15

    While Millennials possess a number of admirable and positive traits that posture them well for the future, there are also some challenges with this...why the military isn't producing more of them. The article concluded that the most beneficial experiences were “sustained international experience

  17. Future Earth Health Knowledge-Action Network.

    Science.gov (United States)

    Shrivastava, Paul; Raivio, Kari; Kasuga, Fumiko; Tewksbury, Joshua; Haines, Andy; Daszak, Peter

    Future Earth is an international research platform providing the knowledge and support to accelerate our transformations to a sustainable world. Future Earth 2025 Vision identified eight key focal challenges, and challenge #6 is to "Improve human health by elucidating, and finding responses to, the complex interactions amongst environmental change, pollution, pathogens, disease vectors, ecosystem services, and people's livelihoods, nutrition and well-being." Several studies, including the Rockefeller Foundation/Lancet Planetary Health Commission Report of 2015, the World Health Organization/Convention on Biological Diversity report and those by oneHEALTH (formerly ecoHEALTH), have been conducted over the last 30 years. Knowledge-Action Networks (KANs) are the frameworks to apply Future Earth principles of research to related activities that respond to societal challenges. The Future Earth Health Knowledge-Action Network will connect health researchers with other natural and social scientists, health and environmental policy professionals and leaders in government, the private sector and civil society to provide research-based solutions based on a better, integrated understanding of the complex interactions between a changing global environment and human health. It will build regional capacity to enhance resilience, protect the environment and avert serious threats to health, and will also contribute to achieving the Sustainable Development Goals. In addition to the initial partners, the Future Earth Health Knowledge-Action Network will further nourish collaboration with other ongoing, leading research programmes outside Future Earth by encouraging their active participation.

  18. 10. Symposium energy and environment - responsibility for the future

    International Nuclear Information System (INIS)

    2003-01-01

    The symposium discussed important aspects relating to the subject of energy and environment. The detailed and well-founded lectures and statements were received with great interest by the 120 attendants. The discussion focused on problems of power generation and consumption, increased shares of renewable energy sources, and ethical and theological questions. The symposium received funds from the Deutsche Bundesstiftung Umwelt and was well accepted by the press.

  19. Radiation Environments for Future Human Exploration Throughout the Solar System.

    Science.gov (United States)

    Schwadron, N.; Gorby, M.; Linker, J.; Riley, P.; Torok, T.; Downs, C.; Spence, H. E.; Desai, M. I.; Mikic, Z.; Joyce, C. J.; Kozarev, K. A.; Townsend, L. W.; Wimmer-Schweingruber, R. F.

    2016-12-01

    Acute space radiation hazards pose one of the most serious risks to future human and robotic exploration. The ability to predict when and where large events will occur is necessary in order to mitigate their hazards. The largest events are usually associated with complex sunspot groups (also known as active regions) that harbor strong, stressed magnetic fields. Highly energetic protons accelerated very low in the corona by the passage of coronal mass ejection (CME)-driven compressions or shocks and from flares travel near the speed of light, arriving at Earth minutes after the eruptive event. Whether these particles actually reach Earth, the Moon, Mars (or any other point) depends on their transport in the interplanetary magnetic field and their magnetic connection to the shock. Recent contemporaneous observations during the largest events in almost a decade show the unique longitudinal distributions of this ionizing radiation broadly distributed from sources near the Sun and yet highly isolated during the passage of CME shocks. Over the last decade, we have observed space weather events as the solar wind exhibits extremely low densities and magnetic field strengths, representing states that have never been observed during the space age. The highly abnormal solar activity during cycles 23 and 24 has caused the longest solar minimum in over 80 years and continues into the unusually small solar maximum of cycle 24. As a result of the remarkably weak solar activity, we have also observed the highest fluxes of galactic cosmic rays in the space age and relatively small particle radiation events. We have used observations from LRO/CRaTER to examine the implications of these highly unusual solar conditions for human space exploration throughout the inner solar system. While these conditions are not a show-stopper for long-duration missions (e.g., to the Moon, an asteroid, or Mars), galactic cosmic ray radiation remains a significant and worsening factor that limits

  20. Future internet architecture and cloud ecosystem: A survey

    Science.gov (United States)

    Wan, Man; Yin, Shiqun

    2018-04-01

    The Internet has gradually become a social infrastructure, and the existing TCP/IP architecture faces many challenges, so future Internet architecture has become a hot research topic. This paper introduces two lines of thought on future research into Internet architecture and probes into future Internet architecture and the cloud ecosystem environment. Finally, we focus on the related research and discuss basic principles and problems of OpenStack.

  1. Future Educators' Explaining Voices

    Science.gov (United States)

    de Oliveira, Janaina Minelli; Caballero, Pablo Buenestado; Camacho, Mar

    2013-01-01

    Teacher education programs must offer pre-service students innovative technology-supported learning environments, guiding them in the revision of their preconceptions on literacy and technology. This paper presents a case study that uses podcasts to inquire into future educators' views on technology and the digital age. Results show future…

  2. The COURAGE Built Environment Outdoor Checklist: an objective built environment instrument to investigate the impact of the environment on health and disability.

    Science.gov (United States)

    Quintas, Rui; Raggi, Alberto; Bucciarelli, Paola; Franco, Maria Grazia; Andreotti, Alessandra; Caballero, Francisco Félix; Olaya, Beatriz; Chatterji, Somnath; Galas, Aleksander; Meriläinen-Porras, Satu; Frisoni, Giovanni; Russo, Emanuela; Minicuci, Nadia; Power, Mick; Leonardi, Matilde

    2014-01-01

    A tool to assess the built environment, which takes into account issues of disability, accessibility and the need for data comparable across countries and populations, is much needed. The Collaborative Research on Ageing in Europe (COURAGE in Europe) Built Environment Outdoor Checklist (CBE-OUT) helps us to understand when features of the neighbourhood environment have either a positive or a negative impact on the accessibility of neighbourhoods for healthy ageing. The CBE-OUT is composed of 128 items that can be recorded when present in the evaluated environment. Audits were performed in households randomly selected from each cluster of the sample for Finland, Poland and Spain, following precise rules defined by experts. Global scores were computed both section by section and for the overall checklist, rescaling the resulting scores from 0 (negative environment) to 100 (positive). The total number of completed CBE-OUT checklists was 2452 (Finland, 245; Poland, 972; and Spain, 1235). The mean global score for our sample is 49.3, suggesting an environment composed of both facilitating and hindering features. Significant differences were observed in the built environment features of the three countries, and in particular between Finland and the other two. The assessment of features of the built environment is crucial when thinking about ageing and enhanced participation. The COURAGE in Europe project developed this tool to collect information on the built environment through an objective evaluation of environmental features, and it is a recommended methodology for future studies. The CBE-OUT checklist is an objective evaluation of the built environment, centred on technical measurement of features present in the environment, and has its foundations in the concepts of disability and accessibility operating in the International Classification of Functioning, Disability and Health (ICF) model. The CBE-OUT checklist can be analysed using both the total score and the single section score

  3. Measured and Modeled Downwelling Far-Infrared Radiances in Very Dry Environments and Calibration Requirements for Future Experiments

    Science.gov (United States)

    Mast, J. C.; Mlynczak, M. G.; Cageao, R.; Kratz, D. P.; Latvakoski, H.; Johnson, D. G.; Mlawer, E. J.; Turner, D. D.

    2016-12-01

    Downwelling radiances measured by the Far-Infrared Spectroscopy of the Troposphere (FIRST) instrument in an environment with integrated precipitable water as low as 0.03 cm are compared with calculated spectra in the far-infrared and mid-infrared. In its current ground-based configuration FIRST was deployed to 5.38 km on Cerro Toco, a mountain in the Atacama Desert of Chile, from August to October 2009. There FIRST took part in the Radiative Heating in Unexplored Bands Campaign Part 2. Water vapor and temperature profiles from an optimal-estimation-based physical retrieval algorithm (using simultaneous radiosonde and multichannel 183 GHz microwave radiometer measurements) are input to the AER Line-by-Line Radiative Transfer Model (LBLRTM) to compute radiances for comparison with FIRST. The AER v3.4 line parameter database is used. The low water vapor amounts and relatively cold atmosphere result in extremely small far-IR radiances (1.5 mW/m2/sr/cm-1) with corresponding brightness temperatures of 120 K. The residual LBLRTM minus FIRST is calculated to assess agreement between the measured and modeled spectra. Uncertainties in both the measured and modeled radiances are accounted for in the comparison. A goal of the deployment and subsequent analysis is the assessment of water vapor spectroscopy in the far-infrared and mid-infrared. While agreement is found between measured and modeled radiances within the combined uncertainties across all spectra, uncertainties in the measured water vapor profiles and from the laboratory calibration exceed those associated with water vapor spectroscopy in this very low radiance environment. Consequently, no improvement in water vapor spectroscopy is afforded by these measurements. However, we use these results to place requirements on instrument calibration accuracy and water vapor profile accuracy for future campaigns to similarly dry environments. Instrument calibration uncertainty needs to be at 2% (1-sigma) of measured radiance
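    A minimal sketch of the residual test described above follows: agreement is declared wherever the model-minus-measurement residual falls within the combined 1-sigma uncertainties. All numbers are placeholders, not values from the FIRST deployment.

    # Hedged sketch of a measured-versus-modeled radiance comparison.
    # The radiances and uncertainty terms below are placeholders only.
    import numpy as np

    wavenumber = np.linspace(200.0, 800.0, 5)          # cm^-1, arbitrary grid
    measured   = np.array([1.5, 2.1, 3.0, 4.2, 5.0])   # mW / (m^2 sr cm^-1)
    modeled    = np.array([1.6, 2.0, 3.3, 4.1, 5.4])

    sigma_meas  = 0.02 * measured + 0.05                     # placeholder calibration term
    sigma_model = np.array([0.10, 0.10, 0.15, 0.15, 0.20])   # placeholder profile/spectroscopy term

    residual = modeled - measured
    combined = np.sqrt(sigma_meas**2 + sigma_model**2)  # uncorrelated error propagation

    for wn, r, c in zip(wavenumber, residual, combined):
        status = "consistent" if abs(r) <= c else "discrepant"
        print(f"{wn:6.1f} cm^-1  residual {r:+.2f}  combined 1-sigma {c:.2f}  -> {status}")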

  4. Future trends in reprocessing

    International Nuclear Information System (INIS)

    Rouyer, H.

    1994-01-01

    This paper about future trends in reprocessing essentially reflects French experience and points of view, as an example of countries which, like England and Japan, consider that reprocessing is the best solution for the back end of the fuel cycle. In order to know what the future will be, it is necessary to look back at the past and try to find what have been the main reasons for evolution in that period. For reprocessing, it appears that these motivations have been 'safety and economics'. They will remain the motivations for the future. In addition, new motivations for development are starting to appear which are still imprecise but can be expressed as follows: 'what guarantees will public opinion require in order to be convinced that the solutions for waste management proposed by specialists will ensure that a healthy environment is preserved for the use of future generations'. Consequently the paper examines successively the evolution of reprocessing in the recent past, what the immediate future could be, and finally what should be necessary in the long term. (Author)

  5. Dual Axis Controller for Extreme Environments, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — The Dual Axis Controller for Extreme Environments (DACEE) addresses a critical need of NASA's future exploration plans to investigate extreme environments within our...

  6. Human-Automation Cooperation for Separation Assurance in Future NextGen Environments

    Science.gov (United States)

    Mercer, Joey; Homola, Jeffrey; Cabrall, Christopher; Martin, Lynne; Morey, Susan; Gomez, Ashley; Prevot, Thomas

    2014-01-01

    A 2012 Human-In-The-Loop air traffic control simulation investigated a gradual paradigm shift in the allocation of functions between operators and automation. Air traffic controllers staffed five adjacent high-altitude en route sectors and, during the course of a two-week experiment, worked traffic under different function-allocation approaches aligned with four increasingly mature NextGen operational environments. These NextGen time-frames ranged from near current-day operations to nearly fully-automated control, in which the ground system's automation was responsible for detecting conflicts, issuing strategic and tactical resolutions, and alerting the controller to exceptional circumstances. Results indicate that overall performance was best in the most automated NextGen environment. Safe operations were achieved in this environment for twice today's peak airspace capacity, while being rated by the controllers as highly acceptable. However, results show that sector operations were not always safe; separation violations did in fact occur. This paper describes the simulation in detail, as well as discussing important results and their implications.

  7. Application experiences with the Globus toolkit.

    Energy Technology Data Exchange (ETDEWEB)

    Brunett, S.

    1998-06-09

    The Globus grid toolkit is a collection of software components designed to support the development of applications for high-performance distributed computing environments, or 'computational grids' [14]. The Globus toolkit is an implementation of a 'bag of services' architecture, which provides application and tool developers not with a monolithic system but rather with a set of stand-alone services. Each Globus component provides a basic service, such as authentication, resource allocation, information, communication, fault detection, or remote data access. Different applications and tools can combine these services in different ways to construct 'grid-enabled' systems. The Globus toolkit has been used to construct the Globus Ubiquitous Supercomputing Testbed, or GUSTO: a large-scale testbed spanning 20 sites and including over 4000 compute nodes for a total compute power of over 2 TFLOPS. Over the past six months, we and others have used this testbed to conduct a variety of application experiments, including multi-user collaborative environments (tele-immersion), computational steering, distributed supercomputing, and high throughput computing. The goal of this paper is to review what has been learned from these experiments regarding the effectiveness of the toolkit approach. To this end, we describe two of the application experiments in detail, noting what worked well and what worked less well. The two applications are a distributed supercomputing application, SF-Express, in which multiple supercomputers are harnessed to perform large distributed interactive simulations; and a tele-immersion application, CAVERNsoft, in which the focus is on connecting multiple people to a distributed simulated world.
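    A purely illustrative sketch of the 'bag of services' composition idea follows. The class and method names are hypothetical and are not the Globus toolkit API; the point is only that an application assembles stand-alone services rather than inheriting a monolithic system.

    # Purely illustrative composition of independent grid services in the spirit
    # of the "bag of services" idea. These names are hypothetical, NOT Globus APIs.
    from dataclasses import dataclass

    @dataclass
    class AuthService:
        def credential(self, user: str) -> str:
            return f"proxy-cred-for-{user}"          # stand-in for an authentication service

    @dataclass
    class ResourceService:
        def allocate(self, nodes: int, cred: str) -> str:
            return f"job-handle({nodes} nodes, {cred})"

    @dataclass
    class DataService:
        def stage_in(self, url: str, cred: str) -> str:
            return f"local-copy-of-{url}"

    def run_grid_job(user: str, nodes: int, dataset: str) -> str:
        """An application assembles only the services it needs."""
        auth, resources, data = AuthService(), ResourceService(), DataService()
        cred = auth.credential(user)
        staged = data.stage_in(dataset, cred)
        handle = resources.allocate(nodes, cred)
        return f"submitted {handle} with input {staged}"

    print(run_grid_job("alice", 128, "gsiftp://example.org/input.dat"))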

  8. RIIHI. Radical innovations for combatting climate change. Results from Futures Clinique by the Ministry of the Environment; RIIHI. Radikaalit innovaatiot ilmastonmuutoksen hillitsemiseksi. RIIHI-tulevaisuusklinikan tulokset

    Energy Technology Data Exchange (ETDEWEB)

    Heinonen, S.; Keskinen, A.; Ruotsalainen, J.

    2011-07-01

    This report presents the starting points, implementation and results of the Futures Clinique commissioned by the Ministry of the Environment and conducted by the Finland Futures Research Centre. The theme of the clinique was 'Radical innovations to constrain climate change by the year 2050'. The focus was set on households, since attention has previously been on industry and production. The time frame from the present to 2050 was chosen because achieving the goals set by the EU's climate policy requires emission cuts of 80% by the year 2050. The carbon footprint of households was examined from the perspectives of energy; traffic; food and water; and leisure, entertainment and communications. As desirable goals regarding these aspects, the participants studied economic efficiency, sustainability and durability, safety and security, healthiness and comfortable living. The groups also discussed converging NBIC technologies (nano, bio, information and cognitive technologies) that could enable these goals. Following the concept of the Futures Clinique, the work was conducted in groups, in which topics attuned by pre-tasks done by the participants were worked on using the Futures Wheel and Futures Table methods. Each group's work resulted in a variety of radical innovations, both technological and socio-cultural. (orig.)

  9. Scoping the future: a model for integrating learning environments

    OpenAIRE

    Honeychurch, Sarah; Barr, Niall

    2013-01-01

    The Virtual Learning Environment (VLE) has become synonymous with online learning in HE. However, with the rise of Web 2.0 technologies, social networking tools and cloud computing, the architecture of the current VLEs is increasingly anachronistic. This paper suggests an alternative to the traditional VLE: one which allows for flexibility and adaptation to the needs of individual teachers, while remaining resilient and providing students with a seamless experience. We present a prototype of our vi...

  10. Dynamic Optical Networks for Future Internet Environments

    Science.gov (United States)

    Matera, Francesco

    2014-05-01

    This article reports an overview of the evolution of the optical network scenario, taking into account the exponential growth of connected devices, big data, and cloud computing that is driving a concrete transformation impacting the information and communication technology world. This hyper-connected scenario is deeply affecting relationships between individuals, enterprises, citizens, and public administrations, fostering innovative use cases in practically any environment and market, and introducing new opportunities and new challenges. The successful realization of this hyper-connected scenario depends on different elements of the ecosystem. In particular, it builds on connectivity and functionalities allowed by converged next-generation networks and their capacity to support and integrate with the Internet of Things, machine-to-machine communication, and cloud computing. This article aims at providing some hints of this scenario in order to contribute to the analysis of its impacts on optical system and network issues and requirements. In particular, the role of the software-defined network is investigated by taking into account all scenarios regarding data centers, cloud computing, and machine-to-machine communication, and by trying to illustrate all the advantages that could be introduced by advanced optical communications.

  11. The future of levies in a digital environment: final report

    NARCIS (Netherlands)

    Hugenholtz, P.B.; Guibault, L.; van Geffen, S.

    2003-01-01

    Copyright levy systems have been premised on the assumption that private copying of protected works cannot be controlled and exploited individually. With the advent of digital rights management (DRM), this assumption must be re-examined. In the digital environment, technical protection measures and

  12. Mantle Convection on Modern Supercomputers

    Science.gov (United States)

    Weismüller, J.; Gmeiner, B.; Huber, M.; John, L.; Mohr, M.; Rüde, U.; Wohlmuth, B.; Bunge, H. P.

    2015-12-01

    Mantle convection is the cause for plate tectonics, the formation of mountains and oceans, and the main driving mechanism behind earthquakes. The convection process is modeled by a system of partial differential equations describing the conservation of mass, momentum and energy. Characteristic to mantle flow is the vast disparity of length scales from global to microscopic, turning mantle convection simulations into a challenging application for high-performance computing. As system size and technical complexity of the simulations continue to increase, design and implementation of simulation models for next generation large-scale architectures is handled successfully only in an interdisciplinary context. A new priority program - named SPPEXA - by the German Research Foundation (DFG) addresses this issue, and brings together computer scientists, mathematicians and application scientists around grand challenges in HPC. Here we report from the TERRA-NEO project, which is part of the high visibility SPPEXA program, and a joint effort of four research groups. TERRA-NEO develops algorithms for future HPC infrastructures, focusing on high computational efficiency and resilience in next generation mantle convection models. We present software that can resolve the Earth's mantle with up to 10^12 grid points and scales efficiently to massively parallel hardware with more than 50,000 processors. We use our simulations to explore the dynamic regime of mantle convection and assess the impact of small scale processes on global mantle flow.
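    As a toy illustration of the energy (temperature) part of the governing equations, the sketch below takes explicit finite-difference diffusion steps on a small 2-D grid. This is a drastic simplification of a solver such as TERRA-NEO's, which also solves the coupled Stokes equations and targets on the order of 10^12 grid points; grid size, diffusivity and boundary values here are arbitrary.

    # Toy sketch: explicit finite-difference steps of the temperature (energy)
    # equation dT/dt = kappa * laplacian(T) on a small 2-D grid with a hot bottom
    # and cold top boundary. A real mantle-convection solver also couples this to
    # the Stokes (momentum/mass) equations; none of that is attempted here.
    import numpy as np

    nx, ny = 64, 64
    dx = 1.0 / (nx - 1)
    kappa = 1.0e-6                                    # thermal diffusivity (illustrative)
    dt = 0.2 * dx * dx / kappa                        # respect the explicit stability limit

    T = np.zeros((nx, ny))
    T[:, 0] = 1.0                                     # hot bottom boundary
    T[:, -1] = 0.0                                    # cold top boundary

    def step(T):
        lap = np.zeros_like(T)
        lap[1:-1, 1:-1] = (T[2:, 1:-1] + T[:-2, 1:-1] + T[1:-1, 2:] + T[1:-1, :-2]
                           - 4.0 * T[1:-1, 1:-1]) / dx**2
        T_new = T + dt * kappa * lap
        T_new[:, 0], T_new[:, -1] = 1.0, 0.0          # re-impose Dirichlet boundaries
        return T_new

    for _ in range(100):
        T = step(T)
    print("mean interior temperature:", T[1:-1, 1:-1].mean())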

  13. Language Learning in Virtual Reality Environments: Past, Present, and Future

    Science.gov (United States)

    Lin, Tsun-Ju; Lan, Yu-Ju

    2015-01-01

    This study investigated the research trends in language learning in a virtual reality environment by conducting a content analysis of findings published in the literature from 2004 to 2013 in four top ranked computer-assisted language learning journals: "Language Learning & Technology," "CALICO Journal," "Computer…

  14. The Future Role of Librarians in the Virtual Library Environment.

    Science.gov (United States)

    Burke, Liz

    2002-01-01

    Considers the role of librarians in a virtual library environment. Highlights include providing intellectual access to information in any format; evaluating available sources of information; organizing information; ensuring the preservation of information; providing specialized staff to help meet information needs; and the economic impact of…

  15. Navy Telemedicine: Current Research and Future Directions

    National Research Council Canada - National Science Library

    Reed, Cheryl

    2002-01-01

    .... This report reviews military and civilian models for evaluating telemedicine systems in order to determine future directions for Navy telemedicine research within the current funding environment...

  16. Humor techniques: from real world and game environments to smart environments

    NARCIS (Netherlands)

    Nijholt, Antinus; Streitz, Norbert; Markopoulos, Panos

    In this paper we explore how future smart environments can be given a sense of humor. Humor requires smartness. Entering witty remarks in a conversation requires understanding of the conversation, the conversational partner, the context and the history of the conversation. We can try to model

  17. Superconductor Digital Electronics: -- Current Status, Future Prospects

    Science.gov (United States)

    Mukhanov, Oleg

    2011-03-01

    Two major applications of superconductor electronics, communications and supercomputing, will be presented. These areas hold a significant promise of a large impact on the electronics state of the art for the defense and commercial markets, stemming from the fundamental advantages of superconductivity: simultaneous high speed and low power, lossless interconnect, natural quantization, and high sensitivity. The availability of relatively small cryocoolers lowered the foremost market barrier for cryogenically-cooled superconductor electronic systems. These fundamental advantages enabled a novel Digital-RF architecture - a disruptive technological approach changing wireless communications, radar, and surveillance system architectures dramatically. Practical results were achieved for Digital-RF systems in which wide-band, multi-band radio frequency signals are directly digitized and the digital domain is expanded throughout the entire system. Digital-RF systems combine digital and mixed-signal integrated circuits based on Rapid Single Flux Quantum (RSFQ) technology, superconductor analog filter circuits, and semiconductor post-processing circuits. The demonstrated cryocooled Digital-RF systems are the world's first and fastest directly digitizing receivers operating with live satellite signals, enabling multi-net data links, and performing signal acquisition from HF to L-band with 30 GHz clock frequencies. In supercomputing, superconductivity leads to the highest energy efficiencies per operation. Superconductor technology based on manipulation and ballistic transfer of magnetic flux quanta provides a superior low-power alternative to CMOS and other charge-transfer-based device technologies. The fundamental energy consumption in SFQ circuits is defined by the flux quantum energy of about 2 x 10^-19 J. Recently, a novel energy-efficient zero-static-power SFQ technology, eSFQ/ERSFQ, was invented, which retains all advantages of standard RSFQ circuits: high speed, dc power, internal memory. The

  18. Innovation and future in Westinghouse

    International Nuclear Information System (INIS)

    Congedo, T.; Dulloo, A.; Goosen, J.; Llovet, R.

    2007-01-01

    For the past six years, Westinghouse has used a Road Map process to direct technology development in a way that integrates the efforts of our businesses to address the needs of our customers and respond to significant drivers in the evolving business environment. As the nuclear industry experiences a resurgence, it is ever more necessary that we increase our planning horizon to 10-15 years in the future so as to meet the expectations of our customers. In the Future Point process, driven by the methods of Design for Six Sigma (DFSS), Westinghouse considers multiple possible future scenarios to plan long-term evolutionary and revolutionary development that can reliably create the major products and services of the future market. The products and services of the future stretch the imagination beyond what we provide today. However, the journey to these stretch targets prompts key development milestones that will help deliver ideas useful for nearer-term products. (Author) 1 refs

  19. Change in ocean subsurface environment to suppress tropical cyclone intensification under global warming

    Science.gov (United States)

    Huang, Ping; Lin, I. -I; Chou, Chia; Huang, Rong-Hui

    2015-01-01

    Tropical cyclones (TCs) are hazardous natural disasters. Because TC intensification is significantly controlled by atmosphere and ocean environments, changes in these environments may cause changes in TC intensity. Changes in surface and subsurface ocean conditions can both influence a TC's intensification. With regard to global warming, however, the subsurface ocean has so far received minimal exploration. Here we investigate future subsurface ocean environment changes projected by 22 state-of-the-art climate models and suggest a suppressive effect of subsurface oceans on the intensification of future TCs. Under global warming, the subsurface vertical temperature profile can be sharpened in important TC regions, which may contribute to a stronger ocean coupling (cooling) effect during the intensification of future TCs. For a TC, future subsurface ocean environments may therefore be more suppressive than the existing subsurface ocean environments. This suppressive effect is not spatially uniform and may be weak in certain local areas. PMID:25982028

  20. Preservation of Built Environments

    DEFF Research Database (Denmark)

    Pilegaard, Marie Kirstine

    When built environments and recently also cultural environments are to be preserved, the historic and architectural values are identified as the key motivations. In Denmark the SAVE system is used as a tool to identify architectural values, but in recent years it has been criticized for having...... architectural value in preservation work as a matter of maintaining the buildings - as keeping them "alive" and allowing this to continue in the future. The predominantly aesthetic preservation approach will stop the buildings' life process, which is the same as - "letting them die". Finnebyen in Aarhus...... is an example of a residential area, where the planning authority currently has presented a preservational district plan, following guidelines from the SAVE method. The purpose is to protect the area's architectural values in the future. The predominantly aesthetic approach is here used coupled to the concept

  1. Identification of future environmental challenges in Pakistan by 2025 ...

    African Journals Online (AJOL)

    Technology foresight on the environment sector was carried out under the supervision of the Pakistan Technology Board on the theme “Environment 2025: Our future, our choices”. Social, technological, environmental, economical, political and values (STEEPV) is an internationally recognized tool for brainstorming used in ...

  2. Traces of the Gods: Ancient Astronauts as a Vision of Our Future

    OpenAIRE

    Richter, Jonas

    2012-01-01

    Ancient astronaut speculation (also called paleo-SETI), often labeled pseudoscience or modern myth, still awaits in-depth research. Focusing on Erich von Däniken and reconstructing his views on god and cosmology from scattered statements throughout his books, this article analyzes his attitudes toward science and religion as well as his concepts of god and creation. In this regard, his pantheistic combination of the big bang theory with a model of god as supercomputer is of special interest. ...

  3. Security Guards for the Future Web

    National Research Council Canada - National Science Library

    Reed, Nancy; Bryson, Dave; Garriss, James; Gosnell, Steve; Heaton, Brook; Huber, Gary; Jacobs, David; Pulvermacher, Mary; Semy, Salim; Smith, Chad; Standard, John

    2004-01-01

    .... Guard technology needs to keep pace with the evolving Web environment. The authors conjectured that a family of security guard services would be needed to provide the full range of functionality necessary to support the future Web...

  4. The development of ecological environment in China based on the system dynamics method from the society, economy and environment perspective.

    Science.gov (United States)

    Guang, Yang; Ge, Song; Han, Liu

    2016-01-01

    Harmonious development of society, economy and environment is crucial to sustained regional prosperity. However, society, economy and environment are not independent of one another; they mutually promote and mutually constrain each other in a complex, long-enduring overall process. The present study is an attempt to investigate the relationship and interaction of society, economy and environment in China based on data from 2004 to 2013. The principal component analysis (PCA) model was employed to identify the main factors affecting the society, economy and environment subsystems, and the SD (system dynamics) method was used to carry out a dynamic assessment of the future state of sustainability from the society, economy and environment perspective using future indicator values. In the study, sustainable development in China from 2004 to 2013 was divided into three phases based on the competitive values of these three subsystems. According to the results of the PCA model, China is in the third phase, and economic growth is faster than environmental development, while social development has still maintained steady and rapid growth, implying that the next step for sustainable development in China should focus on social development, and especially on environmental development.
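    A minimal sketch of the PCA step described above follows, using scikit-learn on synthetic indicator data. The real study uses Chinese society, economy and environment indicators for 2004-2013; the random matrix and the choice of two components here are placeholders.

    # Hedged sketch of the PCA step: extract leading components from a small
    # synthetic indicator matrix (years x indicators). The random data below are
    # NOT the indicators used in the study.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    years = np.arange(2004, 2014)
    X = rng.normal(size=(len(years), 6))              # placeholder: 6 indicators per year

    X_std = StandardScaler().fit_transform(X)         # indicators live on different scales
    pca = PCA(n_components=2)
    scores = pca.fit_transform(X_std)

    print("explained variance ratio:", pca.explained_variance_ratio_)
    for year, (pc1, pc2) in zip(years, scores):
        print(year, round(pc1, 2), round(pc2, 2))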

  5. High-performance modeling of CO2 sequestration by coupling reservoir simulation and molecular dynamics

    KAUST Repository

    Bao, Kai; Yan, Mi; Lu, Ligang; Allen, Rebecca; Salam, Amgad; Jordan, Kirk E.; Sun, Shuyu

    2013-01-01

    multicomponent compositional flow simulation to handle more complicated physical process in the future. Accuracy and scalability analysis are performed on an IBM BlueGene/P and on an IBM BlueGene/Q, the latest IBM supercomputer. Results show good accuracy of our

  6. Distributed computing environments for future space control systems

    Science.gov (United States)

    Viallefont, Pierre

    1993-01-01

    The aim of this paper is to present the results of a CNES research project on distributed computing systems. The purpose of this research was to study the impact of the use of new computer technologies in the design and development of future space applications. The first part of this study was a state-of-the-art review of distributed computing systems. One of the interesting ideas arising from this review is the concept of a 'virtual computer' allowing the distributed hardware architecture to be hidden from a software application. The 'virtual computer' can improve system performance by adapting the best architecture (addition of computers) to the software application without having to modify its source code. This concept can also decrease the cost and obsolescence of the hardware architecture. In order to verify the feasibility of the 'virtual computer' concept, a prototype representative of a distributed space application is being developed independently of the hardware architecture.
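    A rough Python analogy for the 'virtual computer' idea follows: the application submits work through a single executor interface, and the execution backend can change without touching application code. This is only an analogy, not the CNES prototype described above.

    # Rough analogy for the 'virtual computer' concept: the application sees one
    # interface, and the mapping to hardware (threads here, processes or a
    # distributed pool elsewhere) can be swapped without modifying the application.
    from concurrent.futures import Executor, ThreadPoolExecutor, ProcessPoolExecutor

    def application_kernel(x: int) -> int:
        return x * x                                  # the unchanged "application" code

    def run_application(virtual_computer: Executor, inputs):
        return list(virtual_computer.map(application_kernel, inputs))

    if __name__ == "__main__":
        data = range(8)
        with ThreadPoolExecutor(max_workers=4) as backend:
            print("threads:  ", run_application(backend, data))
        with ProcessPoolExecutor(max_workers=4) as backend:   # different hardware mapping, same app code
            print("processes:", run_application(backend, data))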

  7. PC as physics computer for LHC?

    CERN Document Server

    Jarp, S; Simmins, A; Yaari, R; Jarp, Sverre; Tang, Hong; Simmins, Antony; Yaari, Refael

    1995-01-01

    In the last five years, we have seen RISC workstations take over the computing scene that was once controlled by mainframes and supercomputers. In this paper we will argue that the same phenomenon might happen again. A project, active since March this year in the Physics Data Processing group of CERN's CN division, is described in which ordinary desktop PCs running Windows (NT and 3.11) have been used to create an environment for running large LHC batch jobs (initially the DICE simulation job of Atlas). The problems encountered in porting both the CERN library and the specific Atlas codes are described, together with some encouraging benchmark results when comparing to existing RISC workstations in use by the Atlas collaboration. The issues of establishing the batch environment (batch monitor, staging software, etc.) are also covered. Finally a quick extrapolation of commodity computing power available in the future is touched upon to indicate what kind of cost envelope could be sufficient for the simulation fa...

  8. The transition of GTDS to the Unix workstation environment

    Science.gov (United States)

    Carter, D.; Metzinger, R.; Proulx, R.; Cefola, P.

    1995-01-01

    Future Flight Dynamics systems should take advantage of the possibilities provided by current and future generations of low-cost, high performance workstation computing environments with Graphical User Interface. The port of the existing mainframe Flight Dynamics systems to the workstation environment offers an economic approach for combining the tremendous engineering heritage that has been encapsulated in these systems with the advantages of the new computing environments. This paper will describe the successful transition of the Draper Laboratory R&D version of GTDS (Goddard Trajectory Determination System) from the IBM Mainframe to the Unix workstation environment. The approach will be a mix of historical timeline notes, descriptions of the technical problems overcome, and descriptions of associated SQA (software quality assurance) issues.

  9. The future of the global environment. A model-based analysis supporting UNEP's first global environment outlook

    International Nuclear Information System (INIS)

    Bakkes, J.; Van Woerden, J.; Alcamo, J.; Berk, M.; Bol, P.; Van den Born, G.J.; Ten Brink, B.; Hettelingh, J.P.; Niessen, L.; Langeweg, F.; Swart, R.

    1997-01-01

    Integrated assessments in support of environmental policy have been applied to a number of countries and regions, and to international negotiations. UNEP's first Global Environment Outlook (GEO-1) can be seen as a step towards making the tool of integrated assessment more widely available as a means for focusing action. This technical report documents RIVM's contribution to the GEO-1 report, focusing on the subject 'looking ahead'. It is illustrated that a 'what if' analysis helps to look beyond the delays in environmental and resource processes. This report illustrates that integrated assessment and modelling techniques can be excellent tools for environment and development policy-setting. The methodology, however, will need to be further developed and adapted to the realities and expectations of diverse regions, incorporating alternative policy strategies and development scenarios. This report focuses primarily on the period 1970-2015, because reliable historical data are often only generally available from 1970 onwards and the year 2015 is believed to match the time perspective of decision-makers. The findings of the analysis are reported in terms of six regions, corresponding with the division of the UNEP regional offices. Questions asked are: how will socioeconomic driving forces affect freshwater and land resources, and how will these changes mutually interact, and why are these changes important for society? Chapter 2 deals with the development of the social and economic driving forces. In the Chapters 3 and 4 it is discussed how this pressure influences selected aspects of the environment. Chapter 3 alone addresses the importance of selected elements of the interacting global element cycles for environmental quality, while Chapter 4 addresses land resources, their potential for food production and associated dependence on freshwater resources. The impacts on selected components of natural areas (Chapter 5) and society (Chapter 6) are subsequently addressed

  10. The Future of Deterrent Capability for Medium-Sized Western Powers in the New Environment

    International Nuclear Information System (INIS)

    Quinlan, Michael

    2001-01-01

    What should be the longer-term future for the nuclear-weapons capabilities of France and the United Kingdom? I plan to tackle the subject in concrete terms. My presentation will be divided into three parts, and, though they are distinct rather than separate, they interact extensively. The first and largest part will relate to strategic context and concept: what aims, justifications and limitations should guide the future, or the absence of a future, for our capabilities? The second part, a good deal briefer, will be the practical content and character of the capabilities: what questions for decision will arise, and in what timescale, about the preservation, improvement or adjustment of the present capabilities? And the third part, still more briefly, will concern the political and institutional framework into which their future should or might be fitted. (author)

  11. Environment report 1990 of the Federal Minister for the Environment, Nature Protection and Reactor Safety

    International Nuclear Information System (INIS)

    1990-01-01

    The 'Environment Report 1990' describes the environmental situation in the Federal Republic of Germany; draws a balance of environmental policy measures taken and introduced; and gives information on future fields of action in environmental policy. The 'Environment Report 1990' also deals with the 'Environment Expert Opinion 1987', produced by the board of experts on environmental questions. It contains surveys of the following sectors: protection against hazardous materials, air pollution abatement, water management, waste management, nature protection and preservation of the countryside, soil conservation, noise abatement, radiation protection, and reactor safety. A separate part of the 'Environment Report 1990' deals with the progress made in 'interdisciplinary fields' (general law on the protection of the environment, instruments of environmental policy, environmental information and environmental research, transfrontier environmental policy). (orig./HP)

  12. Global environment and cogeneration

    International Nuclear Information System (INIS)

    Miyahara, Atsushi

    1992-01-01

    Environmental problems on a global scale have been highlighted in addition to local problems, owing to the rapid increase in population, the increase in energy demand and so on. The global environment summit was held in Brazil. Global environmental problems are now problems for all mankind, and their importance seems set to increase towards the 21st century. In such circumstances, cogeneration can reduce carbon dioxide emissions in addition to conserving energy, and it has therefore attracted attention as a countermeasure for the global environment. The background of global environmental problems is explained. As to the effectiveness of cogeneration for the global environment, the suitability of city gas for the environment, energy conservation, and the reduction of carbon dioxide and nitrogen oxide emissions are discussed. As for the state of the spread of cogeneration, as of March 1992 installations totalling 2,250 MW of power generation capacity had been installed in Japan, and it is forecast that cogeneration will increase hereafter. As future cogeneration systems, the city and industry energy center concept, industrial repowering, multiple-house cogeneration and fuel cells are described. (K.I.)

  13. Growing America's Energy Future

    Energy Technology Data Exchange (ETDEWEB)

    None

    2016-06-01

    The emerging U.S. bioenergy industry provides a secure and growing supply of transportation fuels, biopower, and bioproducts produced from a range of abundant, renewable biomass resources. Bioenergy can help ensure a secure, sustainable, and economically sound future by reducing U.S. dependence on foreign oil, developing domestic clean energy sources, and generating domestic green jobs. Bioenergy can also help address growing concerns about climate change by reducing greenhouse gas emissions to create a healthier environment for current and future generations.

  14. IMPROVING THE SCHOOL ENVIRONMENT.

    Science.gov (United States)

    PETERS, JON S.; SCHNEIDER, RAYMOND C.

    GUIDELINES FOR CREATING IMPROVED EDUCATIONAL ENVIRONMENTS ARE PRESENTED WITH SUPPLEMENTARY DRAWINGS, DIAGRAMS, AND PHOTOGRAPHS. POLICY DECISIONS ARE RELATED TO--(1) THE SCHOOL'S RESPONSIBILITY TO THE FUTURE, (2) INDUSTRY'S ROLE IN EDUCATION, AND (3) BUILDING PROGRAM RESPONSIBILITIES. EDUCATIONAL PLANNING IS DISCUSSED IN TERMS OF--(1) ART…

  15. VRML and Collaborative Environments: New Tools for Networked Visualization

    Science.gov (United States)

    Crutcher, R. M.; Plante, R. L.; Rajlich, P.

    We present two new applications that engage the network as a tool for astronomical research and/or education. The first is a VRML server which allows users over the Web to interactively create three-dimensional visualizations of FITS images contained in the NCSA Astronomy Digital Image Library (ADIL). The server's Web interface allows users to select images from the ADIL, fill in processing parameters, and create renderings featuring isosurfaces, slices, contours, and annotations; the often extensive computations are carried out on an NCSA SGI supercomputer server without the user having an individual account on the system. The user can then download the 3D visualizations as VRML files, which may be rotated and manipulated locally on virtually any class of computer. The second application is the ADILBrowser, a part of the NCSA Horizon Image Data Browser Java package. ADILBrowser allows a group of participants to browse images from the ADIL within a collaborative session. The collaborative environment is provided by the NCSA Habanero package which includes text and audio chat tools and a white board. The ADILBrowser is just an example of a collaborative tool that can be built with the Horizon and Habanero packages. The classes provided by these packages can be assembled to create custom collaborative applications that visualize data either from local disk or from anywhere on the network.
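
    The pipeline described above can be approximated locally: read a FITS cube, extract an isosurface, and write it out as a VRML scene. The sketch below is a minimal illustration of that idea, not the NCSA implementation; it assumes the astropy and scikit-image libraries, a hypothetical input file name, and a single isosurface level chosen by the caller.

```python
import numpy as np
from astropy.io import fits                 # reading the FITS cube
from skimage.measure import marching_cubes  # isosurface extraction

def fits_to_vrml(fits_path, level, out_path="isosurface.wrl"):
    """Extract one isosurface from a FITS cube and write it as a VRML 2.0 file.

    Illustrative sketch only: the real ADIL service also produced slices,
    contours and annotations, and ran the computation server-side.
    """
    cube = np.nan_to_num(fits.getdata(fits_path)).astype(float)
    verts, faces, _, _ = marching_cubes(cube, level=level)

    points = ", ".join(f"{x:.2f} {y:.2f} {z:.2f}" for x, y, z in verts)
    indices = ", ".join(f"{a} {b} {c} -1" for a, b, c in faces)

    with open(out_path, "w") as f:
        f.write("#VRML V2.0 utf8\n")
        f.write("Shape {\n")
        f.write("  geometry IndexedFaceSet {\n")
        f.write(f"    coord Coordinate {{ point [ {points} ] }}\n")
        f.write(f"    coordIndex [ {indices} ]\n")
        f.write("  }\n}\n")
    return out_path

# Example with a hypothetical file name and level:
# fits_to_vrml("ngc253_cube.fits", level=0.05)
```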

  16. A Comparison of Three Microkernels

    NARCIS (Netherlands)

    Tanenbaum, A.S.

    The future of supercomputing lies in massively parallel computers. The nodes of these machines will need a different kind of operating system than current computers have. Many researchers in the field believe that microkernels provide the kind of functionality and performance required. In this paper

  17. TRAINING OF FUTURE TEACHER OF INFORMATICS TO WORK IN MODERN INFORMATION AND EDUCATIONAL ENVIRONMENT OF SCHOOL

    Directory of Open Access Journals (Sweden)

    V. Shovkun

    2015-05-01

    The article analyzes the impact of new information and communication technologies on emerging trends for change in the education system. An important factor in responding to these trends and satisfying the educational needs of students at school is the creation of an information and communication environment (ICE). This requires that educational institutions have specialists able to advise management on the choice of hardware and software, and to design, implement and configure programs, maintain teaching aids, and so on. An anonymous survey of Informatics teachers in the Kherson region was conducted; it revealed that in most cases these functions are performed by the Informatics teachers themselves. Only a few schools have dedicated staff or turn to external workers or companies that provide related services. Special importance therefore attaches to preparing future teachers of Informatics to continuously track trends in educational technologies, master new services and applications on their own, find ways to implement them in the educational process of the school, consult colleagues, and conduct explanatory work with parents. The survey also determined the level of equipment and the working conditions of Informatics teachers at school and at home.

  18. The role of nuclear power in meeting future energy demands

    International Nuclear Information System (INIS)

    Fuchs, K.

    1977-01-01

    Future energy demands and possibilities of meeting them are outlined. The current status and future developments of nuclear energetics all over the world and in the CMEA member states are discussed considering reactor safety, fission product releases, and thermal pollution of the environment

  19. Akuna: An Open Source User Environment for Managing Subsurface Simulation Workflows

    Science.gov (United States)

    Freedman, V. L.; Agarwal, D.; Bensema, K.; Finsterle, S.; Gable, C. W.; Keating, E. H.; Krishnan, H.; Lansing, C.; Moeglein, W.; Pau, G. S. H.; Porter, E.; Scheibe, T. D.

    2014-12-01

    The U.S. Department of Energy (DOE) is investing in development of a numerical modeling toolset called ASCEM (Advanced Simulation Capability for Environmental Management) to support modeling analyses at legacy waste sites. ASCEM is an open source and modular computing framework that incorporates new advances and tools for predicting contaminant fate and transport in natural and engineered systems. The ASCEM toolset includes both a Platform with Integrated Toolsets (called Akuna) and a High-Performance Computing multi-process simulator (called Amanzi). The focus of this presentation is on Akuna, an open-source user environment that manages subsurface simulation workflows and associated data and metadata. In this presentation, key elements of Akuna are demonstrated, which include toolsets for model setup, database management, sensitivity analysis, parameter estimation, uncertainty quantification, and visualization of both model setup and simulation results. A key component of the workflow is the automated job launching and monitoring capability, which allows a user to submit and monitor simulation runs on high-performance, parallel computers. Visualization of large outputs can also be performed without moving data back to local resources. These capabilities make high-performance computing accessible to users who might not be familiar with batch queue systems and usage protocols on different supercomputers and clusters.
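
    The record does not show Akuna's job-launching code. As a generic illustration of what automated launching and monitoring against a batch scheduler involves, the sketch below submits a script to a Slurm-managed cluster and polls its state; the script name, polling interval and the choice of Slurm are assumptions, not details of Akuna itself.

```python
import subprocess
import time

def submit_and_monitor(job_script, poll_seconds=30):
    """Submit a batch script with sbatch and poll its state until it finishes.

    Generic Slurm example, not Akuna's implementation.
    """
    # --parsable makes sbatch print just the job id
    job_id = subprocess.run(
        ["sbatch", "--parsable", job_script],
        check=True, capture_output=True, text=True
    ).stdout.strip()
    print(f"Submitted job {job_id}")

    while True:
        # -h: no header; -o %T: print only the job state (PENDING, RUNNING, ...)
        result = subprocess.run(
            ["squeue", "-h", "-j", job_id, "-o", "%T"],
            capture_output=True, text=True
        )
        state = result.stdout.strip()
        if not state:  # job no longer in the queue: it completed or failed
            print(f"Job {job_id} has left the queue")
            break
        print(f"Job {job_id}: {state}")
        time.sleep(poll_seconds)

# Example with a hypothetical batch script:
# submit_and_monitor("amanzi_run.sh")
```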

  20. The ASCI Network for SC '99: A Step on the Path to a 100 Gigabit Per Second Supercomputing Network

    Energy Technology Data Exchange (ETDEWEB)

    PRATT,THOMAS J.; TARMAN,THOMAS D.; MARTINEZ,LUIS M.; MILLER,MARC M.; ADAMS,ROGER L.; CHEN,HELEN Y.; BRANDT,JAMES M.; WYCKOFF,PETER S.

    2000-07-24

    This document highlights the activities of the DISCOM² (Distance Computing and Communication) team at the 1999 Supercomputing conference in Portland, Oregon. This conference is sponsored by the IEEE and ACM. Sandia, Lawrence Livermore and Los Alamos National Laboratories have participated in this conference for eleven years. For the last four years the three laboratories have come together at the conference under the DOE's ASCI (Accelerated Strategic Computing Initiative) rubric. Communication support for the ASCI exhibit is provided by the ASCI DISCOM² project. The DISCOM² communication team uses this forum to demonstrate and focus communication and networking developments within the community. At SC 99, DISCOM built a prototype of the next generation ASCI network, demonstrated remote clustering techniques, demonstrated the capabilities of the emerging terabit router products, demonstrated the latest technologies for delivering visualization data to scientific users, and demonstrated the latest in encryption methods, including IP VPN technologies and ATM encryption research. The authors also coordinated the other production networking activities within the booth and between their demonstration partners on the exhibit floor. This paper documents those accomplishments, discusses the details of their implementation, and describes how these demonstrations support Sandia's overall strategies in ASCI networking.

  1. The fractal feature and price trend in the gold future market at the Shanghai Futures Exchange (SFE)

    Science.gov (United States)

    Wu, Binghui; Duan, Tingting

    2017-05-01

    The price of gold futures is affected by many factors, including the fluctuation of the gold price and changes in the trading environment. Fractal analysis can help investors gain a better understanding of price fluctuations and make reasonable investment decisions in the gold futures market. After analyzing gold futures prices from January 2nd, 2014 to April 12th, 2016 at the Shanghai Futures Exchange (SFE) in China, the conclusion is drawn that the gold futures market shows persistence in each trading day, with all Hurst indexes greater than 0.5. The changing features of the Hurst index indicate that this persistence is first strengthened and then weakened. As a complicated nonlinear system, the gold futures market can be well reflected by an Elman neural network, which is capable of memorizing previous prices and is particularly suited to forecasting time series in comparison with other types of neural networks. After analyzing the price trend in the gold futures market, the results show that the relative error between the actual value of the gold future and the predictive value of the Elman neural network is small. This model, which performs better in data fitting and prediction, can help investors analyze and foresee the price tendency in the gold futures market.
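
    The abstract does not state which estimator was used for the Hurst index; the following is a minimal sketch of one common estimator, rescaled-range (R/S) analysis, in Python. Function names, window sizes and the synthetic price series are illustrative assumptions, not the authors' code or data.

```python
import numpy as np

def hurst_rs(series, min_window=8):
    """Estimate the Hurst exponent of a 1-D series via rescaled-range (R/S) analysis.

    Illustrative sketch only: averages R/S over non-overlapping windows of
    increasing size and fits the slope of log(R/S) against log(window size).
    """
    series = np.asarray(series, dtype=float)
    n = len(series)
    window_sizes = np.unique(np.logspace(
        np.log10(min_window), np.log10(n // 2), num=20).astype(int))

    log_rs, log_w = [], []
    for w in window_sizes:
        rs_values = []
        for start in range(0, n - w + 1, w):
            chunk = series[start:start + w]
            deviations = np.cumsum(chunk - chunk.mean())
            r = deviations.max() - deviations.min()  # range of cumulative deviations
            s = chunk.std(ddof=1)                    # standard deviation of the chunk
            if s > 0:
                rs_values.append(r / s)
        if rs_values:
            log_rs.append(np.log(np.mean(rs_values)))
            log_w.append(np.log(w))

    # Hurst exponent = slope of log(R/S) versus log(window size)
    hurst, _ = np.polyfit(log_w, log_rs, 1)
    return hurst

# Example with synthetic data standing in for daily gold-futures prices
prices = np.cumsum(np.random.normal(size=500)) + 300.0
returns = np.diff(np.log(prices))
print(f"Estimated Hurst exponent: {hurst_rs(returns):.3f}")
```

    A value above 0.5, as reported in the abstract, indicates a persistent (trend-reinforcing) series; a value below 0.5 indicates anti-persistence.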

  2. Micro-computer cards for hard industrial environment

    Energy Technology Data Exchange (ETDEWEB)

    Breton, J M

    1984-03-15

    Approximately 60% of present or future distributed systems have, or will have, operational units installed in harsh environments. In these applications, which include pipeline and industrial motor control, robotics and process control, systems must be easy to deploy in environments not designed for electronics. The development of card systems for this harsh industrial environment, found in the petrochemical industry and in mines, is described. The CMOS technology of the National Semiconductor CIM card system allows real-time microcomputer applications to be efficient and functional in harsh industrial environments.

  3. Contemporary state of spacecraft/environment interaction research

    CERN Document Server

    Novikov, L S

    1999-01-01

    Various space environment effects on spacecraft materials and equipment, and the reverse effects of spacecraft and rockets on the space environment, are considered. The necessity of permanently updating and perfecting our knowledge of spacecraft/environment interaction processes is noted. Requirements imposed on models of the space environment in theoretical and experimental research into various aspects of the spacecraft/environment interaction problem are formulated. The main problems in this field which need to be solved today and in the near future are specified. The conclusion is made that joint analysis of both aspects of the spacecraft/environment interaction problem promotes the most effective solution of the problem.

  4. Smart city – future city? smart city 2.0 as a livable city and future market

    CERN Document Server

    Etezadzadeh, Chirine

    2016-01-01

    The concept of a livable smart city presented in this book highlights the relevance of the functionality and integrated resilience of viable cities of the future. It critically examines the progressive digitalization that is taking place and identifies the revolutionized energy sector as the basis of urban life. The concept is based on people and their natural environment, resulting in a broader definition of sustainability and an expanded product theory. Smart City 2.0 offers its residents many opportunities and is an attractive future market for innovative products and services. However, it presents numerous challenges for stakeholders and product developers.

  5. Against Generationism. A Conceptual Outline of Justice for Future Generations

    Directory of Open Access Journals (Sweden)

    Dejan Savić

    2013-03-01

    Humanity faces a global ecological crisis in the context of climate change, which challenges established forms of political thought and action. The discussion of justice is applied to the future, where we understand time and the natural environment as a common bond between people from different periods. We put today's generation in a relationship with the generations of the near and more distant future. The term »generacism«, describing the current way of thinking as another form of discrimination, allows us to show the inadequacy of our attitudes towards future generations. By destroying the global environment, we create injustice towards future generations on the basis of the time of people's birth. In this context, time is understood as an arbitrary circumstance, which does not suffice as a basis for discriminating between people. We defend a concept of intergenerational justice that gives the state the responsibility for implementing environmental protection measures in order to protect future generations and eliminate generacism from our society and economy. We propose the so-called green state, which bases environmental protection measures on fairness to future generations.

  6. Virtual laboratories : comparability of real and virtual environments for environmental psychology

    NARCIS (Netherlands)

    Kort, de Y.A.W.; IJsselsteijn, W.A.; Kooijman, J.M.A.; Schuurmans, Y.

    2003-01-01

    Virtual environments have the potential to become important new research tools in environment behavior research. They could even become the future (virtual) laboratories, if reactions of people to virtual environments are similar to those in real environments. The present study is an exploration of

  7. The Evolution in Military Affairs: Shaping the Future U.S. Armed Forces

    National Research Council Canada - National Science Library

    Lovelace, Douglas

    1997-01-01

    ... the nation will require in about 20 years. He defines national security interests, describes the future international security environment, identifies derivative future national security objectives and strategic concepts, and discerns...

  8. Building a High Performance Computing Infrastructure for Novosibirsk Scientific Center

    International Nuclear Information System (INIS)

    Adakin, A; Chubarov, D; Nikultsev, V; Belov, S; Kaplin, V; Sukharev, A; Zaytsev, A; Kalyuzhny, V; Kuchin, N; Lomakin, S

    2011-01-01

    Novosibirsk Scientific Center (NSC), also known worldwide as Akademgorodok, is one of the largest Russian scientific centers, hosting Novosibirsk State University (NSU) and more than 35 research organizations of the Siberian Branch of the Russian Academy of Sciences, including the Budker Institute of Nuclear Physics (BINP), the Institute of Computational Technologies (ICT), and the Institute of Computational Mathematics and Mathematical Geophysics (ICM and MG). Since each institute has specific requirements on the architecture of the computing farms involved in its research field, there are currently several computing facilities hosted by NSC institutes, each optimized for a particular set of tasks; the largest of these are the NSU Supercomputer Center, the Siberian Supercomputer Center (ICM and MG), and the Grid Computing Facility of BINP. Recently a dedicated optical network with an initial bandwidth of 10 Gbps connecting these three facilities was built in order to make it possible to share computing resources among the research communities of the participating institutes, thus providing a common platform for building the computing infrastructure for various scientific projects. Unification of the computing infrastructure is achieved by extensive use of virtualization technologies based on the XEN and KVM platforms. The solution implemented was tested thoroughly within the computing environment of the KEDR detector experiment being carried out at BINP, and is foreseen to be applied to the use cases of other HEP experiments in the near future.

  9. Energy, Environment and IMCC

    DEFF Research Database (Denmark)

    Mogensen, Mogens Bjerg

    2012-01-01

    This paper gives a brief description of the important role that the ionic and mixed conducting ceramics (IMCC) type of materials will play in the R&D of energy and environment technologies of the - presumably - near future. IMCC materials based technologies for energy harvesting, conversion and storage as well as for monitoring and protection of our environment are exemplified. The strong impact of the international IMCC research on development of devices based on such materials is illustrated, and some recent trends in the scientific exploration of IMCC are highlighted. Important groups

  10. The future of future-oriented cognition in non-humans: theory and the empirical case of the great apes.

    Science.gov (United States)

    Osvath, Mathias; Martin-Ordas, Gema

    2014-11-05

    One of the most contested areas in the field of animal cognition is non-human future-oriented cognition. We critically examine key underlying assumptions in the debate, which is mainly preoccupied with certain dichotomous positions, the most prevalent being whether or not 'real' future orientation is uniquely human. We argue that future orientation is a theoretical construct threatening to lead research astray. Cognitive operations occur in the present moment and can be influenced only by prior causation and the environment, at the same time that most appear directed towards future outcomes. Regarding the current debate, future orientation becomes a question of where on various continua cognition becomes 'truly' future-oriented. We question both the assumption that episodic cognition is the most important process in future-oriented cognition and the assumption that future-oriented cognition is uniquely human. We review the studies on future-oriented cognition in the great apes to find little doubt that our closest relatives possess such ability. We conclude by urging that future-oriented cognition not be viewed as expression of some select set of skills. Instead, research into future-oriented cognition should be approached more like research into social and physical cognition. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  11. Innovation for creating a smart future

    Directory of Open Access Journals (Sweden)

    Sang M. Lee

    2018-01-01

    Today, we live in a dynamic and turbulent global community. The wave of mega-trends, including rapid change in globalization and technological advances, is creating new market forces. For any organization to survive and prosper in such an environment, innovation is imperative. However, innovation is no longer just for creating value to benefit individuals, organizations, or societies. The ultimate purpose of innovation should be much more far reaching, helping create a smart future where people can enjoy the best quality of life possible. Thus, innovation must search for intelligent solutions to tackle major social ills, seek more proactive approaches to predict the uncertain future, and pursue strategies to remove barriers to the smart future. This study explores the detailed requirements of a smart future, including both hardware types and soft social/cultural components.

  12. Radiation Environment at LEO in the frame of Space Monitoring Data Center at Moscow State University - recent, current and future missions

    Science.gov (United States)

    Myagkova, Irina; Kalegaev, Vladimir; Panasyuk, Mikhail; Svertilov, Sergey; Bogomolov, Vitaly; Bogomolov, Andrey; Barinova, Vera; Barinov, Oleg; Bobrovnikov, Sergey; Dolenko, Sergey; Mukhametdinova, Ludmila; Shiroky, Vladimir; Shugay, Julia

    2016-04-01

    The radiation environment of near-Earth space is one of the most important factors of space weather. The Space Monitoring Data Center of Moscow State University provides operational control of radiation conditions at low Earth orbits (LEO) using data from recent (Vernov, CORONAS series), current (Meteor-M, Electro-L series) and future (Lomonosov) space missions. The Internet portal of the Space Monitoring Data Center of the Skobeltsyn Institute of Nuclear Physics of Lomonosov Moscow State University (SINP MSU), http://swx.sinp.msu.ru/, makes it possible to control and analyze space radiation conditions in real time, together with geomagnetic and solar activity, including the hard X-ray and gamma emission of solar flares. Operational data obtained from space missions at L1, GEO and LEO and from the Earth's magnetic stations are used to represent the radiation and geomagnetic state of the near-Earth environment. Models of the space environment that use space measurements from different orbits were created. Interactive analysis and operational neural network forecast services are based on these models. These systems can automatically generate alerts on particle flux enhancements above threshold values, both for SEP and for relativistic electrons of the outer Earth's radiation belt, using data from GEO and LEO as input. As an example of LEO data we consider data from the Vernov mission, which was launched into a solar-synchronous orbit (altitude 640-830 km, inclination 98.4°, orbital period about 100 min) on July 8, 2014 and began to deliver scientific information on July 20, 2014. The Vernov mission has provided studies of the Earth's radiation belt relativistic electron precipitation and its possible connection with atmospheric transient luminous events, as well as measurements of solar hard X-ray and gamma emission. Radiation and electromagnetic environment monitoring in near-Earth space, which is very important for space weather studies, was also realised.
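
    The automatic alert generation described above reduces to comparing measured or forecast particle fluxes against threshold values. The fragment below is a minimal sketch of that logic; the channel names and threshold numbers are placeholders, not the operational SINP MSU values.

```python
# Placeholder thresholds in particles / (cm^2 s sr); not the operational SINP MSU values.
THRESHOLDS = {
    "SEP_protons_>10MeV": 10.0,
    "relativistic_electrons_>2MeV": 1000.0,
}

def check_alerts(latest_fluxes):
    """Return alert messages for channels whose flux exceeds its threshold.

    `latest_fluxes` maps a channel name to its most recent measured or forecast flux.
    """
    alerts = []
    for channel, threshold in THRESHOLDS.items():
        flux = latest_fluxes.get(channel)
        if flux is not None and flux > threshold:
            alerts.append(f"ALERT: {channel} flux {flux:.1f} exceeds threshold {threshold:.1f}")
    return alerts

# Example with made-up readings
print(check_alerts({"SEP_protons_>10MeV": 42.0, "relativistic_electrons_>2MeV": 350.0}))
```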

  13. High tolerance to temperature and salinity change should enable scleractinian coral Platygyra acuta from marginal environments to persist under future climate change.

    Directory of Open Access Journals (Sweden)

    Apple Pui Yi Chui

    With projected changes in the marine environment under global climate change, the effects of single stressors on corals have been relatively well studied. However, more focus should be placed on the interactive effects of multiple stressors if their impacts upon corals are to be assessed more realistically. Elevation of sea surface temperature is projected under global climate change, and future increases in precipitation extremes related to the monsoon are also expected. Thus, the lowering of salinity could become a more common phenomenon, and its impact on corals could be significant as extreme precipitation usually occurs during the coral spawning season. Here, we investigated the interactive effects of temperature [24, 27 (ambient), 30, 32°C] and salinity [33 psu (ambient), 30, 26, 22, 18, 14 psu] on larval settlement, post-settlement survival and early growth of the dominant coral Platygyra acuta from Hong Kong, a marginal environment for coral growth. The results indicate that elevated temperatures (+3°C and +5°C above ambient) did not have any significant effects on larval settlement success and post-settlement survival for up to 56 days of prolonged exposure. Such thermal tolerance is markedly higher than that reported in the literature for other coral species. Moreover, there was a positive effect of these elevated temperatures in reducing the negative effects of lowered salinity (26 psu) on settlement success. The enhanced settlement success brought about by elevated temperatures, together with the high post-settlement survival recorded for up to 44 and 8 days of exposure at +3°C and +5°C above ambient respectively, resulted in overall positive effects of elevated temperatures on recruitment success. These results suggest that the projected elevation in temperature over the next century should not pose any major problem for the recruitment success of P. acuta. The combined effects of higher temperatures and lowered salinity (26 psu could

  14. Future Vision for Instrumentation, Information, and Control Modernization

    International Nuclear Information System (INIS)

    Thomas, Ken D.

    2012-01-01

    A Future Vision of a transformed nuclear plant operating model based on an integrated digital environment has been developed as part of the Advanced Instrumentation, Information, and Control (II&C) research pathway, under the Light Water Reactor (LWR) Sustainability Program. This is a research and development program sponsored by the U.S. Department of Energy (DOE), performed in close collaboration with the nuclear utility industry, to provide the technical foundations for licensing and managing the long-term, safe and economical operation of current nuclear power plants. II&C has been identified as a potential life-limiting issue for the domestic LWR fleet in addressing the reliability and aging concerns of the legacy systems in service today. The Future Vision is based on a digital architecture that encompasses all aspects of plant operations and support, integrating plant systems, plant work processes, and plant workers in a seamless digital environment to enhance nuclear safety, increase productivity, and improve overall plant performance. Pilot projects are being conducted as the means for industry to gain confidence in these new technologies for use in nuclear plant work activities. The pilot projects introduce new digital technologies into the nuclear plant operating environment at host operating plants to demonstrate and validate them for production usage. In turn, the pilot project technologies serve as the stepping stones to the eventual seamless digital environment as described in the Future Vision. Initial project results confirm that the technologies can provide substantial efficiency and human performance benefits while resolving the reliability and aging concerns of the legacy systems. (author)

  15. Preliminary design of CERN Future Circular Collider tunnel: first evaluation of the radiation environment in critical areas for electronics

    Science.gov (United States)

    Infantino, Angelo; Alía, Rubén García; Besana, Maria Ilaria; Brugger, Markus; Cerutti, Francesco

    2017-09-01

    As part of its post-LHC high energy physics program, CERN is conducting a study for a new proton-proton collider, called the Future Circular Collider (FCC-hh), running at center-of-mass energies of up to 100 TeV in a new 100 km tunnel. The study includes a 90-350 GeV lepton collider (FCC-ee) as well as a lepton-hadron option (FCC-he). In this work, FLUKA Monte Carlo simulation was extensively used to perform a first evaluation of the radiation environment in critical areas for electronics in the FCC-hh tunnel. The model of the tunnel was created based on the original civil engineering studies already performed and further integrated into the existing FLUKA models of the beam line. The radiation levels in critical areas, such as racks for electronics and cables, power converters, service areas, and local tunnel extensions, were evaluated.

  16. Preliminary design of CERN Future Circular Collider tunnel: first evaluation of the radiation environment in critical areas for electronics

    Directory of Open Access Journals (Sweden)

    Infantino Angelo

    2017-01-01

    As part of its post-LHC high energy physics program, CERN is conducting a study for a new proton-proton collider, called the Future Circular Collider (FCC-hh), running at center-of-mass energies of up to 100 TeV in a new 100 km tunnel. The study includes a 90-350 GeV lepton collider (FCC-ee) as well as a lepton-hadron option (FCC-he). In this work, FLUKA Monte Carlo simulation was extensively used to perform a first evaluation of the radiation environment in critical areas for electronics in the FCC-hh tunnel. The model of the tunnel was created based on the original civil engineering studies already performed and further integrated into the existing FLUKA models of the beam line. The radiation levels in critical areas, such as racks for electronics and cables, power converters, service areas, and local tunnel extensions, were evaluated.

  17. Mobile wireless network for the urban environment

    Science.gov (United States)

    Budulas, Peter; Luu, Brian; Gopaul, Richard

    2005-05-01

    As the Army transforms into the Future Force, particular attention must be paid to operations in Complex and Urban Terrain. Our adversaries increasingly draw us into operations in the urban environment and one can presume that this trend will continue in future battles. In order to ensure that the United States Army maintains battlefield dominance, the Army Research Laboratory (ARL) is developing technology to equip our soldiers for the urban operations of the future. Sophisticated soldier borne systems will extend sensing to the individual soldier, and correspondingly, allow the soldier to establish an accurate picture of their surrounding environment utilizing information from local and remote assets. Robotic platforms will be an integral part of the future combat team. These platforms will augment the team with remote sensing modalities, task execution capabilities, and enhanced communication systems. To effectively utilize the products provided by each of these systems, collected data must be exchanged in real time to all affected entities. Therefore, the Army Research Laboratory is also developing the technology that will be required to support high bandwidth mobile communication in urban environments. This technology incorporates robotic systems that will allow connectivity in areas unreachable by traditional systems. This paper will address some of the issues of providing wireless connectivity in complex and urban terrain. It will further discuss approaches developed by the Army Research Laboratory to integrate communications capabilities into soldier and robotic systems and provide seamless connectivity between the elements of a combat team, and higher echelons.

  18. Deciding for Future Selves Reduces Loss Aversion

    Directory of Open Access Journals (Sweden)

    Qiqi Cheng

    2017-09-01

    In this paper, we present an incentivized experiment to investigate the degree of loss aversion when people make decisions for their current selves and future selves under risk. We find that when participants make decisions for their future selves, they are less loss averse compared to when they make decisions for their current selves. This finding is consistent with the interpretation of loss aversion as a bias in decision-making driven by emotions, which are reduced when making decisions for future selves. Our findings endorsed the external validity of previous studies on the impact of emotion on loss aversion in a real world decision-making environment.

  19. Deciding for Future Selves Reduces Loss Aversion.

    Science.gov (United States)

    Cheng, Qiqi; He, Guibing

    2017-01-01

    In this paper, we present an incentivized experiment to investigate the degree of loss aversion when people make decisions for their current selves and future selves under risk. We find that when participants make decisions for their future selves, they are less loss averse compared to when they make decisions for their current selves. This finding is consistent with the interpretation of loss aversion as a bias in decision-making driven by emotions, which are reduced when making decisions for future selves. Our findings endorsed the external validity of previous studies on the impact of emotion on loss aversion in a real world decision-making environment.

  20. NREL Research Earns Two Prestigious R&D 100 Awards | News | NREL

    Science.gov (United States)

    [Fragmentary web-page text; the recoverable content indicates that NREL research earned two R&D 100 Awards, known as the "Oscars" of Innovation, including one for a high-performance supercomputing platform that uses warm water to prevent heat build-up, an initiative associated with NREL's Steve Hammond and Nicolas Dube of HP; Steve Johnston is also named, and the research is described as creating jobs in America and advancing the goal of a clean energy future.]

  1. New generation of docking programs: Supercomputer validation of force fields and quantum-chemical methods for docking.

    Science.gov (United States)

    Sulimov, Alexey V; Kutov, Danil C; Katkova, Ekaterina V; Ilin, Ivan S; Sulimov, Vladimir B

    2017-11-01

    Discovery of new inhibitors of the protein associated with a given disease is the initial and most important stage of the whole process of rational development of new pharmaceutical substances. New inhibitors block the active site of the target protein and the disease is cured. Computer-aided molecular modeling can considerably increase the effectiveness of new inhibitor development. Reliable prediction of target protein inhibition by a small molecule (ligand) is defined by the accuracy of docking programs. Such programs position a ligand in the target protein and estimate the protein-ligand binding energy. The positioning accuracy of modern docking programs is satisfactory. However, the accuracy of binding energy calculations is too low to predict good inhibitors. For effective application of docking programs to new inhibitor development, the accuracy of binding energy calculations should be better than 1 kcal/mol. Reasons for the limited accuracy of modern docking programs are discussed. One of the most important factors limiting this accuracy is the imperfection of protein-ligand energy calculations. Results of supercomputer validation of several force fields and quantum-chemical methods for docking are presented. The validation was performed by quasi-docking as follows. First, the low-energy minima spectra of 16 protein-ligand complexes were found by exhaustive minima search in the MMFF94 force field. Second, the energies of the lowest 8192 minima were recalculated with the CHARMM force field and with the PM6-D3H4X and PM7 quantum-chemical methods for each complex. The analysis of minima energies reveals that the docking positioning accuracies of the PM7 and PM6-D3H4X quantum-chemical methods and the CHARMM force field are close to one another and better than the positioning accuracy of the MMFF94 force field. Copyright © 2017 Elsevier Inc. All rights reserved.
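
    The quasi-docking protocol above amounts to generating a large set of low-energy ligand poses with one energy model, re-scoring them with another, and then checking whether the lowest re-scored pose lies near the experimental one. The sketch below illustrates that re-scoring and positioning check under stated assumptions: the minima, their RMSDs to the crystal pose and the scoring callables are supplied by the caller, and the 2 Å success cutoff is a common convention rather than the authors' exact criterion.

```python
def positioning_success(minima, rescore, rmsd_cutoff=2.0):
    """Re-score pre-generated minima and test docking positioning accuracy.

    minima  : list of dicts like {"pose": ..., "rmsd_to_crystal": float}
              (e.g. the low-energy minima found with MMFF94)
    rescore : callable returning an energy for a pose with another method
              (e.g. a CHARMM, PM6-D3H4X or PM7 single-point calculation)
    Returns True if the lowest re-scored minimum lies within rmsd_cutoff
    of the crystallographic ligand position.
    """
    rescored = [(rescore(m["pose"]), m["rmsd_to_crystal"]) for m in minima]
    best_energy, best_rmsd = min(rescored, key=lambda pair: pair[0])
    return best_rmsd <= rmsd_cutoff

def success_rate(complexes, rescore):
    """Fraction of protein-ligand complexes whose lowest minimum is near-native."""
    hits = sum(positioning_success(minima, rescore) for minima in complexes)
    return hits / len(complexes)
```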

  2. Suitability of Agent Technology for Military Command and Control in the Future Combat System Environment

    Energy Technology Data Exchange (ETDEWEB)

    Potok, TE

    2003-02-13

    The U.S. Army is faced with the challenge of dramatically improving its war fighting capability through advanced technologies. Any new technology must provide significant improvement over existing technologies, yet be reliable enough to provide a fielded system. The focus of this paper is to assess the novelty and maturity of agent technology for use in the Future Combat System (FCS). The FCS concept represents the U.S. Army's "mounted" form of the Objective Force. This concept of vehicles, communications, and weaponry is viewed as a "system of systems" which includes net-centric command and control (C²) capabilities. This networked C² is an important transformation from the historically centralized, or platform-based, C² function since a centralized command architecture may become a decision-making and execution bottleneck, particularly as the pace of war accelerates. A mechanism to ensure an effective network-centric C² capacity (combining intelligence gathering and analysis available at lower levels in the military hierarchy) is needed. Achieving a networked C² capability will require breakthroughs in current software technology. Many have proposed the use of agent technology as a potential solution. Agents are an emerging technology, and it is not yet clear whether it is suitable for addressing the networked C² challenge, particularly in satisfying battlespace scalability, mobility, and security expectations. We have developed a set of software requirements for FCS based on military requirements for this system. We have then evaluated these software requirements against current computer science technology. This analysis provides a set of limitations in the current technology when applied to the FCS challenge. Agent technology is compared against this set of limitations to provide a means of assessing the novelty of agent technology in an FCS environment. From this analysis we

  3. Energy and environment: a challenge for materials

    International Nuclear Information System (INIS)

    Marchand, Ch.; Walle, E.; Hody, St.; Alleau, Th.; Bassat, J.M.; Pourcelly, G.; Aitelli, P.; Crepy, Ch. de; Le Douaron, A.; Moussy, F.; Guibert, A. de; Mogensen, P.C.; Beauvy, M.

    2005-01-01

    The ESIREM (Ecole Superieure d'Ingenieurs de Recherche en Electronique et en Materiaux) organized its yearly colloquium in Dijon on 20 January 2005. The topic was 'energy and environment: a challenge for materials'. Presented here are the summaries of the talks by Mr C. Marchand: how to reconcile growing energy needs, limited hydrocarbon resources and the control of greenhouse gas releases, a major challenge for the 21st century; Mr E. Walle: materials for future nuclear systems; Mr S. Hody: future prospects for energy production, the point of view of Gaz de France; Mr T. Alleau: hydrogen, the energy of the future; Mr J.M. Bassat: the specificities of SOFCs, new materials for implementation at ambient temperature; Mr G. Pourcelly: the PEMFC; Mrs A. Le Douaron and F. Moussy: materials, energy and environment in the automotive industry; Ms A. de Guibert: the key role of materials in lithium-ion accumulators; Mr P.C. Mogensen: photovoltaic materials, the key to solar energy; and Mr M. Beauvy: future reactors, challenges for materials. (O.M.)

  4. Human–environment interactions in urban green spaces — A systematic review of contemporary issues and prospects for future research

    Energy Technology Data Exchange (ETDEWEB)

    Kabisch, Nadja, E-mail: nadja.kabisch@geo.hu-berlin.de [Institute of Geography, Humboldt-University Berlin, Unter den Linden 6, 10099 Berlin (Germany); Department of Urban and Environmental Sociology, Helmholtz Centre for Environmental Research — UFZ, 04318 Leipzig (Germany); Qureshi, Salman [Institute of Geography, Humboldt-University Berlin, Unter den Linden 6, 10099 Berlin (Germany); School of Architecture, Birmingham Institute of Art and Design, Birmingham City University, The Parkside Building, 5 Cardigan Street, Birmingham B4 7BD (United Kingdom); Haase, Dagmar [Institute of Geography, Humboldt-University Berlin, Unter den Linden 6, 10099 Berlin (Germany); Department of Computational Landscape Ecology, Helmholtz Centre for Environmental Research — UFZ, 04318 Leipzig (Germany)

    2015-01-15

    Scientific papers on landscape planning underline the importance of maintaining and developing green spaces because of their multiple environmental and social benefits for city residents. However, a general understanding of contemporary human–environment interaction issues in urban green space is still incomplete and lacks orientation for urban planners. This review examines 219 publications to (1) provide an overview of the current state of research on the relationship between humans and urban green space, (2) group the different research approaches by identifying the main research areas, methods, and target groups, and (3) highlight important future prospects in urban green space research. - Highlights: • Reviewed literature on urban green pins down a dearth of comparative studies. • Case studies in Africa and Russia are marginalized – the Europe and US dominate. • Questionnaires are used as major tool followed by GIS and quantitative approaches. • Developing countries should contribute in building an urban green space agenda. • Interdisciplinary, adaptable and pluralistic approaches can satiate a knowledge gap.

  5. Human–environment interactions in urban green spaces — A systematic review of contemporary issues and prospects for future research

    International Nuclear Information System (INIS)

    Kabisch, Nadja; Qureshi, Salman; Haase, Dagmar

    2015-01-01

    Scientific papers on landscape planning underline the importance of maintaining and developing green spaces because of their multiple environmental and social benefits for city residents. However, a general understanding of contemporary human–environment interaction issues in urban green space is still incomplete and lacks orientation for urban planners. This review examines 219 publications to (1) provide an overview of the current state of research on the relationship between humans and urban green space, (2) group the different research approaches by identifying the main research areas, methods, and target groups, and (3) highlight important future prospects in urban green space research. - Highlights: • Reviewed literature on urban green pins down a dearth of comparative studies. • Case studies in Africa and Russia are marginalized – the Europe and US dominate. • Questionnaires are used as major tool followed by GIS and quantitative approaches. • Developing countries should contribute in building an urban green space agenda. • Interdisciplinary, adaptable and pluralistic approaches can satiate a knowledge gap

  6. Future Leaders' Views on Organizational Culture

    Science.gov (United States)

    Maloney, Krisellen; Antelman, Kristin; Arlitsch, Kenning; Butler, John

    2010-01-01

    Research libraries will continue to be affected by rapid and transformative changes in information technology and the networked environment for the foreseeable future. The pace and direction of these changes will profoundly challenge libraries and their staffs to respond effectively. This paper presents the results of a survey that was designed to…

  7. Future nuclear power generation

    International Nuclear Information System (INIS)

    Mosbah, D.S.; Nasreddine, M.

    2006-01-01

    The book includes an introduction, then discusses the options for securing sources of energy, the nuclear power option, nuclear plants for generating energy including light-water reactors (LWR), heavy-water reactors (HWR), advanced gas-cooled reactors (AGR) and fast breeder reactors (FBR), developments in the manufacture of reactors, fuel, uranium in the world, the current status of nuclear power generation, the economics of nuclear power, nuclear power and the environment, and nuclear power in the Arab world. A conclusion at the end of the book suggests that the increasing demand for energy in the industrialized countries, and in a number of countries enjoying rapid economic growth such as China and India, pushes the world to search for different energy sources to meet the urgent current and anticipated demand in the near and long term, in light of both pessimistic and optimistic outlooks for energy in the future. This means that states should carry out a scientific and objective analysis of the currently available data as a springboard for future plans to secure the energy required to support the economy and ensure welfare.

  8. The Canadian oil sands--a sticky future

    Energy Technology Data Exchange (ETDEWEB)

    Cowtan, S A

    1977-01-01

    The oil sands have been known for 200 yr but only over the last decade have they been recognized as a potential major energy source for Canada. The study looks at the present GCOS plant, and briefly discusses Canada's future energy requirements and how she might fill those requirements from conventional and nonconventional sources, such as the Frontier areas, oil sands mining, oil sands in situ, and heavy oil. The economics and the future of these sources and the environment necessary for their development are analyzed.

  9. The future of energy use

    Energy Technology Data Exchange (ETDEWEB)

    Hill, R.; O' Keefe, P.; Snape, C.

    1994-12-15

    An analysis of the use of different forms of energy and its environmental and social impacts. Giving an overview of the development of different forms of energy provision and patterns of supply and demand, this book shows how end-use analysis applies to energy industries, how the environmental and social costs of energy use have to be introduced into energy planning and accounting, and the crucial role of efficiency. Case studies will include the transport and building sectors of industrial economies, the use of stoves and woodfuel, and agroforestry planning in developing countries. It will then examine the different forms of energy - conventional, nuclear and renewable - concluding by setting out different energy futures and the policy requirements for sustainable futures. (author)

  10. A worldwide perspective on energy, environment and sustainable development

    International Nuclear Information System (INIS)

    Dincer, Ibrahim; Rosen, Marc A.

    1998-01-01

    Problems with energy supply and use are related not only to global warming, but also to such environmental concerns as air pollution, ozone depletion, forest destruction and the emission of radioactive substances. These issues must be taken into consideration simultaneously if humanity is to achieve a bright energy future with minimal environmental impacts. Much evidence exists which suggests that the future will be negatively impacted if humans keep degrading the environment. There is an intimate connection between energy, the environment and sustainable development. A society seeking sustainable development ideally must utilise only energy resources which cause no environmental impact (e.g. which release no emissions to the environment). However, since all energy resources lead to some environmental impact, it is reasonable to suggest that some (not all) of the concerns regarding the limitations imposed on sustainable development by environmental emissions and their negative impacts can in part be overcome through increased energy efficiency. A strong relation clearly exists between energy efficiency and environmental impact since, for the same services or products, less resource utilisation and pollution is normally associated with higher efficiency processes. Anticipated patterns of future energy use and the consequent environmental impact (focusing on acid precipitation, stratospheric ozone depletion and the greenhouse effect) are comprehensively discussed in this paper. Also, some solutions to current environmental issues in terms of energy conservation and renewable energy technologies are identified, and some theoretical and practical limitations on increased energy efficiency are explained. The relations between energy and sustainable development, and between the environment and sustainable development, are described, and an illustrative example is presented. Throughout the paper several issues relating to energy, environment and sustainable development are examined.

  11. Understanding Lustre Internals

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Feiyi [ORNL; Oral, H Sarp [ORNL; Shipman, Galen M [ORNL; Drokin, Oleg [ORNL; Wang, Di [ORNL; Huang, He [ORNL

    2009-04-01

    Lustre was initiated and funded, almost a decade ago, by the U.S. Department of Energy (DoE) Office of Science and National Nuclear Security Administration laboratories to address the need for an open source, highly scalable, high-performance parallel filesystem on then-present and future supercomputing platforms. Throughout the last decade, it was deployed on numerous medium-to-large-scale supercomputing platforms and clusters, and it performed and met the expectations of the Lustre user community. At the time of writing this document, according to the Top500 list, 15 of the top 30 supercomputers in the world use the Lustre filesystem. This report aims to present a streamlined overview of how Lustre works internally in reasonable detail, including the relevant data structures, APIs, protocols and algorithms involved, for the Lustre version 1.6 source code base. More importantly, it tries to explain how various components interconnect with each other and function as a system. Portions of this report are based on discussions with Oak Ridge National Laboratory Lustre Center of Excellence team members, and portions of it are based on our own understanding of how the code works. We, as the author team, bear all responsibility for errors and omissions in this document. We can only hope it helps current and future Lustre users and Lustre code developers as much as it helped us understand the Lustre source code and its internal workings.

  12. Human Computing in the Life Sciences: What does the future hold?

    NARCIS (Netherlands)

    Fikkert, F.W.

    2007-01-01

    In future computing environments you will be surrounded and supported by all kinds of technologies. A characteristic feature is that you can interact with them in a natural way: you can speak to, point at, or even frown at a piece of presented information, and the environment understands your intent.

  13. Population vs. the environment.

    Science.gov (United States)

    1992-03-01

    In anticipation of the UN Conference on Environment and Development scheduled for June in Brazil, the Japan Broadcasting Corporation (NHK) recently televised a hard-hitting documentary focusing on the impact of rapid population growth on resources and the environment. Entitled "Population Explosion and the Looming Crisis: Can Humankind Determine a Better Future?", the documentary aired on January 5, featuring interviews with experts from the population field such as Dr. Nafis Sadik of the UNFPA and Dr. Paul Ehrlich of Stanford University. The program, made with the cooperation of UNFPA and JOICFP, compared the current global demographic and environmental situation with the one expected to exist in 2025, when the world population is expected to reach 10 billion. The documentary depicted a future fraught with food shortages, depleted energy resources, refugees, and a devastated environment. In order to illustrate the effect of population growth in developing countries, the documentary featured reports from countries in Asia and Africa. And to show the heavy burden that industrialized countries place on the global environment, the documentary examined Japan's own pattern of consumption and waste. As the UNFPA's Sadik pointed out, the luxurious lifestyle of developed countries comes at the expense of the developing world. Stressing that everyone in the world should be able to enjoy a reasonable standard of living, Sadik called for "sustainable patterns of development," which can be achieved through the following: improved technology, reduced consumption patterns, and changed lifestyles. A critical element in changing lifestyles includes reducing global fertility to 3.2 children/woman by the year 2000. Otherwise, the world population will not double but triple by the year 2025.

  14. [High energy particle physics]: Progress report covering the five year period from August 1, 1984 to May 31, 1989 with special emphasis for the period of August 1, 1988 to May 31, 1989: Part 1

    International Nuclear Information System (INIS)

    1989-01-01

    In this document the High Energy Physics group reviews its accomplishments and progress during the past five years, with special emphasis for the past year and presents plans for continuing research during the next several years. During the last few years the effort of the experimental group has been divided approximately equally between fixed target physics and preparations for future collider experiments. The main emphasis of the theory group has been in the area of strong and electroweak phenomenology with an emphasis on hard scattering processes. With the recent creation of the Supercomputer Computations Research Institute, some work has also been done in the area of numerical simulations of condensed matter spin models and techniques for implementing numerical simulations on supercomputers

  15. Ethics and answers engineering efficiency for a sustainable future

    Energy Technology Data Exchange (ETDEWEB)

    Hamilton, J.M.

    2000-07-01

    Speech offering some perspectives from the USA, including a few facts on energy usage; the value of ethics in energy conservation; the challenge for ITT to develop leadership and concepts of partnership; and the benefits of doing everything in our power to create a sustainable environment and secure the future for generations to come. It is therefore good business to save energy and protect the environment. (GL)

  16. OCSEGen: Open Components and Systems Environment Generator

    Science.gov (United States)

    Tkachuk, Oksana

    2014-01-01

    To analyze a large system, one often needs to break it into smaller components. To analyze a component or unit under analysis, one needs to model its context of execution, called the environment, which represents the components with which the unit interacts. Environment generation is a challenging problem, because the environment needs to be general enough to uncover unit errors, yet precise enough to make the analysis tractable. In this paper, we present a tool for automated environment generation for open components and systems. The tool, called OCSEGen, is implemented on top of the Soot framework. We present the tool's current support and discuss its possible future extensions.

  17. Update on the Worsening Particle Radiation Environment Observed by CRaTER and Implications for Future Human Deep-Space Exploration

    Science.gov (United States)

    Schwadron, N. A.; Rahmanifard, F.; Wilson, J.; Jordan, A. P.; Spence, H. E.; Joyce, C. J.; Blake, J. B.; Case, A. W.; de Wet, W.; Farrell, W. M.; Kasper, J. C.; Looper, M. D.; Lugaz, N.; Mays, L.; Mazur, J. E.; Niehof, J.; Petro, N.; Smith, C. W.; Townsend, L. W.; Winslow, R.; Zeitlin, C.

    2018-03-01

    Over the last decade, the solar wind has exhibited low densities and magnetic field strengths, representing anomalous states that have never been observed during the space age. As discussed by Schwadron, Blake, et al. (2014, https://doi.org/10.1002/2014SW001084), the cycle 23-24 solar activity led to the longest solar minimum in more than 80 years and continued into the "mini" solar maximum of cycle 24. During this weak activity, we observed galactic cosmic ray fluxes that exceeded those of previous solar minima, while only relatively small solar energetic particle events were observed. Here we provide an update to the Schwadron, Blake, et al. (2014, https://doi.org/10.1002/2014SW001084) observations from the Cosmic Ray Telescope for the Effects of Radiation (CRaTER) on the Lunar Reconnaissance Orbiter. The Schwadron, Blake, et al. (2014, https://doi.org/10.1002/2014SW001084) study examined the evolution of the interplanetary magnetic field and utilized a previously published study by Goelzer et al. (2013, https://doi.org/10.1002/2013JA019404) projecting out the interplanetary magnetic field strength based on the evolution of sunspots as a proxy for the rate that the Sun releases coronal mass ejections. This led to a projection of dose rates from galactic cosmic rays on the lunar surface, which suggested a ˜20% increase of dose rates from one solar minimum to the next and indicated that the radiation environment in space may be a worsening factor important for consideration in future planning of human space exploration. We compare the predictions of Schwadron, Blake, et al. (2014, https://doi.org/10.1002/2014SW001084) with the actual dose rates observed by CRaTER in the last 4 years. The observed dose rates exceed the predictions by ˜10%, showing that the radiation environment is worsening more rapidly than previously estimated. Much of this increase is attributable to relatively low-energy ions, which can be effectively shielded. Despite the continued paucity of solar activity, one of the hardest solar events in

  18. The computational future for climate change research

    International Nuclear Information System (INIS)

    Washington, Warren M

    2005-01-01

    The development of climate models has a long history, starting with the building of atmospheric models and later ocean models. The early researchers were very aware of the goal of building climate models which could integrate our knowledge of the complex physical interactions between atmospheric, land-vegetation, hydrology, ocean, cryospheric processes, and sea ice. The transition from climate models to earth system models is already underway with the coupling of active biogeochemical cycles. Progress is limited by present computer capability, which is needed for increasingly complex and higher-resolution climate model versions. It would be a mistake to make models too complex or too high resolution. Arriving at a 'feasible' and useful model is the challenge for the climate model community. Some of the climate change history, scientific successes, and difficulties encountered with supercomputers will be presented.

  19. A Marketing Approach to Commodity Futures Exchanges : A Case Study of the Dutch Hog Industry

    NARCIS (Netherlands)

    Meulenberg, M.T.G.; Pennings, J.M.E.

    2002-01-01

    This paper proposes a marketing strategic approach to commodity futures exchanges to optimise the (hedging) services offered. First, the environment of commodity futures exchanges is examined. Second, the threats and opportunities of commodity futures exchanges are analysed. Our analysis

  20. Open environments to support systems engineering tool integration: A study using the Portable Common Tool Environment (PCTE)

    Science.gov (United States)

    Eckhardt, Dave E., Jr.; Jipping, Michael J.; Wild, Chris J.; Zeil, Steven J.; Roberts, Cathy C.

    1993-01-01

    A study of computer engineering tool integration using the Portable Common Tool Environment (PCTE) Public Interface Standard is presented. Over a 10-week time frame, three existing software products were encapsulated to work in the Emeraude environment, an implementation of the PCTE version 1.5 standard. The software products used were a computer-aided software engineering (CASE) design tool, a software reuse tool, and a computer architecture design and analysis tool. The tool set was then demonstrated to work in a coordinated design process in the Emeraude environment. The project and the features of PCTE used are described, experience with the use of Emeraude environment over the project time frame is summarized, and several related areas for future research are summarized.

  1. The SINQ data acquisition environment

    Energy Technology Data Exchange (ETDEWEB)

    Maden, D [Paul Scherrer Inst. (PSI), Villigen (Switzerland)

    1996-11-01

    The data acquisition environment for the neutron scattering instruments supported by LNS at SINQ is described. The intention is to provide future users with the necessary background to the computing facilities on site rather than to present a user manual for the on-line system. (author) 5 figs., 6 refs.

  2. The SINQ data acquisition environment

    International Nuclear Information System (INIS)

    Maden, D.

    1996-01-01

    The data acquisition environment for the neutron scattering instruments supported by LNS at SINQ is described. The intention is to provide future users with the necessary background to the computing facilities on site rather than to present a user manual for the on-line system. (author) 5 figs., 6 refs

  3. Energy. Supermaterial for solar cells, membranes against global warming, energy conservation in the greenhouse; Energie. Supermaterial fuer Solarzellen, Membranen gegen die globale Erwaermung, Energiesparen im Treibhaus

    Energy Technology Data Exchange (ETDEWEB)

    Roegener, Wiebke; Frick, Frank; Tillemans, Axel; Stahl-Busse, Brigitte

    2010-07-01

    A kaleidoscope of pictures presents highlights from the research at the Forschungszentrum Juelich - ranging from moving into a new computer era, through the development of a detector for dangerous liquids, to a new method of treatment against tinnitus. The highlights of this brochure are: (a) An interview with the director of the Oak Ridge National Laboratory on the energy mix of the future; (b) Environmentally friendly power generation by means of fuel cells; (c) Transfer of knowledge from fusion experiments to larger plants using a supercomputer; (d) Development of powerful batteries for electrically powered cars by means of the know-how from fuel cell research; (e) Investigation of the contact of used fuel elements with water; (f) Reduction of energy consumption in a greenhouse using a combination of glass and foils; (g) News on energy and environmental research.

  4. Lisbon: Supercomputer for Portugal financed from 'CERN Fund'

    International Nuclear Information System (INIS)

    Anon.

    1990-01-01

    A powerful new computer is now in use at the Portuguese National Foundation for Scientific Computation (FCCN Lisbon), set up in 1987 to help fund university computing, to anticipate future requirements and to provide a fast computer at the National Civil Engineering Laboratory (LNEC) as a central node for remote access by major research institutes

  5. FFTF-cycle 10 program and future plan

    Science.gov (United States)

    Kohyama, Akira

    1988-04-01

    Brief outlines are provided of the FFTF cycle 10 program and future plans under consideration. The primary objective of the Japan-US collaboration program is to enable predictions of material behavior in MFRs to be made from data obtained in other irradiation environments. Major program goals are outlined.

  6. Bio energy: Bio energy in the Energy System of the Future

    International Nuclear Information System (INIS)

    Finden, Per; Soerensen, Heidi; Wilhelmsen, Gunnar

    2001-01-01

    This is Chapter 7, the final chapter, of the book ''Bio energy - Environment, technique and market''. Its main sections are: (1) Factors leading to changes in the energy systems, (2) The energy systems of the future, globally, (3) The future energy system in Norway and (4) Norwegian energy policy at the crossroads

  7. The future of water quality and the regulatory environment for the oil sands and coalbed methane development

    International Nuclear Information System (INIS)

    Kasperski, K.; Mikula, R.

    2004-01-01

    The use of consolidated tailings in recent years for the surface mined oil sands bitumen extraction process has resulted in major improvements in water consumption because materials are transported more efficiently in a slurry form. Water storage requirements will be reduced as the cost of handling tailings in the conventional manner becomes clearer. Future improvements may be in the form of mine face sand rejection, more advanced tailings treatment, or the use of clays for continuous reclamation. Sand filtering or stacking technologies can improve tailings properties and reduce the amount of water needed per unit of bitumen. It was noted that although the technologies will minimize land disturbance and fresh water consumption, water chemistries will be driven to the point where extraction recovery is impaired and water treatment will be required. The volumes and quality of water that is pumped out to produce coalbed methane (CBM) were also discussed with reference to the origin of water in coal beds, water resource depletion, water disposal, direct land applications, and surface evaporation. The Alberta Energy and Utilities Board and Alberta Environment are responsible for regulating CBM water issues in the province, including water disposal from CBM production. 41 refs., 6 tabs., 8 figs

  8. PC as physics computer for LHC?

    International Nuclear Information System (INIS)

    Jarp, Sverre; Simmins, Antony; Tang, Hong

    1996-01-01

    In the last five years, we have seen RISC workstations take over the computing scene that was once controlled by mainframes and supercomputers. In this paper we will argue that the same phenomenon might happen again. A project, active since March this year in the Physics Data Processing group of CERN's CN division, is described where ordinary desktop PCs running Windows (NT and 3.11) have been used for creating an environment for running large LHC batch jobs (initially the DICE simulation job of Atlas). The problems encountered in porting both the CERN library and the specific Atlas codes are described together with some encouraging benchmark results when comparing to existing RISC workstations in use by the Atlas collaboration. The issues of establishing the batch environment (batch monitor, staging software, etc.) are also covered. Finally a quick extrapolation of commodity computing power available in the future is touched upon to indicate what kind of cost envelope could be sufficient for the simulation farms required by the LHC experiments. (author)

  9. Pc as Physics Computer for Lhc ?

    Science.gov (United States)

    Jarp, Sverre; Simmins, Antony; Tang, Hong; Yaari, R.

    In the last five years, we have seen RISC workstations take over the computing scene that was once controlled by mainframes and supercomputers. In this paper we will argue that the same phenomenon might happen again. A project, active since March this year in the Physics Data Processing group, of CERN's CN division is described where ordinary desktop PCs running Windows (NT and 3.11) have been used for creating an environment for running large LHC batch jobs (initially the DICE simulation job of Atlas). The problems encountered in porting both the CERN library and the specific Atlas codes are described together with some encouraging benchmark results when comparing to existing RISC workstations in use by the Atlas collaboration. The issues of establishing the batch environment (Batch monitor, staging software, etc.) are also covered. Finally a quick extrapolation of commodity computing power available in the future is touched upon to indicate what kind of cost envelope could be sufficient for the simulation farms required by the LHC experiments.

  10. End-user programming of ambient narratives for smart retail environments

    NARCIS (Netherlands)

    Doorn, van M.G.L.M.

    2009-01-01

    Ambient Intelligence is a vision of the future of the consumer electronics, telecommunications and computer industry that refers to electronic environments that respond to the presence and activity of people and objects. The goal of these intelligent environments is to support the performance of our

  11. The Centre of High-Performance Scientific Computing, Geoverbund, ABC/J - Geosciences enabled by HPSC

    Science.gov (United States)

    Kollet, Stefan; Görgen, Klaus; Vereecken, Harry; Gasper, Fabian; Hendricks-Franssen, Harrie-Jan; Keune, Jessica; Kulkarni, Ketan; Kurtz, Wolfgang; Sharples, Wendy; Shrestha, Prabhakar; Simmer, Clemens; Sulis, Mauro; Vanderborght, Jan

    2016-04-01

    The Centre of High-Performance Scientific Computing (HPSC TerrSys) was founded in 2011 to establish a centre of competence in high-performance scientific computing in terrestrial systems and the geosciences, enabling fundamental and applied geoscientific research in the Geoverbund ABC/J (geoscientific research alliance of the Universities of Aachen, Cologne, Bonn and the Research Centre Jülich, Germany). The specific goals of HPSC TerrSys are to achieve relevance at the national and international level in (i) the development and application of HPSC technologies in the geoscientific community; (ii) student education; (iii) HPSC services and support also to the wider geoscientific community; and (iv) the industry and public sectors via, e.g., useful applications and data products. A key feature of HPSC TerrSys is the Simulation Laboratory Terrestrial Systems, which is located at the Jülich Supercomputing Centre (JSC) and provides extensive capabilities with respect to porting, profiling, tuning and performance monitoring of geoscientific software in JSC's supercomputing environment. We will present a summary of success stories of HPSC applications including integrated terrestrial model development, parallel profiling and its application from watersheds to the continent; massively parallel data assimilation using physics-based models and ensemble methods; quasi-operational terrestrial water and energy monitoring; and convection permitting climate simulations over Europe. The success stories stress the need for a formalized education of students in the application of HPSC technologies in the future.

  12. A Statistical Evaluation of Atmosphere-Ocean General Circulation Models: Complexity vs. Simplicity

    OpenAIRE

    Robert K. Kaufmann; David I. Stern

    2004-01-01

    The principal tools used to model future climate change are General Circulation Models which are deterministic high resolution bottom-up models of the global atmosphere-ocean system that require large amounts of supercomputer time to generate results. But are these models a cost-effective way of predicting future climate change at the global level? In this paper we use modern econometric techniques to evaluate the statistical adequacy of three general circulation models (GCMs) by testing thre...

  13. THE FISCAL DIMENSION OF THE ENVIRONMENT POLICY

    Directory of Open Access Journals (Sweden)

    Monica SUSANU

    2006-01-01

    Full Text Available Present for the first time on the European order of business at the beginning of the ‘70s, the concern for the environment gained a distinctive nature as the Club of Rome signalled the diminishing of natural resources and the rapid deterioration of the quality of water, air and soil, and of climate in general. Starting in 1972 the community environment policy was created and developed as one of the most important common policies. Although it does not match the funding for the regional or the agricultural policies, the environment policy has become important due to the fact that it has to be approached when conceiving and applying the rest of the community policies. The sustainable development strategy, the way it was adopted and (re)confirmed at the international summits of the last two decades (Rio – 1992, Johannesburg – 2002) and the Kyoto protocol, has become the main element of action of the environment policy measures. The preoccupation for nature precedes and accompanies all actions and orientations of social and economic policies because it is motivated by the care for the primordial heritage of the future generations: the planet’s health. The environment policy reflects the interest of the entire society in nature, and the numerous green movements, environment organizations and political parties that display a successful rise on the political arena express the evolution of mentalities and attitudes as well as the degree of accountability of the governors and the governed towards this vital aspect for the present and the future.

  14. The rise of artificial intelligence and the uncertain future for physicians.

    Science.gov (United States)

    Krittanawong, C

    2018-02-01

    Physicians in everyday clinical practice are under pressure to innovate faster than ever because of the rapid, exponential growth in healthcare data. "Big data" refers to extremely large data sets that cannot be analyzed or interpreted using traditional data processing methods. In fact, big data itself is meaningless, but processing it offers the promise of unlocking novel insights and accelerating breakthroughs in medicine-which in turn has the potential to transform current clinical practice. Physicians can analyze big data, but at present it requires a large amount of time and sophisticated analytic tools such as supercomputers. However, the rise of artificial intelligence (AI) in the era of big data could assist physicians in shortening processing times and improving the quality of patient care in clinical practice. This editorial provides a glimpse at the potential uses of AI technology in clinical practice and considers the possibility of AI replacing physicians, perhaps altogether. Physicians diagnose diseases based on personal medical histories, individual biomarkers, simple scores (e.g., CURB-65, MELD), and their physical examinations of individual patients. In contrast, AI can diagnose diseases based on a complex algorithm using hundreds of biomarkers, imaging results from millions of patients, aggregated published clinical research from PubMed, and thousands of physician's notes from electronic health records (EHRs). While AI could assist physicians in many ways, it is unlikely to replace physicians in the foreseeable future. Let us look at the emerging uses of AI in medicine. Copyright © 2017 European Federation of Internal Medicine. Published by Elsevier B.V. All rights reserved.

  15. Why me? My responsibility for the future

    International Nuclear Information System (INIS)

    Suedfeld, R.

    1997-01-01

    Everybody assumes responsibilities in his or her daily life, for instance in his or her career, family, or in developing the social and political environment. All these kinds of responsibilities have in common the fact that wrong decisions or actions are likely to have direct consequences for the persons concerned. The question must be asked, however, whether there is a kind of responsibility which goes beyond what was mentioned above. Awareness of problems which are only going to affect coming generations requires action to be taken now in order to allow such problems to be addressed. Only the generation which has become aware of the problem can be responsible for taking such action. The resulting responsibility of workers in the nuclear field is not always met properly. The socio-political environment often gives rise to a mentality in which the individual feels that he or she cannot change anything anyway. Taking care of one's daily chores is a way of meeting one's daily responsibilities, but fails to address the problems of the future. Nuclear engineers, however, have the duty to inject their knowledge and know-how into the competition about the right way into the future. This is part of their responsibility for the future. Acting responsibly in this sense means persistently convincing others. This requires unbiased education and credibility. Merely mouthing opinions is bound to result in a loss of credibility. Everybody has many possibilities to help convince others. This paper is meant to make people accept their responsibility for the future. (orig.) [de

  16. Compilation of Abstracts for SC12 Conference Proceedings

    Science.gov (United States)

    Morello, Gina Francine (Compiler)

    2012-01-01

    1 A Breakthrough in Rotorcraft Prediction Accuracy Using Detached Eddy Simulation; 2 Adjoint-Based Design for Complex Aerospace Configurations; 3 Simulating Hypersonic Turbulent Combustion for Future Aircraft; 4 From a Roar to a Whisper: Making Modern Aircraft Quieter; 5 Modeling of Extended Formation Flight on High-Performance Computers; 6 Supersonic Retropropulsion for Mars Entry; 7 Validating Water Spray Simulation Models for the SLS Launch Environment; 8 Simulating Moving Valves for Space Launch System Liquid Engines; 9 Innovative Simulations for Modeling the SLS Solid Rocket Booster Ignition; 10 Solid Rocket Booster Ignition Overpressure Simulations for the Space Launch System; 11 CFD Simulations to Support the Next Generation of Launch Pads; 12 Modeling and Simulation Support for NASA's Next-Generation Space Launch System; 13 Simulating Planetary Entry Environments for Space Exploration Vehicles; 14 NASA Center for Climate Simulation Highlights; 15 Ultrascale Climate Data Visualization and Analysis; 16 NASA Climate Simulations and Observations for the IPCC and Beyond; 17 Next-Generation Climate Data Services: MERRA Analytics; 18 Recent Advances in High-Resolution Global Atmospheric Modeling; 19 Causes and Consequences of Turbulence in the Earth's Protective Shield; 20 NASA Earth Exchange (NEX): A Collaborative Supercomputing Platform; 21 Powering Deep Space Missions: Thermoelectric Properties of Complex Materials; 22 Meeting NASA's High-End Computing Goals Through Innovation; 23 Continuous Enhancements to the Pleiades Supercomputer for Maximum Uptime; 24 Live Demonstrations of 100-Gbps File Transfers Across LANs and WANs; 25 Untangling the Computing Landscape for Climate Simulations; 26 Simulating Galaxies and the Universe; 27 The Mysterious Origin of Stellar Masses; 28 Hot-Plasma Geysers on the Sun; 29 Turbulent Life of Kepler Stars; 30 Modeling Weather on the Sun; 31 Weather on Mars: The Meteorology of Gale Crater; 32 Enhancing Performance of NASA's High

  17. Creating ubiquitous intelligent sensing environments (CRUISE)

    DEFF Research Database (Denmark)

    Prasad, Neeli R.; Prasad, Ramjee

    2006-01-01

    Recent developments in research and technology have brought attention to wireless sensor networks as one of the key enabling technologies of the next 10 years. Ubiquitous Intelligent Sensing Environments have a promising future in supporting the everyday life of European citizens...

  18. Controlling the Growth of Future LEO Debris Populations with Active Debris Removal

    Science.gov (United States)

    Liou, J.-C.; Johnson, N. L.; Hill, N. M.

    2008-01-01

    Active debris removal (ADR) was suggested as a potential means to remediate the low Earth orbit (LEO) debris environment as early as the 1980s. The reasons ADR has not become practical are due to its technical difficulties and the high cost associated with the approach. However, as the LEO debris populations continue to increase, ADR may be the only option to preserve the near-Earth environment for future generations. An initial study was completed in 2007 to demonstrate that a simple ADR target selection criterion could be developed to reduce the future debris population growth. The present paper summarizes a comprehensive study based on more realistic simulation scenarios, including fragments generated from the 2007 Fengyun-1C event, mitigation measures, and other target selection options. The simulations were based on the NASA long-term orbital debris projection model, LEGEND. A scenario, where at the end of mission lifetimes, spacecraft and upper stages were moved to 25-year decay orbits, was adopted as the baseline environment for comparison. Different annual removal rates and different ADR target selection criteria were tested, and the resulting 200-year future environment projections were compared with the baseline scenario. Results of this parametric study indicate that (1) an effective removal strategy can be developed based on the mass and collision probability of each object as the selection criterion, and (2) the LEO environment can be stabilized in the next 200 years with an ADR removal rate of five objects per year.
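    As a concrete illustration of the selection criterion described above, the sketch below ranks a hypothetical debris catalogue by the product of object mass and estimated collision probability and picks the top candidates for a given annual removal rate. The object names, masses and probabilities are invented for illustration; this is not the LEGEND model or its data.

```python
# Minimal sketch of an ADR target-ranking step, assuming the criterion is the
# product of object mass and estimated collision probability (hypothetical data).
from dataclasses import dataclass

@dataclass
class DebrisObject:
    name: str
    mass_kg: float          # dry mass of the intact object
    collision_prob: float   # estimated collision probability over the projection period

def rank_targets(objects, removals_per_year=5):
    """Return the highest-priority removal candidates for one year."""
    ranked = sorted(objects, key=lambda o: o.mass_kg * o.collision_prob, reverse=True)
    return ranked[:removals_per_year]

if __name__ == "__main__":
    catalogue = [
        DebrisObject("Upper stage A", 8900.0, 1.2e-3),
        DebrisObject("Defunct sat B", 2500.0, 3.0e-3),
        DebrisObject("Upper stage C", 1400.0, 8.0e-4),
    ]
    for obj in rank_targets(catalogue, removals_per_year=2):
        print(obj.name, obj.mass_kg * obj.collision_prob)
```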

  19. Automation Rover for Extreme Environments

    Science.gov (United States)

    Sauder, Jonathan; Hilgemann, Evan; Johnson, Michael; Parness, Aaron; Hall, Jeffrey; Kawata, Jessie; Stack, Kathryn

    2017-01-01

    Almost 2,300 years ago the ancient Greeks built the Antikythera automaton. This purely mechanical computer accurately predicted past and future astronomical events long before electronics existed [1]. Automata have been credibly used for hundreds of years as computers, art pieces, and clocks. However, in the past several decades automata have become less popular as the capabilities of electronics increased, leaving them an unexplored solution for robotic spacecraft. The Automaton Rover for Extreme Environments (AREE) proposes an exciting paradigm shift from electronics to a fully mechanical system, enabling longitudinal exploration of the most extreme environments within the solar system.

  20. Performance Analysis of FEM Algorithms on GPU and Many-Core Architectures

    KAUST Repository

    Khurram, Rooh

    2015-04-27

    The roadmaps of the leading supercomputer manufacturers are based on hybrid systems, which consist of a mix of conventional processors and accelerators. This trend is mainly due to the fact that the power consumption cost of future CPU-only Exascale systems will be unsustainable, thus accelerators such as graphics processing units (GPUs) and many-integrated-core (MIC) processors will likely be an integral part of the TOP500 (http://www.top500.org/) supercomputers beyond 2020. The emerging supercomputer architecture will bring new challenges for code developers. Continuum mechanics codes will particularly be affected, because the traditional synchronous implicit solvers will probably not scale on hybrid Exascale machines. In the previous study [1], we reported on the performance of a conjugate gradient based mesh motion algorithm [2] on Sandy Bridge, Xeon Phi, and K20c. In the present study we report on a comparative study of finite element codes, using PETSc and AmgX solvers on CPUs and GPUs, respectively [3,4]. We believe this study will be a good starting point for FEM code developers who are contemplating a CPU-to-accelerator transition.
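    For readers unfamiliar with the kernel referenced above, the sketch below is a minimal, hardware-agnostic conjugate gradient solver in NumPy. It illustrates the algorithm behind the conjugate gradient based mesh-motion solver, not the PETSc or AmgX implementations benchmarked in the study; the test matrix and tolerance are illustrative.

```python
# Minimal conjugate gradient solver for a symmetric positive-definite system,
# illustrating the kernel behind the mesh-motion algorithm discussed above.
# This is a plain NumPy sketch, not the PETSc/AmgX implementations benchmarked
# in the study; the test matrix is illustrative.
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

if __name__ == "__main__":
    # Small SPD test system (diagonally dominant, hence SPD here).
    A = np.array([[4.0, 1.0], [1.0, 3.0]])
    b = np.array([1.0, 2.0])
    print(conjugate_gradient(A, b))   # expected approx. [0.0909, 0.6364]
```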

  1. Sustainability—What Are the Odds? Envisioning the Future of Our Environment, Economy and Society

    Directory of Open Access Journals (Sweden)

    Stephen J. Jordan

    2013-03-01

    Full Text Available This article examines the concept of sustainability from a global perspective, describing how alternative futures might develop in the environmental, economic, and social dimensions. The alternatives to sustainability appear to be (a) a catastrophic failure of life support, economies, and societies, or (b) a radical technological revolution (singularity). The case is made that solutions may be found by developing a global vision of the future, estimating the probabilities of possible outcomes from multiple indicators, and looking holistically for the most likely paths to sustainability. Finally, an intuitive vision of these paths is offered as a starting point for discussion.

  2. Issues and challenges of information fusion in contested environments: panel results

    Science.gov (United States)

    Blasch, Erik; Kadar, Ivan; Chong, Chee; Jones, Eric K.; Tierno, Jorge E.; Fenstermacher, Laurie; Gorman, John D.; Levchuk, Georgiy

    2015-05-01

    With the plethora of information, there are many aspects to contested environments such as the protection of information, network privacy, and restricted observational and entry access. In this paper, we review and contrast the perspectives of challenges and opportunities for future developments in contested environments. The ability to operate in a contested environment would aid societal operations for highly congested areas with limited bandwidth such as transportation, the lack of communication and observations after a natural disaster, or planning for situations in which freedom of movement is restricted. Different perspectives were presented, but common themes included (1) domain: targets and sensors, (2) network: communications, control, and social networks, and (3) user: human interaction and analytics. The paper serves as a summary and organization of the panel discussion, with a view towards future research needs in contested environments.

  3. Design of New Food Technology: Social Shaping of Working Environment

    DEFF Research Database (Denmark)

    Broberg, Ole

    2000-01-01

    A five-year design process of a continuous process wok has been studied with the aim of elucidating the conditions for integrating working environment aspects. The design process is seen as a network building activity and as a social shaping process of the artefact. A working environment log...... is suggested as a tool designers can use to integrate considerations of future operators' working environment....

  4. Technology - environment - future

    International Nuclear Information System (INIS)

    1980-01-01

    This volume contains the materials of the meeting 'Scientific-technical progress and sociological alternatives', organized in March 1980 by the Institute for Marxistic Studies and Research (IMSF). The goal of the meeting was to give a view of the present level of knowledge and discussion among the Federal Republic's Marxists on the direction and the social and ecological consequences of the development of science and technique under the conditions of capitalism. Special attention was paid to the arguments with bourgeois views on the relation between technique and society, as well as to the discussion of alternative sociological concepts. (HSCH) [de

  5. THE ORGANIC AGRICULTURE – A WAY TO PROTECT THE ENVIRONMENT

    Directory of Open Access Journals (Sweden)

    TABITA CORNELIA ADAMOV

    2008-05-01

    Full Text Available The future of agriculture is a problem frequently discussed by specialists. During these debates, organic agriculture has an advantage. Organic agriculture takes into consideration two aspects: the human being and the environment. It is based on the prohibition of using chemicals like pesticides, herbicides or chemical fertilizers. This offers healthy and natural products and also protects the environment. The use of chemicals harms the environment, and they remain in the soil for a long time. The substances used to protect the crops destroy biodiversity, killing insects, not only the harmful ones. The preservation of biodiversity and the quality of the environment is an important objective for the beginning of this millennium, extended by the concern for population health, for food safety assurance and for the improvement of living conditions. The existence of the future human society depends on putting into practice the concept of sustainable economic development.

  6. Changing business environment: implications for farming

    OpenAIRE

    Malcolm, Bill

    2011-01-01

    The natural, technological, economic, political and social environment in which farmers farm constantly changes. History has lessons about change in agriculture and about farmers coping with change, though the future is unknowable and thus always surprising. The implication for farm operation is to prepare, do not predict.

  7. 2030 OUTLOOK FOR UKRAINE: SAFETY FOR THE FUTURE

    Directory of Open Access Journals (Sweden)

    G. Kharlamova

    2017-01-01

    Full Text Available The urgent question has arisen for Ukraine to look into the future, with regard to the views of the future shared by the world and by authoritative international organizations. The possibility of navigating to 2030 in terms of economic, environmental, demographic and investment security is considered in this policy paper. The global security environment that Ukraine is facing is changing rapidly and demands that the government understand and appreciate the challenges that threaten the future and integrity of the nation and the state. Achieving this understanding is only possible through a consistent and structured policy dialogue. Intellectual rigor and a science-based approach should support this dialogue and can facilitate recognition of the organizational factors that outline these challenges and threats. In this policy paper we have tried to combine drivers, futures, and effects.

  8. Computer network environment planning and analysis

    Science.gov (United States)

    Dalphin, John F.

    1989-01-01

    The GSFC Computer Network Environment provides a broadband RF cable between campus buildings and ethernet spines in buildings for the interlinking of Local Area Networks (LANs). This system provides terminal and computer linkage among host and user systems thereby providing E-mail services, file exchange capability, and certain distributed computing opportunities. The Environment is designed to be transparent and supports multiple protocols. Networking at Goddard has a short history and has been under coordinated control of a Network Steering Committee for slightly more than two years; network growth has been rapid with more than 1500 nodes currently addressed and greater expansion expected. A new RF cable system with a different topology is being installed during summer 1989; consideration of a fiber optics system for the future will begin soon. Summer study was directed toward Network Steering Committee operation and planning plus consideration of Center Network Environment analysis and modeling. Biweekly Steering Committee meetings were attended to learn the background of the network and the concerns of those managing it. Suggestions for historical data gathering have been made to support future planning and modeling. Data Systems Dynamic Simulator, a simulation package developed at NASA and maintained at GSFC, was studied as a possible modeling tool for the network environment. A modeling concept based on a hierarchical model was hypothesized for further development. Such a model would allow input of newly updated parameters and would provide an estimation of the behavior of the network.
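    The hierarchical modeling concept mentioned above can be sketched as a simple roll-up of traffic estimates from individual nodes to LAN spines to the campus backbone. The class names and load figures below are hypothetical illustrations of the idea, not measurements of the GSFC environment and not the Data Systems Dynamic Simulator.

```python
# Minimal sketch of a hierarchical network model: traffic estimates roll up
# from nodes to LAN spines to the campus backbone. Names and numbers are
# hypothetical illustrations, not measurements of the GSFC environment.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    name: str
    offered_load_kbps: float

@dataclass
class Lan:
    name: str
    nodes: List[Node] = field(default_factory=list)

    def offered_load(self) -> float:
        return sum(n.offered_load_kbps for n in self.nodes)

@dataclass
class Backbone:
    lans: List[Lan] = field(default_factory=list)

    def offered_load(self) -> float:
        return sum(lan.offered_load() for lan in self.lans)

if __name__ == "__main__":
    backbone = Backbone(lans=[
        Lan("Building 1", [Node("host-a", 300.0), Node("host-b", 150.0)]),
        Lan("Building 23", [Node("host-c", 500.0)]),
    ])
    print(f"Estimated backbone load: {backbone.offered_load():.0f} kbps")
```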

  9. Future High Capacity Backbone Networks

    DEFF Research Database (Denmark)

    Wang, Jiayuan

    are proposed. The work focuses on energy efficient routing algorithms in a dynamic optical core network environment, with Generalized MultiProtocol Label Switching (GMPLS) as the control plane. Energy efficient routing algorithms for energy savings and CO2 savings are proposed, and their performance...... aiming for reducing the dynamic part of the energy consumption of the network may increase the fixed part of the energy consumption meanwhile. In the second half of the thesis, the conflict between energy efficiency and Quality of Service (QoS) is addressed by introducing a novel software defined...... This thesis - Future High Capacity Backbone Networks - deals with the energy efficiency problems associated with the development of future optical networks. In the first half of the thesis, novel approaches for using multiple/single alternative energy sources for improving energy efficiency
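    Energy-efficient routing of the kind discussed in this thesis can be illustrated as a shortest-path computation in which link weights represent energy cost rather than hop count or latency. The sketch below runs Dijkstra's algorithm over a hypothetical topology with invented per-link energy figures; it is not the GMPLS-integrated algorithm proposed in the thesis.

```python
# Sketch of energy-aware routing: Dijkstra's algorithm over a graph whose
# edge weights represent per-connection energy cost (hypothetical values),
# instead of hop count or latency.
import heapq

def energy_aware_path(graph, source, target):
    """graph: {node: [(neighbour, energy_cost), ...]}"""
    dist = {source: 0.0}
    prev = {}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == target:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, cost in graph.get(u, []):
            nd = d + cost
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [], target
    while node != source:        # reconstruct the route back to the source
        path.append(node)
        node = prev[node]
    return [source] + path[::-1], dist[target]

if __name__ == "__main__":
    topology = {                 # joules per transported unit (hypothetical)
        "A": [("B", 2.0), ("C", 5.0)],
        "B": [("C", 1.5), ("D", 4.0)],
        "C": [("D", 1.0)],
        "D": [],
    }
    print(energy_aware_path(topology, "A", "D"))   # (['A', 'B', 'C', 'D'], 4.5)
```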

  10. FORMATION OF THE TEACHER-RESEARCHER ACADEMIC CULTURE IN A DIGITAL CREATIVE ENVIRONMENT

    Directory of Open Access Journals (Sweden)

    Olena M. Semenoh

    2017-12-01

    Full Text Available The article outlines the conceptual foundations of forming the academic culture of future teachers-researchers in a digital creative environment. The academic culture of the researcher is investigated as an integral personal characteristic that is manifested in a culture of creative-critical thinking, academic virtue, and scientific-linguistic and narrative-digital culture. The formation of the academic culture of the future teacher-researcher in a digital creative environment is seen as a complex, multidimensional process of qualitative changes, which happens in stages. The digital creative environment is defined as a learning environment that involves the purposeful use of tools, technologies and information resources that enable creative expression of personality by means of digital technologies, integrating information and communication technologies, intellectual systems, human sensitivity and contextual experience of scientific and pedagogical activity.

  11. Role and Status of Quality Managers in Organisation of the Future

    Directory of Open Access Journals (Sweden)

    Vinko Bogataj

    2017-06-01

    Full Text Available Research question (RQ): What is the discrepancy between the status and role of quality managers in Slovenian organisations now, and what will the role and status of quality managers of the future be? Aim: The aim of this paper is to show the divergence between the current and expected future status and role of quality managers (QM). Methods: Within the research of characteristics of the quality management system (QMS) in Slovenian organisations, a survey among the QM and the directors was conducted, as well as a correlation analysis between the role of the QM and the results achieved by the organisations. Results: It was shown that »the advisor to the management« is the only role of the QM that has a significant positive correlation with the results achieved by the organisation. Organisation: The results of this research enable management to take appropriate steps in organisational development and integration of all projects on organisational changes leading to a common and comprehensive long-term concept. Society/Environment: The research offers some answers to the expected influence of changes in the environment on the future organisation of QMS. Originality: This research represents the first example of research of the status and role of QM in Slovenian organisations. Limitations / further research: This research project is limited to Slovenian organisations with a certified QMS. In future, similar surveys could also be spread to other social environments such as Germany, Austria and the Czech Republic.

  12. Future-Focused Training Exercises with Alternative Coaching Conditions (CD-ROM)

    National Research Council Canada - National Science Library

    Kiser, Robert D; Childs, Jerry M; Leibrecht, Bruce C; Lockaby, Karen J

    2005-01-01

    .... This product presents the results of a research effort to advance the methodology for training companies and platoons, particularly in regard to the provision of coaching, in the future training environment...

  13. Moving Virtual Research Environments from high maintenance Stovepipes to Multi-purpose Sustainable Service-oriented Science Platforms

    Science.gov (United States)

    Klump, Jens; Fraser, Ryan; Wyborn, Lesley; Friedrich, Carsten; Squire, Geoffrey; Barker, Michelle; Moloney, Glenn

    2017-04-01

    The researcher of today is likely to be part of a team distributed over multiple sites that will access data from an external repository and then process the data on a public or private cloud or even on a large centralised supercomputer. They are increasingly likely to use a mixture of their own code, third party software and libraries, or even access global community codes. These components will be connected into Virtual Research Environments (VREs) that will enable members of the research team who are not co-located to actively work together at various scales to share data, models, tools, software, workflows, best practices, infrastructures, etc. Many VREs are built in isolation: designed to meet a specific research program with components tightly coupled and not capable of being repurposed for other use cases - they are becoming 'stovepipes'. The limited number of users of some VREs also means that the cost of maintenance per researcher can be unacceptably high. The alternative is to develop service-oriented Science Platforms that enable multiple communities to develop specialised solutions for specific research programs. The platforms can offer access to data, software tools and processing infrastructures (cloud, supercomputers) through globally distributed, interconnected modules. In Australia, the Virtual Geophysics Laboratory (VGL) was initially built to enable a specific set of researchers in government agencies to access specific data sets and a limited number of tools; it is now rapidly evolving into a multi-purpose Earth science platform with access to an increased variety of data, a broader range of tools, users from more sectors and a diversity of computational infrastructures. The expansion has been relatively easy, because of the architecture whereby data, tools and compute resources are loosely coupled via interfaces that are built on international standards and accessed as services wherever possible. In recent years, investments in
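    The loose coupling described above rests on standards-based service interfaces. The sketch below shows the general pattern of a client requesting data from such a service over HTTP, using an OGC WFS-style GetFeature call; the endpoint URL and layer name are hypothetical placeholders, not the actual VGL interface.

```python
# Sketch of the loose-coupling pattern: a client requests data from a
# standards-based web service (an OGC WFS-style GetFeature call is shown).
# The endpoint URL and layer name are hypothetical placeholders, not the
# actual Virtual Geophysics Laboratory interface.
import urllib.parse
import urllib.request

def fetch_features(endpoint: str, layer: str, max_features: int = 10) -> bytes:
    params = {
        "service": "WFS",
        "version": "2.0.0",
        "request": "GetFeature",
        "typeNames": layer,
        "count": str(max_features),
    }
    url = endpoint + "?" + urllib.parse.urlencode(params)
    with urllib.request.urlopen(url, timeout=30) as response:
        return response.read()

if __name__ == "__main__":
    # Replace the placeholder endpoint with a real WFS service before running.
    data = fetch_features("https://example.org/geoserver/wfs", "geology:boreholes")
    print(len(data), "bytes received")
```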

  14. Scalable geocomputation: evolving an environmental model building platform from single-core to supercomputers

    Science.gov (United States)

    Schmitz, Oliver; de Jong, Kor; Karssenberg, Derek

    2017-04-01

    There is an increasing demand to run environmental models on a big scale: simulations over large areas at high resolution. The heterogeneity of available computing hardware such as multi-core CPUs, GPUs or supercomputers potentially provides significant computing power to fulfil this demand. However, this requires detailed knowledge of the underlying hardware, parallel algorithm design and the implementation thereof in an efficient system programming language. Domain scientists such as hydrologists or ecologists often lack this specific software engineering knowledge; their emphasis is (and should be) on exploratory building and analysis of simulation models. As a result, models constructed by domain specialists mostly do not take full advantage of the available hardware. A promising solution is to separate the model building activity from software engineering by offering domain specialists a model building framework with pre-programmed building blocks that they combine to construct a model. The model building framework, consequently, needs to have built-in capabilities to make full use of the available hardware. Developing such a framework that provides understandable code for domain scientists and is runtime efficient at the same time poses several challenges for the developers of such a framework. For example, optimisations can be performed on individual operations or the whole model, or tasks need to be generated for a well-balanced execution without explicitly knowing the complexity of the domain problem provided by the modeller. Ideally, a modelling framework supports the optimal use of available hardware whichever combination of model building blocks scientists use. We demonstrate our ongoing work on developing parallel algorithms for spatio-temporal modelling and demonstrate 1) PCRaster, an environmental software framework (http://www.pcraster.eu) providing spatio-temporal model building blocks and 2) parallelisation of about 50 of these building blocks using
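    To make the building-block idea concrete, the sketch below composes a toy model from two high-level raster operations, the way a domain scientist would, without writing any parallel code. Plain NumPy stands in for the framework here; this is not the PCRaster API, and the operations and parameters are invented for illustration.

```python
# Illustration of the "model building block" idea: the modeller composes
# high-level raster operations and never writes parallel code; the framework
# is free to execute the blocks on whatever hardware is available.
# Plain NumPy stands in for the framework here -- this is not the PCRaster API.
import numpy as np

def window_average(raster: np.ndarray) -> np.ndarray:
    """Building block: 3x3 neighbourhood mean with edge padding."""
    padded = np.pad(raster, 1, mode="edge")
    return sum(
        padded[i:i + raster.shape[0], j:j + raster.shape[1]]
        for i in range(3) for j in range(3)
    ) / 9.0

def runoff_step(storage: np.ndarray, rain: float, infiltration: float) -> np.ndarray:
    """Building block: simple bucket update for one time step."""
    return np.maximum(storage + rain - infiltration, 0.0)

if __name__ == "__main__":
    storage = np.zeros((100, 100))
    for _ in range(10):                       # ten daily time steps
        storage = runoff_step(storage, rain=5.0, infiltration=3.0)
        storage = window_average(storage)     # crude lateral redistribution
    print("mean storage:", storage.mean())
```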

  15. Nurse burnout and the working environment.

    Science.gov (United States)

    O'Mahony, Nuria

    2011-09-01

    This article examines levels of burnout experienced by emergency nurses and the characteristics of their work environment to determine if there is a relationship between the two. A literature review of recent articles on emergency nurses' burnout and contributing factors was undertaken. A quantitative study, in which nurses were asked to indicate the extent of their agreement with a series of statements on burnout and the working environment, was then undertaken, and the results were analysed to ascertain the extent to which the two topics are related. The results indicate that 52 per cent of nurses in an emergency department in Ireland experience high levels of emotional exhaustion and depersonalisation, which are significantly related to the nature of their work environment. Improvements to the environment and to education are required to reduce the risk of nurses developing burnout in the future.

  16. Learning in the e-environment: new media and learning for the future

    Directory of Open Access Journals (Sweden)

    Milan Matijević

    2015-03-01

    Full Text Available We live in times of rapid change in all areas of science, technology, communication and social life. Every day we are asked to what extent school prepares us for these changes and for life in a new, multimedia environment. Children and adolescents spend less time at school or in other settings of learning than they do outdoors or within other social communities (family, clubs, societies, religious institutions and the like. Experts must constantly inquire about what exactly influences learning and development in our rich media environment. The list of the most important life competences has significantly changed and expanded since the last century. Educational experts are attempting to predict changes in the content and methodology of learning at the beginning of the 21st century. Answers are sought to key questions such as: what should one learn; how should one learn; where should one learn; why should one learn; and how do these answers relate to the new learning environment? In his examination of the way children and young people learn and grow up, the author places special attention on the relationship between personal and non-personal communication (e.g. the internet, mobile phones and different types of e-learning. He deals with today's questions by looking back to some of the more prominent authors and studies of the past fifty years that tackled identical or similar questions (Alvin Toffler, Ivan Illich, George Orwell, and the members of the Club of Rome. The conclusion reached is that in today's world of rapid and continuous change, it is much more crucial than in the last century, both, to be able to learn, and to adapt to learning with the help of new media.

  17. Future of dual-use space awareness technologies

    Science.gov (United States)

    Kislitsyn, Boris V.; Idell, Paul S.; Crawford, Linda L.

    2000-10-01

    The use of all classes of space systems, whether owned by defense, civil, commercial, scientific, allied or foreign organizations, is increasing rapidly. In turn, the surveillance of such systems and activities in space is of interest to all parties. Interest will only increase over time and with new ways to exploit the space environment. However, the current space awareness infrastructure and capabilities are not keeping pace with the demands and advanced technologies being brought online. The use of surveillance technologies, some of which will be discussed in the conference, will provide us with the eventual capability to observe and assess the environment, satellite health and status, and the uses of assets on orbit. This provides a space awareness that is critical to the military operator and to the commercial entrepreneur for their respective successes. Thus the term 'dual-use technologies' has become a reality. For this reason we will briefly examine the background and the current and future technology trends that can lead us to some insights for future products and services.

  18. Wine tourism and sustainable environments

    Directory of Open Access Journals (Sweden)

    M.ª Luisa González San José

    2017-11-01

    Full Text Available Sustainability is a model of development in which the present actions should not compromise the future of future generations, and is linked to economic and social development which must respect the environment. Wine tourism or enotourism is a pleasant mode of tourism that combines the pleasure of wine-tasting with cultural aspects related to the wine culture developing in wine regions over time until the present day. It can be affirmed that wine culture, and its use through wine tourism experiences, is clearly correlated to social (socially equitable), economic (economically feasible), environmental (environmentally sound) and cultural aspects of the sustainability of winegrowing regions and territories.

  19. Aging Well and the Environment: Toward an Integrative Model and Research Agenda for the Future

    Science.gov (United States)

    Wahl, Hans-Werner; Iwarsson, Susanne; Oswald, Frank

    2012-01-01

    Purpose of the Study: The effects of the physical-spatial-technical environment on aging well have been overlooked both conceptually and empirically. In the spirit of M. Powell Lawton's seminal work on aging and environment, this article attempts to rectify this situation by suggesting a new model of how older people interact with their…

  20. The potential natural vegetation of eastern Africa distribution, conservation and future changes

    DEFF Research Database (Denmark)

    van Breugel, Paulo

    and sustainable management of the natural environment. There is therefore an urgent need for information that allow us to assess the current status of the region’s natural environment and to predict how this may change under future climates. This thesis aims to improve our knowledge on natural vegetation...... and how this is likely to change under different climate change scenarios. Chapter 4 presents an environmental gap analysis to prioritize conservation efforts in eastern Africa, based on an evaluation of the environmental representativeness of protected areas and an assessment of the level of threat...... distribution in eastern African, examine how this may change under future climates, and how this can be used to identify conservation priorities in the region. Chapter 1 presents a brief overview of the concept of the potential natural vegetation (PNV), synthesizes the general findings and discusses future...