WorldWideScience

Sample records for power supercomputer users

  1. Proceedings of the first energy research power supercomputer users symposium

    International Nuclear Information System (INIS)

    1991-01-01

The Energy Research Power Supercomputer Users Symposium was arranged to showcase the richness of science that has been pursued and accomplished in this program, through the use of supercomputers and now high performance parallel computers, over the last year; this report is the collection of the presentations given at the Symposium. ''Power users'' were invited by the ER Supercomputer Access Committee to show that the use of these computational tools and the associated data communications network, ESNet, goes beyond merely speeding up computations. Today the work often directly contributes to the advancement of conceptual developments in their fields, and the computational and network resources form the very infrastructure of today's science. The Symposium also provided an opportunity, rare in this day of network access to computing resources, for the invited users to compare and discuss their techniques and approaches with those used in other ER disciplines. The significance of new parallel architectures was highlighted by an interesting evening talk given by Dr. Stephen Orszag of Princeton University.

  2. World's fastest supercomputer opens up to users

    Science.gov (United States)

    Xin, Ling

    2016-08-01

    China's latest supercomputer - Sunway TaihuLight - has claimed the crown as the world's fastest computer according to the latest TOP500 list, released at the International Supercomputer Conference in Frankfurt in late June.

  3. A training program for scientific supercomputing users

    Energy Technology Data Exchange (ETDEWEB)

    Hanson, F.; Moher, T.; Sabelli, N.; Solem, A.

    1988-01-01

There is a need for a mechanism to transfer supercomputing technology into the hands of scientists and engineers in such a way that they will acquire a foundation of knowledge that will permit the integration of supercomputing as a tool in their research. Most computing center training emphasizes computer-specific information about how to use a particular computer system; most academic programs teach concepts to computer scientists. Only a few brief courses and new programs are designed for computational scientists. This paper describes an eleven-week training program aimed principally at graduate and postdoctoral students in computationally intensive fields. The program is designed to balance the specificity of computing center courses, the abstractness of computer science courses, and the personal contact of traditional apprentice approaches. It is based on the experience of computer scientists and computational scientists, and consists of seminars and clinics given by many visiting and local faculty. It covers a variety of supercomputing concepts, issues, and practices related to architecture, operating systems, software design, numerical considerations, code optimization, graphics, communications, and networks. Its research component encourages understanding of scientific computing and supercomputer hardware issues. Flexibility in thinking about computing needs is emphasized by the use of several different supercomputer architectures, such as the Cray X-MP/48 at the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign, the IBM 3090-600E/VF at the Cornell National Supercomputer Facility, and the Alliant FX/8 at the Advanced Computing Research Facility at Argonne National Laboratory. 11 refs., 6 tabs.

  4. Applications of supercomputing and the utility industry: Calculation of power transfer capabilities

    International Nuclear Information System (INIS)

    Jensen, D.D.; Behling, S.R.; Betancourt, R.

    1990-01-01

Numerical models and iterative simulation using supercomputers can furnish cost-effective answers to utility industry problems that are all but intractable on conventional computing equipment. An example of the use of supercomputers by the utility industry is the determination of power transfer capability limits for power transmission systems. This work has the goal of markedly reducing the run time of the transient stability codes used to determine power distributions following major system disturbances. To date, run times of several hours on a conventional computer have been reduced to several minutes on state-of-the-art supercomputers, with further improvements anticipated to reduce run times to less than a minute. In spite of the potential advantages of supercomputers, few utilities have sufficient need for a dedicated in-house supercomputing capability. This problem can be resolved by a supercomputer center serving a geographically distributed user base coupled via high-speed communication networks.

  5. A user-friendly web portal for T-Coffee on supercomputers

    Directory of Open Access Journals (Sweden)

    Koetsier Jos

    2011-05-01

Background: Parallel T-Coffee (PTC) was the first parallel implementation of the T-Coffee multiple sequence alignment tool. It is based on MPI and RMA mechanisms. Its purpose is to reduce the execution time of large-scale sequence alignments. It can be run on distributed memory clusters, allowing users to align data sets consisting of hundreds of proteins within a reasonable time. However, most of the potential users of this tool are not familiar with the use of grids or supercomputers. Results: In this paper we show how PTC can be easily deployed and controlled on a supercomputer architecture using a web portal developed with Rapid. Rapid is a tool for efficiently generating standardized portlets for a wide range of applications, and the approach described here is generic enough to be applied to other applications or to deploy PTC on different HPC environments. Conclusions: The PTC portal allows users to upload a large number of sequences to be aligned by the parallel version of T-Coffee, data sets that cannot be aligned on a single machine due to memory and execution time constraints. The web portal provides a user-friendly solution.

  6. Building more powerful less expensive supercomputers using Processing-In-Memory (PIM) LDRD final report.

    Energy Technology Data Exchange (ETDEWEB)

    Murphy, Richard C.

    2009-09-01

This report details the accomplishments of the 'Building More Powerful Less Expensive Supercomputers Using Processing-In-Memory (PIM)' LDRD ('PIM LDRD', number 105809) for FY07-FY09. Latency dominates all levels of supercomputer design. Within a node, increasing memory latency, relative to processor cycle time, limits CPU performance. Between nodes, the same increase in relative latency impacts scalability. Processing-In-Memory (PIM) is an architecture that directly addresses this problem using enhanced chip fabrication technology and machine organization. PIMs combine high-speed logic and dense, low-latency, high-bandwidth DRAM, and lightweight threads that tolerate latency by performing useful work during memory transactions. This work examines the potential of PIM-based architectures to support mission critical Sandia applications and an emerging class of more data intensive informatics applications. This work has resulted in a stronger architecture/implementation collaboration between 1400 and 1700. Additionally, key technology components have impacted vendor roadmaps, and we are in the process of pursuing these new collaborations. This work has the potential to impact future supercomputer design and construction, reducing power and increasing performance. This final report is organized as follows: this summary chapter discusses the impact of the project (Section 1), provides an enumeration of publications and other public discussion of the work (Section 1), and concludes with a discussion of future work and impact from the project (Section 1). The appendix contains reprints of the refereed publications resulting from this work.

  7. Power User Interface

    Science.gov (United States)

    Pfister, Robin; McMahon, Joe

    2006-01-01

Power User Interface 5.0 (PUI) is a system of middleware written for expert users in the Earth-science community. PUI enables expedited ordering of data granules on the basis of specific granule-identifying information that the users already know or can assemble. PUI also enables expert users to perform quick searches for orderable-granule information for use in preparing orders. PUI 5.0 is available in two versions (note: PUI 6.0 has a command-line mode only): a Web-based application program and a UNIX command-line-mode client program. Both versions include modules that perform data-granule-ordering functions in conjunction with external systems. The Web-based version works with the Earth Observing System Clearing House (ECHO) metadata catalog and order-entry services and with an open-source order-service broker server component, called the Mercury Shopping Cart, that is provided separately by Oak Ridge National Laboratory through the Department of Energy. The command-line version works with the ECHO metadata and order-entry process service. Both versions of PUI ultimately use ECHO to process an order to be sent to a data provider. Ordered data are provided through means outside the PUI software system.

  8. Supercomputational science

    CERN Document Server

    Wilson, S

    1990-01-01

In contemporary research, the supercomputer now ranks, along with radio telescopes, particle accelerators and the other apparatus of "big science", as an expensive resource that is nevertheless essential for state-of-the-art research. Supercomputers are usually provided as shared central facilities. However, unlike telescopes and accelerators, they find a wide range of applications extending across a broad spectrum of research activity. The difference in performance between a "good" and a "bad" computer program on a traditional serial computer may be a factor of two or three, but on a contemporary supercomputer it can easily be a factor of one hundred or even more! Furthermore, this factor is likely to increase with future generations of machines. In keeping with the large capital and recurrent costs of these machines, it is appropriate to devote effort to training and familiarization so that supercomputers are employed to best effect. This volume records the lectures delivered at a Summer School ...

  9. Japanese supercomputer technology

    International Nuclear Information System (INIS)

    Buzbee, B.L.; Ewald, R.H.; Worlton, W.J.

    1982-01-01

In February 1982, computer scientists from the Los Alamos National Laboratory and Lawrence Livermore National Laboratory visited several Japanese computer manufacturers. The purpose of these visits was to assess the state of the art of Japanese supercomputer technology and to advise Japanese computer vendors of the needs of the US Department of Energy (DOE) for more powerful supercomputers. The Japanese foresee a domestic need for large-scale computing capabilities for nuclear fusion, image analysis for the Earth Resources Satellite, meteorological forecasting, electrical power system analysis (power flow, stability, optimization), structural and thermal analysis of satellites, and very-large-scale integrated circuit design and simulation. To meet this need, Japan has launched an ambitious program to advance supercomputer technology. This program is described.

  10. Introduction to Reconfigurable Supercomputing

    CERN Document Server

    Lanzagorta, Marco; Rosenberg, Robert

    2010-01-01

This book covers technologies, applications, tools, languages, procedures, advantages, and disadvantages of reconfigurable supercomputing using Field Programmable Gate Arrays (FPGAs). The target audience is the community of users of High Performance Computers (HPC) who may benefit from porting their applications into a reconfigurable environment. As such, this book is intended to guide the HPC user through the many algorithmic considerations, hardware alternatives, usability issues, programming languages, and design tools that need to be understood before embarking on the creation of reconfigurable applications.

  11. Grassroots Supercomputing

    CERN Multimedia

    Buchanan, Mark

    2005-01-01

What started out as a way for SETI to plow through its piles of radio-signal data from deep space has turned into a powerful research tool as computer users across the globe donate their screen-saver time to projects as diverse as climate-change prediction, gravitational-wave searches, and protein folding (4 pages).

  12. The ETA10 supercomputer system

    International Nuclear Information System (INIS)

    Swanson, C.D.

    1987-01-01

The ETA Systems, Inc. ETA 10 is a next-generation supercomputer featuring multiprocessing, a large hierarchical memory system, high performance input/output, and network support for both batch and interactive processing. Advanced technology used in the ETA 10 includes liquid nitrogen cooled CMOS logic with 20,000 gates per chip, a single printed circuit board for each CPU, and high density static and dynamic MOS memory chips. Software for the ETA 10 includes an underlying kernel that supports multiple user environments, a new ETA FORTRAN compiler with an advanced automatic vectorizer, a multitasking library and debugging tools. Possible developments for future supercomputers from ETA Systems are discussed. (orig.)

  13. Mistral Supercomputer Job History Analysis

    OpenAIRE

    Zasadziński, Michał; Muntés-Mulero, Victor; Solé, Marc; Ludwig, Thomas

    2018-01-01

In this technical report, we show insights and results of operational data analysis from the petascale supercomputer Mistral, ranked as the 42nd most powerful in the world as of January 2018. Data sources include hardware monitoring data, job scheduler history, topology, and hardware information. We explore job state sequences, spatial distribution, and electric power patterns.

  14. Supercomputations and big-data analysis in strong-field ultrafast optical physics: filamentation of high-peak-power ultrashort laser pulses

    Science.gov (United States)

    Voronin, A. A.; Panchenko, V. Ya; Zheltikov, A. M.

    2016-06-01

High-intensity ultrashort laser pulses propagating in gas media or in condensed matter undergo complex nonlinear spatiotemporal evolution where temporal transformations of optical field waveforms are strongly coupled to intricate beam dynamics and ultrafast field-induced ionization processes. At the level of laser peak powers orders of magnitude above the critical power of self-focusing, the beam exhibits modulation instabilities, producing random field hot spots and breaking up into multiple noise-seeded filaments. This problem is described by a (3+1)-dimensional nonlinear field evolution equation, which needs to be solved jointly with the equation for ultrafast ionization of the medium. Analysis of this problem, which is equivalent to solving a billion-dimensional evolution problem, is only possible by means of supercomputer simulations augmented with coordinated big-data processing of large volumes of information acquired through theory-guiding experiments and supercomputations. Here, we review the main challenges of supercomputations and big-data processing encountered in strong-field ultrafast optical physics and discuss strategies to confront these challenges.
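For reference, the critical power of self-focusing mentioned above has a standard closed-form estimate for a Gaussian beam in a Kerr medium (added here for context, not quoted from the record):

$$ P_{\mathrm{cr}} \approx \frac{3.77\,\lambda_0^2}{8\pi n_0 n_2} $$

where $\lambda_0$ is the vacuum wavelength and $n_0$, $n_2$ are the linear and nonlinear refractive indices. Beams whose peak power exceeds $P_{\mathrm{cr}}$ by orders of magnitude break up into the multiple noise-seeded filaments the abstract describes.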

  15. Computational plasma physics and supercomputers

    International Nuclear Information System (INIS)

    Killeen, J.; McNamara, B.

    1984-09-01

    The Supercomputers of the 80's are introduced. They are 10 to 100 times more powerful than today's machines. The range of physics modeling in the fusion program is outlined. New machine architecture will influence particular codes, but parallel processing poses new coding difficulties. Increasing realism in simulations will require better numerics and more elaborate mathematics

  16. What is supercomputing ?

    International Nuclear Information System (INIS)

    Asai, Kiyoshi

    1992-01-01

    Supercomputing means the high speed computation using a supercomputer. Supercomputers and the technical term ''supercomputing'' have spread since ten years ago. The performances of the main computers installed so far in Japan Atomic Energy Research Institute are compared. There are two methods to increase computing speed by using existing circuit elements, parallel processor system and vector processor system. CRAY-1 is the first successful vector computer. Supercomputing technology was first applied to meteorological organizations in foreign countries, and to aviation and atomic energy research institutes in Japan. The supercomputing for atomic energy depends on the trend of technical development in atomic energy, and the contents are divided into the increase of computing speed in existing simulation calculation and the acceleration of the new technical development of atomic energy. The examples of supercomputing in Japan Atomic Energy Research Institute are reported. (K.I.)

  17. Green Power Partnership 100% Green Power Users

    Science.gov (United States)

EPA's Green Power Partnership is a voluntary program designed to reduce the environmental impact of electricity generation by promoting renewable energy. Partners on this list use green power to meet 100% of their U.S. organization-wide electricity use.

  18. KAUST Supercomputing Laboratory

    KAUST Repository

    Bailey, April Renee; Kaushik, Dinesh; Winfer, Andrew

    2011-01-01

KAUST has partnered with IBM to establish a Supercomputing Research Center. KAUST is hosting the Shaheen supercomputer, named after the Arabian falcon famed for its swiftness of flight. This 16-rack IBM Blue Gene/P system is equipped with 4 gigabytes of memory per node and is capable of 222 teraflops, making the KAUST campus the site of one of the world's fastest supercomputers in an academic environment. KAUST is targeting petaflop capability within 3 years.

  19. KAUST Supercomputing Laboratory

    KAUST Repository

    Bailey, April Renee

    2011-11-15

KAUST has partnered with IBM to establish a Supercomputing Research Center. KAUST is hosting the Shaheen supercomputer, named after the Arabian falcon famed for its swiftness of flight. This 16-rack IBM Blue Gene/P system is equipped with 4 gigabytes of memory per node and is capable of 222 teraflops, making the KAUST campus the site of one of the world's fastest supercomputers in an academic environment. KAUST is targeting petaflop capability within 3 years.

  20. ATLAS Software Installation on Supercomputers

    CERN Document Server

    Undrus, Alexander; The ATLAS collaboration

    2018-01-01

PowerPC and high performance computers (HPC) are important resources for computing in the ATLAS experiment. The future LHC data processing will require more resources than Grid computing, currently using approximately 100,000 cores at well over 100 sites, can provide. Supercomputers are extremely powerful, as they use the resources of hundreds of thousands of CPUs joined together. However, their architectures have different instruction sets. ATLAS binary software distributions for x86 chipsets do not fit these architectures, as emulation of these chipsets results in a huge performance loss. This presentation describes the methodology of ATLAS software installation from source code on supercomputers. The installation procedure includes downloading the ATLAS code base as well as the source of about 50 external packages, such as ROOT and Geant4, followed by compilation and rigorous unit and integration testing. The presentation reports the application of this procedure at the Titan HPC and Summit PowerPC systems at Oak Ridge Computin...

  1. Desktop supercomputer: what can it do?

    Science.gov (United States)

    Bogdanov, A.; Degtyarev, A.; Korkhov, V.

    2017-12-01

The paper addresses the issues of solving complex problems that require supercomputers or multiprocessor clusters, resources now available to most researchers. Efficient distribution of high performance computing resources according to actual application needs has been a major research topic since high-performance computing (HPC) technologies became widely introduced. At the same time, comfortable and transparent access to these resources has been a key user requirement. In this paper we discuss approaches to building a virtual private supercomputer available at the user's desktop: a virtual computing environment tailored specifically for a target user with a particular target application. We describe and evaluate possibilities to create the virtual supercomputer based on lightweight virtualization technologies, and analyze the efficiency of our approach compared to traditional methods of HPC resource management.

  2. Desktop supercomputer: what can it do?

    International Nuclear Information System (INIS)

    Bogdanov, A.; Degtyarev, A.; Korkhov, V.

    2017-01-01

The paper addresses the issues of solving complex problems that require supercomputers or multiprocessor clusters, resources now available to most researchers. Efficient distribution of high performance computing resources according to actual application needs has been a major research topic since high-performance computing (HPC) technologies became widely introduced. At the same time, comfortable and transparent access to these resources has been a key user requirement. In this paper we discuss approaches to building a virtual private supercomputer available at the user's desktop: a virtual computing environment tailored specifically for a target user with a particular target application. We describe and evaluate possibilities to create the virtual supercomputer based on lightweight virtualization technologies, and analyze the efficiency of our approach compared to traditional methods of HPC resource management.

  3. NASA Advanced Supercomputing Facility Expansion

    Science.gov (United States)

    Thigpen, William W.

    2017-01-01

    The NASA Advanced Supercomputing (NAS) Division enables advances in high-end computing technologies and in modeling and simulation methods to tackle some of the toughest science and engineering challenges facing NASA today. The name "NAS" has long been associated with leadership and innovation throughout the high-end computing (HEC) community. We play a significant role in shaping HEC standards and paradigms, and provide leadership in the areas of large-scale InfiniBand fabrics, Lustre open-source filesystems, and hyperwall technologies. We provide an integrated high-end computing environment to accelerate NASA missions and make revolutionary advances in science. Pleiades, a petaflop-scale supercomputer, is used by scientists throughout the U.S. to support NASA missions, and is ranked among the most powerful systems in the world. One of our key focus areas is in modeling and simulation to support NASA's real-world engineering applications and make fundamental advances in modeling and simulation methods.

  4. The ETA systems plans for supercomputers

    International Nuclear Information System (INIS)

    Swanson, C.D.

    1987-01-01

The ETA Systems ETA10 is a class VII supercomputer featuring multiprocessing, a large hierarchical memory system, high performance input/output, and network support for both batch and interactive processing. Advanced technology used in the ETA10 includes liquid nitrogen cooled CMOS logic with 20,000 gates per chip, a single printed circuit board for each CPU, and high density static and dynamic MOS memory chips. Software for the ETA10 includes an underlying kernel that supports multiple user environments, a new ETA FORTRAN compiler with an advanced automatic vectorizer, a multitasking library and debugging tools. Possible developments for future supercomputers from ETA Systems are discussed.

  5. Solar thermal electric power information user study

    Energy Technology Data Exchange (ETDEWEB)

    Belew, W.W.; Wood, B.L.; Marle, T.L.; Reinhardt, C.L.

    1981-02-01

    The results of a series of telephone interviews with groups of users of information on solar thermal electric power are described. These results, part of a larger study on many different solar technologies, identify types of information each group needed and the best ways to get information to each group. The report is 1 of 10 discussing study results. The overall study provides baseline data about information needs in the solar community. An earlier study identified the information user groups in the solar community and the priority (to accelerate solar energy commercialization) of getting information to each group. In the current study only high-priority groups were examined. Results from five solar thermal electric power groups of respondents are analyzed: DOE-Funded Researchers, Non-DOE-Funded Researchers, Representatives of Utilities, Electric Power Engineers, and Educators. The data will be used as input to the determination of information products and services the Solar Energy Research Institute, the Solar Energy Information Data Bank Network, and the entire information outreach community should be preparing and disseminating.

  6. Research center Juelich to install Germany's most powerful supercomputer new IBM System for science and research will achieve 5.8 trillion computations per second

    CERN Multimedia

    2002-01-01

    "The Research Center Juelich, Germany, and IBM today announced that they have signed a contract for the delivery and installation of a new IBM supercomputer at the Central Institute for Applied Mathematics" (1/2 page).

  7. Supercomputers to transform Science

    CERN Multimedia

    2006-01-01

    "New insights into the structure of space and time, climate modeling, and the design of novel drugs, are but a few of the many research areas that will be transforned by the installation of three supercomputers at the Unversity of Bristol." (1/2 page)

  8. Supercomputers Of The Future

    Science.gov (United States)

    Peterson, Victor L.; Kim, John; Holst, Terry L.; Deiwert, George S.; Cooper, David M.; Watson, Andrew B.; Bailey, F. Ron

    1992-01-01

Report evaluates supercomputer needs of five key disciplines: turbulence physics, aerodynamics, aerothermodynamics, chemistry, and mathematical modeling of human vision. Predicts these fields will require computer speeds greater than 10^18 floating-point operations per second (FLOPS) and memory capacities greater than 10^15 words. Also, new parallel computer architectures and new structured numerical methods will make the necessary speed and capacity available.

  9. TOP500 Supercomputers for June 2002

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2002-06-20

19th Edition of TOP500 List of World's Fastest Supercomputers Released. MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a much-anticipated event in the world of high-performance computing, the 19th edition of the TOP500 list of the world's fastest supercomputers was released today (June 20, 2002). The recently installed Earth Simulator supercomputer at the Earth Simulator Center in Yokohama, Japan, is, as expected, the clear new number 1. Its performance of 35.86 Tflop/s (trillions of calculations per second) running the Linpack benchmark is almost five times higher than the performance of the now No. 2 IBM ASCI White system at Lawrence Livermore National Laboratory (7.2 Tflop/s). This powerful leapfrogging to the top by a system so much faster than the previous top system is unparalleled in the history of the TOP500.
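The "almost five times" claim follows directly from the two Linpack figures quoted above:

$$ \frac{35.86\ \mathrm{Tflop/s}}{7.2\ \mathrm{Tflop/s}} \approx 4.98 $$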

  10. Status reports of supercomputing astrophysics in Japan

    International Nuclear Information System (INIS)

    Nakamura, Takashi; Nagasawa, Mikio

    1990-01-01

The Workshop on Supercomputing Astrophysics was held at the National Laboratory for High Energy Physics (KEK, Tsukuba) from August 31 to September 2, 1989. More than 40 participants, physicists and astronomers, attended and discussed many topics in an informal atmosphere. The main purpose of this workshop was to focus on the theoretical activities in computational astrophysics in Japan. It also aimed to promote effective collaboration among the numerical experimentalists working on supercomputing techniques. The presented papers covered stimulating subjects spanning hydrodynamics, plasma physics, gravitating systems, radiative transfer and general relativity. In fact, these numerical calculations have now become possible in Japan owing to the power of Japanese supercomputers such as the HITAC S820, Fujitsu VP400E and NEC SX-2. (J.P.N.)

  11. Computational plasma physics and supercomputers. Revision 1

    International Nuclear Information System (INIS)

    Killeen, J.; McNamara, B.

    1985-01-01

    The Supercomputers of the 80's are introduced. They are 10 to 100 times more powerful than today's machines. The range of physics modeling in the fusion program is outlined. New machine architecture will influence particular models, but parallel processing poses new programming difficulties. Increasing realism in simulations will require better numerics and more elaborate mathematical models

  12. Comments on the parallelization efficiency of the Sunway TaihuLight supercomputer

    OpenAIRE

    Végh, János

    2016-01-01

In the world of supercomputers, the large number of processors requires minimizing the inefficiencies of parallelization, which appear as a sequential part of the program from the point of view of Amdahl's law. A recently suggested figure of merit is applied to the recently presented supercomputer, and the timeline of "Top 500" supercomputers is scrutinized using the metric. It is demonstrated that, in addition to the computing performance and power consumption, the new supercomputer i...
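For context (added here, not part of the record), Amdahl's law bounds the speedup of a program whose parallelizable fraction is $p$ when run on $N$ processors:

$$ S(N) = \frac{1}{(1-p) + p/N} \le \frac{1}{1-p} $$

Even a tiny sequential fraction $(1-p)$ caps the achievable speedup regardless of how many processors are applied, which is why the parallelization inefficiencies discussed above matter so much at the scale of Sunway TaihuLight's roughly ten million cores.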

  13. Enabling department-scale supercomputing

    Energy Technology Data Exchange (ETDEWEB)

    Greenberg, D.S.; Hart, W.E.; Phillips, C.A.

    1997-11-01

The Department of Energy (DOE) national laboratories have one of the longest and most consistent histories of supercomputer use. The authors summarize the architecture of DOE's new supercomputers that are being built for the Accelerated Strategic Computing Initiative (ASCI). The authors then argue that in the near future scaled-down versions of these supercomputers with petaflop-per-weekend capabilities could become widely available to hundreds of research and engineering departments. The availability of such computational resources will allow simulation of physical phenomena to become a full-fledged third branch of scientific exploration, along with theory and experimentation. They describe the ASCI and other supercomputer applications at Sandia National Laboratories, and discuss which lessons learned from Sandia's long history of supercomputing can be applied in this new setting.

  14. PubChem Power User Gateway (PUG)

    Data.gov (United States)

    U.S. Department of Health & Human Services — PUG provides access to PubChem services via a programmatic interface. Users may download data, initiate chemical structure searches, standardize chemical structures...

  15. Ultrascalable petaflop parallel supercomputer

    Science.gov (United States)

    Blumrich, Matthias A [Ridgefield, CT; Chen, Dong [Croton On Hudson, NY; Chiu, George [Cross River, NY; Cipolla, Thomas M [Katonah, NY; Coteus, Paul W [Yorktown Heights, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Hall, Shawn [Pleasantville, NY; Haring, Rudolf A [Cortlandt Manor, NY; Heidelberger, Philip [Cortlandt Manor, NY; Kopcsay, Gerard V [Yorktown Heights, NY; Ohmacht, Martin [Yorktown Heights, NY; Salapura, Valentina [Chappaqua, NY; Sugavanam, Krishnan [Mahopac, NY; Takken, Todd [Brewster, NY

    2010-07-20

    A massively parallel supercomputer of petaOPS-scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC) having up to four processing elements. The ASIC nodes are interconnected by multiple independent networks that optimally maximize the throughput of packet communications between nodes with minimal latency. The multiple networks may include three high-speed networks for parallel algorithm message passing including a Torus, collective network, and a Global Asynchronous network that provides global barrier and notification functions. These multiple independent networks may be collaboratively or independently utilized according to the needs or phases of an algorithm for optimizing algorithm processing performance. The use of a DMA engine is provided to facilitate message passing among the nodes without the expenditure of processing resources at the node.
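As a minimal illustration of the wrap-around topology behind the Torus network named above (a generic sketch, not taken from the patent), each node in a d-dimensional torus has exactly 2d nearest neighbors, found by stepping one position along each axis with modular wrap-around:

```python
def torus_neighbors(coord, dims):
    """Nearest neighbors of a node in a multi-dimensional torus network."""
    neighbors = []
    for axis, size in enumerate(dims):
        for step in (-1, 1):
            n = list(coord)
            n[axis] = (n[axis] + step) % size  # modular arithmetic = wrap-around link
            neighbors.append(tuple(n))
    return neighbors

# A node in an 8x8x8 3-D torus has 6 neighbors (2 per dimension);
# edge nodes wrap around instead of falling off the boundary.
print(torus_neighbors((0, 0, 0), (8, 8, 8)))
```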

  16. JINR supercomputer of the module type for event parallel analysis

    International Nuclear Information System (INIS)

    Kolpakov, I.F.; Senner, A.E.; Smirnov, V.A.

    1987-01-01

A model of a supercomputer performing 50 million operations per second is suggested. Its realization would allow one to solve JINR data analysis problems for large spectrometers (in particular, for the DELPHI collaboration). The suggested modular supercomputer is based on commercially available 32-bit microprocessors with a processing rate of about 1 MFLOPS. The processors are combined by means of VME standard buses. A MicroVAX II host computer organizes the operation of the system. Data input and output are realized via the MicroVAX II peripherals. Users' software is based on FORTRAN-77. The supercomputer is connected to a JINR network port, and all JINR users get access to the suggested system.

  17. [Addictology, promoting users' power to act].

    Science.gov (United States)

    Morel, Alain

    2018-01-01

The notion of risk reduction applies to all types of use, including the drinking of alcohol and smoking, and likewise to addictions without drugs. With regard to drugs, mentalities are changing: we now talk more of risks than of fault or deviance. Accordingly, collaboration between health professionals and users, sharing and cooperation are the conditions necessary to develop a modern, humanist and social approach to addictology. Copyright © 2017 Elsevier Masson SAS. All rights reserved.

  18. Supercomputer debugging workshop 1991 proceedings

    Energy Technology Data Exchange (ETDEWEB)

    Brown, J.

    1991-01-01

This report discusses the following topics on supercomputer debugging: distributed debugging; user interface to debugging tools and standards; debugging optimized codes; debugging parallel codes; and debugger performance and interface as analysis tools. (LSP)

  19. Supercomputer debugging workshop 1991 proceedings

    Energy Technology Data Exchange (ETDEWEB)

    Brown, J.

    1991-12-31

This report discusses the following topics on supercomputer debugging: distributed debugging; user interface to debugging tools and standards; debugging optimized codes; debugging parallel codes; and debugger performance and interface as analysis tools. (LSP)

  20. Analyzing the Interplay of Failures and Workload on a Leadership-Class Supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Meneses, Esteban [University of Pittsburgh; Ni, Xiang [University of Illinois at Urbana-Champaign; Jones, Terry R [ORNL; Maxwell, Don E [ORNL

    2015-01-01

The unprecedented computational power of current supercomputers now makes possible the exploration of complex problems in many scientific fields, from genomic analysis to computational fluid dynamics. Modern machines are powerful because they are massive: they assemble millions of cores and a huge quantity of disks, cards, routers, and other components. But it is precisely the size of these machines that clouds the future of supercomputing. A system that comprises many components has a high chance to fail, and to fail often. In order to make the next generation of supercomputers usable, it is imperative to use some type of fault tolerance platform to run applications on large machines. Most fault tolerance strategies can be optimized for the peculiarities of each system and boost efficacy by keeping the system productive. In this paper, we aim to understand how failure characterization can improve resilience in several layers of the software stack: applications, runtime systems, and job schedulers. We examine the Titan supercomputer, one of the fastest systems in the world. We analyze a full year of Titan in production and distill the failure patterns of the machine. By looking into Titan's log files and using the criteria of experts, we provide a detailed description of the types of failures. In addition, we inspect the job submission files and describe how the system is used. Using those two sources, we cross-correlate failures in the machine to executing jobs and provide a picture of how failures affect the user experience. We believe such characterization is fundamental in developing appropriate fault tolerance solutions for Cray systems similar to Titan.
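A minimal sketch of the kind of cross-correlation step the abstract describes, attributing each failure event to whatever jobs were running on the failed node at that moment (the event and job fields below are hypothetical stand-ins, not the paper's actual log schema):

```python
from datetime import datetime

# Hypothetical failure events: (timestamp, failed node)
failures = [(datetime(2014, 3, 1, 12, 30), "c0-0c0s0n1")]

# Hypothetical scheduler history: (job id, start, end, allocated nodes)
jobs = [("job42", datetime(2014, 3, 1, 12, 0),
         datetime(2014, 3, 1, 14, 0), {"c0-0c0s0n1", "c0-0c0s0n2"})]

for when, node in failures:
    # A failure affects a job if it lands inside the job's time window
    # on one of the job's allocated nodes.
    affected = [jid for jid, start, end, nodes in jobs
                if start <= when <= end and node in nodes]
    print(when, node, "->", affected or "no running job affected")
```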

  1. Quantum Hamiltonian Physics with Supercomputers

    International Nuclear Information System (INIS)

    Vary, James P.

    2014-01-01

The vision of solving the nuclear many-body problem in a Hamiltonian framework with fundamental interactions tied to QCD via Chiral Perturbation Theory is gaining support. The goals are to preserve the predictive power of the underlying theory, to test fundamental symmetries with the nucleus as laboratory and to develop new understandings of the full range of complex quantum phenomena. Advances in theoretical frameworks (renormalization and many-body methods) as well as in computational resources (new algorithms and leadership-class parallel computers) signal a new generation of theory and simulations that will yield profound insights into the origins of nuclear shell structure, collective phenomena and complex reaction dynamics. Fundamental discovery opportunities also exist in such areas as physics beyond the Standard Model of Elementary Particles, the transition between hadronic and quark–gluon dominated dynamics in nuclei and signals that characterize dark matter. I will review some recent achievements and present ambitious consensus plans along with their challenges for a coming decade of research that will build new links between theory, simulations and experiment. Opportunities for graduate students to embark upon careers in the fast developing field of supercomputer simulations are also discussed.

  2. Quantum Hamiltonian Physics with Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Vary, James P.

    2014-06-15

The vision of solving the nuclear many-body problem in a Hamiltonian framework with fundamental interactions tied to QCD via Chiral Perturbation Theory is gaining support. The goals are to preserve the predictive power of the underlying theory, to test fundamental symmetries with the nucleus as laboratory and to develop new understandings of the full range of complex quantum phenomena. Advances in theoretical frameworks (renormalization and many-body methods) as well as in computational resources (new algorithms and leadership-class parallel computers) signal a new generation of theory and simulations that will yield profound insights into the origins of nuclear shell structure, collective phenomena and complex reaction dynamics. Fundamental discovery opportunities also exist in such areas as physics beyond the Standard Model of Elementary Particles, the transition between hadronic and quark–gluon dominated dynamics in nuclei and signals that characterize dark matter. I will review some recent achievements and present ambitious consensus plans along with their challenges for a coming decade of research that will build new links between theory, simulations and experiment. Opportunities for graduate students to embark upon careers in the fast developing field of supercomputer simulations are also discussed.

  3. Dust modelling and forecasting in the Barcelona Supercomputing Center: Activities and developments

    Energy Technology Data Exchange (ETDEWEB)

Perez, C; Baldasano, J M; Jimenez-Guerrero, P; Jorba, O; Haustein, K; Basart, S [Earth Sciences Department, Barcelona Supercomputing Center, Barcelona (Spain); Cuevas, E [Izaña Atmospheric Research Center, Agencia Estatal de Meteorología, Tenerife (Spain); Nickovic, S [Atmospheric Research and Environment Branch, World Meteorological Organization, Geneva (Switzerland)], E-mail: carlos.perez@bsc.es

    2009-03-01

    The Barcelona Supercomputing Center (BSC) is the National Supercomputer Facility in Spain, hosting MareNostrum, one of the most powerful Supercomputers in Europe. The Earth Sciences Department of BSC operates daily regional dust and air quality forecasts and conducts intensive modelling research for short-term operational prediction. This contribution summarizes the latest developments and current activities in the field of sand and dust storm modelling and forecasting.

  4. Dust modelling and forecasting in the Barcelona Supercomputing Center: Activities and developments

    International Nuclear Information System (INIS)

    Perez, C; Baldasano, J M; Jimenez-Guerrero, P; Jorba, O; Haustein, K; Basart, S; Cuevas, E; Nickovic, S

    2009-01-01

    The Barcelona Supercomputing Center (BSC) is the National Supercomputer Facility in Spain, hosting MareNostrum, one of the most powerful Supercomputers in Europe. The Earth Sciences Department of BSC operates daily regional dust and air quality forecasts and conducts intensive modelling research for short-term operational prediction. This contribution summarizes the latest developments and current activities in the field of sand and dust storm modelling and forecasting.

  5. User Manual for SSG Power Simulation 2

    DEFF Research Database (Denmark)

    Jensen, Palle Meinert; Gilling, Lasse; Kofoed, Jens Peter

    This manual gives a detailed description of the use of the computer program SSG Power Simulation 2. Furthermore, the underlying mathematics and algorithms are briefly described. The program is based on experimental data from model testing of Seawave Slot-Cone Generator (SSG) presented in Kofoed...

  6. Wheelchair users' perceptions of and experiences with power assist wheels.

    Science.gov (United States)

    Giacobbi, Peter R; Levy, Charles E; Dietrich, Frederick D; Winkler, Sandra Hubbard; Tillman, Mark D; Chow, John W

    2010-03-01

To assess wheelchair users' perceptions of and experiences with power assist wheels using qualitative interview methods. Qualitative evaluations were conducted in a laboratory setting with a focus on users' experiences with power assist wheels in their naturalistic environments. Participants consisted of seven women and 13 men (mean age = 42.75 years, SD = 14.68), including one African American, one Hispanic, 17 whites, and one individual from Zambia. Qualitative interviews were conducted before, during, and after use of a power assist wheel. Main outcome measures included the wheelchair users' evaluations and experiences related to the use of power assist wheels. The primary evaluations concerned wheeling on challenging terrains, performance of novel activities, social/family aspects, fatigue, and pain. These descriptions indicated that most participants had positive experiences with the power assist wheels, including access to new and different activities. Secondary evaluations indicated that the unit was cumbersome and prohibitive for some participants because of difficulties with transport in and out of a vehicle and with battery life. Most participants felt that power assist wheels provided more independence and social opportunities. The power assist wheel seems to offer physical and social benefits for most wheelers. Clinicians should consider users' home environments and overall life circumstances before prescribing.

  7. A workbench for tera-flop supercomputing

    International Nuclear Information System (INIS)

    Resch, M.M.; Kuester, U.; Mueller, M.S.; Lang, U.

    2003-01-01

Supercomputers currently reach a peak performance in the range of TFlop/s. With but one exception - the Japanese Earth Simulator - none of these systems has so far been able to also show a level of sustained performance for a variety of applications that comes close to the peak performance. Sustained TFlop/s are therefore rarely seen. The reasons are manifold and well known: bandwidth and latency, both for main memory and for the internal network, are the key internal technical problems. Cache hierarchies with large caches can bring relief but are no remedy to the problem. However, technical problems are not the only ones inhibiting scientists from fully exploiting the potential of modern supercomputers; more and more organizational issues come to the forefront. This paper shows the approach of the High Performance Computing Center Stuttgart (HLRS) to deliver a sustained performance of TFlop/s for a wide range of applications from a large group of users spread over Germany. The core of the concept is the role of the data. Around this we design a simulation workbench that hides the complexity of interacting computers, networks and file systems from the user. (authors)
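The peak-versus-sustained distinction drawn above is usually quantified as a floating-point efficiency (the framing and the Earth Simulator peak figure are added here for context, not quoted from the record):

$$ \eta = \frac{R_{\mathrm{sustained}}}{R_{\mathrm{peak}}} $$

For example, the Earth Simulator's 35.86 Tflop/s Linpack result against its 40.96 Tflop/s theoretical peak gives $\eta \approx 0.875$, the unusually high fraction of peak that the abstract alludes to.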

  8. Computational Dimensionalities of Global Supercomputing

    Directory of Open Access Journals (Sweden)

    Richard S. Segall

    2013-12-01

This Invited Paper pertains to the subject of my Plenary Keynote Speech at the 17th World Multi-Conference on Systemics, Cybernetics and Informatics (WMSCI 2013), held in Orlando, Florida on July 9-12, 2013. The title of my Plenary Keynote Speech was "Dimensionalities of Computation: from Global Supercomputing to Data, Text and Web Mining", but this Invited Paper will focus only on the "Computational Dimensionalities of Global Supercomputing" and is based upon a summary of the contents of several individual articles that have been previously written with myself as lead author and published in [75], [76], [77], [78], [79], [80] and [11]. The topics of the Plenary Speech included Overview of Current Research in Global Supercomputing [75], Open-Source Software Tools for Data Mining Analysis of Genomic and Spatial Images using High Performance Computing [76], Data Mining Supercomputing with SAS™ JMP® Genomics ([77], [79], [80]), and Visualization by Supercomputing Data Mining [81]. ______________________ [11] Committee on the Future of Supercomputing, National Research Council (2003), The Future of Supercomputing: An Interim Report, ISBN-13: 978-0-309-09016-2, http://www.nap.edu/catalog/10784.html [75] Segall, Richard S.; Zhang, Qingyu and Cook, Jeffrey S. (2013), "Overview of Current Research in Global Supercomputing", Proceedings of Forty-Fourth Meeting of Southwest Decision Sciences Institute (SWDSI), Albuquerque, NM, March 12-16, 2013. [76] Segall, Richard S. and Zhang, Qingyu (2010), "Open-Source Software Tools for Data Mining Analysis of Genomic and Spatial Images using High Performance Computing", Proceedings of 5th INFORMS Workshop on Data Mining and Health Informatics, Austin, TX, November 6, 2010. [77] Segall, Richard S., Zhang, Qingyu and Pierce, Ryan M. (2010), "Data Mining Supercomputing with SAS™ JMP® Genomics: Research-in-Progress", Proceedings of 2010 Conference on Applied Research in Information Technology, sponsored by

  9. Supercomputing and related national projects in Japan

    International Nuclear Information System (INIS)

    Miura, Kenichi

    1985-01-01

Japanese supercomputer development activities in industry and in research projects are outlined. The architecture, technology, software, and applications of Fujitsu's Vector Processor Systems are described as an example of Japanese supercomputers. Applications of supercomputers to high energy physics are also discussed. (orig.)

  10. Supercomputers and quantum field theory

    International Nuclear Information System (INIS)

    Creutz, M.

    1985-01-01

A review is given of why recent simulations of lattice gauge theories have resulted in substantial demands from particle theorists for supercomputer time. These calculations have yielded first-principles results on non-perturbative aspects of the strong interactions. An algorithm for simulating dynamical quark fields is discussed. 14 refs.

  11. Algorithms for supercomputers

    International Nuclear Information System (INIS)

    Alder, B.J.

    1986-01-01

    Better numerical procedures, improved computational power and additional physical insights have contributed significantly to progress in dealing with classical and quantum statistical mechanics problems. Past developments are discussed and future possibilities outlined

  12. Algorithms for supercomputers

    International Nuclear Information System (INIS)

    Alder, B.J.

    1985-12-01

    Better numerical procedures, improved computational power and additional physical insights have contributed significantly to progress in dealing with classical and quantum statistical mechanics problems. Past developments are discussed and future possibilities outlined

  13. An assessment of worldwide supercomputer usage

    Energy Technology Data Exchange (ETDEWEB)

    Wasserman, H.J.; Simmons, M.L.; Hayes, A.H.

    1995-01-01

    This report provides a comparative study of advanced supercomputing usage in Japan and the United States as of Spring 1994. It is based on the findings of a group of US scientists whose careers have centered on programming, evaluating, and designing high-performance supercomputers for over ten years. The report is a follow-on to an assessment of supercomputing technology in Europe and Japan that was published in 1993. Whereas the previous study focused on supercomputer manufacturing capabilities, the primary focus of the current work was to compare where and how supercomputers are used. Research for this report was conducted through both literature studies and field research in Japan.

  14. Power reserves in the end-user market

    International Nuclear Information System (INIS)

    Livik, K.; Mo, B.

    1994-10-01

Based on detailed modelling of the end-user electric power market, it is evaluated how a selection of energy conservation efforts will affect Norway's system curve for the day of maximum power load. The analysis given in this report is based on empirical load data and statistical analyses of how changing simultaneity in power consumption by the end-users contributes to an aggregated effect on the maximum power of the system. The computer code PMAX was used for the simulation. The following efforts are considered: controlling water heaters in the housing sector; replacing electricity by oil for heating in the housing sector and in the service sector; implementation of temperature control with reduced temperature at night/day in the housing and service sectors; energy conservation in the service sector; and heat recovery in the service sector. Transition from electricity to oil in heating is the effort which most strongly affects the energy consumption and power load on the day of peak power load. The discussion excludes energy-intensive industries, pumps and boilers. 11 refs., 16 figs., 6 tabs

  15. Multi-petascale highly efficient parallel supercomputer

    Science.gov (United States)

    Asaad, Sameh; Bellofatto, Ralph E.; Blocksome, Michael A.; Blumrich, Matthias A.; Boyle, Peter; Brunheroto, Jose R.; Chen, Dong; Cher, Chen -Yong; Chiu, George L.; Christ, Norman; Coteus, Paul W.; Davis, Kristan D.; Dozsa, Gabor J.; Eichenberger, Alexandre E.; Eisley, Noel A.; Ellavsky, Matthew R.; Evans, Kahn C.; Fleischer, Bruce M.; Fox, Thomas W.; Gara, Alan; Giampapa, Mark E.; Gooding, Thomas M.; Gschwind, Michael K.; Gunnels, John A.; Hall, Shawn A.; Haring, Rudolf A.; Heidelberger, Philip; Inglett, Todd A.; Knudson, Brant L.; Kopcsay, Gerard V.; Kumar, Sameer; Mamidala, Amith R.; Marcella, James A.; Megerian, Mark G.; Miller, Douglas R.; Miller, Samuel J.; Muff, Adam J.; Mundy, Michael B.; O'Brien, John K.; O'Brien, Kathryn M.; Ohmacht, Martin; Parker, Jeffrey J.; Poole, Ruth J.; Ratterman, Joseph D.; Salapura, Valentina; Satterfield, David L.; Senger, Robert M.; Smith, Brian; Steinmacher-Burow, Burkhard; Stockdell, William M.; Stunkel, Craig B.; Sugavanam, Krishnan; Sugawara, Yutaka; Takken, Todd E.; Trager, Barry M.; Van Oosten, James L.; Wait, Charles D.; Walkup, Robert E.; Watson, Alfred T.; Wisniewski, Robert W.; Wu, Peng

    2015-07-14

A Multi-Petascale Highly Efficient Parallel Supercomputer of 100 petaOPS-scale computing, at decreased cost, power and footprint, and that allows for a maximum packaging density of processing nodes from an interconnect point of view. The Supercomputer exploits technological advances in VLSI that enable a computing model where many processors can be integrated into a single Application Specific Integrated Circuit (ASIC). Each ASIC computing node comprises a system-on-chip ASIC utilizing four or more processors integrated into one die, with each having full access to all system resources, enabling adaptive partitioning of the processors to functions such as compute or messaging I/O on an application-by-application basis, and preferably enabling adaptive partitioning of functions in accordance with various algorithmic phases within an application; if I/O or other processors are underutilized, they can participate in computation or communication. Nodes are interconnected by a five-dimensional torus network with DMA that optimally maximizes the throughput of packet communications between nodes and minimizes latency.

  16. Supercomputing Centers and Electricity Service Providers

    DEFF Research Database (Denmark)

    Patki, Tapasya; Bates, Natalie; Ghatikar, Girish

    2016-01-01

Supercomputing Centers (SCs) have high and variable power demands, which increase the challenges of the Electricity Service Providers (ESPs) with regards to efficient electricity distribution and reliable grid operation. High penetration of renewable energy generation further exacerbates this problem. In order to develop a symbiotic relationship between the SCs and their ESPs and to support effective power management at all levels, it is critical to understand and analyze how the existing relationships were formed and how these are expected to evolve. In this paper, we first present results from a detailed, quantitative survey-based analysis and compare the perspectives of the European grid and SCs to the ones of the United States (US). We then show that contrary to the expectation, SCs in the US are more open toward cooperating and developing demand-management strategies with their ESPs.

  17. Absorbed Power Minimization in Cellular Users with Circular Antenna Arrays

    Science.gov (United States)

    Christofilakis, Vasilis; Votis, Constantinos; Tatsis, Giorgos; Raptis, Vasilis; Kostarakis, Panos

    2010-01-01

Nowadays electromagnetic pollution from non-ionizing radiation generated by cellular phones concerns millions of people. In this paper the use of a circular antenna array as a means of minimizing the power absorbed by cellular phone users is introduced. In particular, the different characteristics of radiation patterns produced by a conventional helical antenna used in mobile phones operating at 900 MHz and those produced by a circular antenna array, hypothetically used in the same mobile phones, are examined in detail. Furthermore, the percentage decrease of the power absorbed in the head as a function of direction of arrival is estimated for the circular antenna array.

  18. Power-Consumption Measurements for LTE User Equipment

    DEFF Research Database (Denmark)

    Lauridsen, Mads

This application note shows how to use a wireless communications test set and the Agilent N6705B DC power analyzer to establish a power consumption model for LTE user equipment (UE). The model is useful when you need to examine the UE battery life in system-level simulations. We will explain how the Agilent equipment can be used in manual tests, but we do not discuss how to make automated tests (for example, using VEE software). In this application note, we analyze smartphones adhering to the 3GPP LTE standard [1].
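A minimal sketch of the shape such a measurement-based UE power model typically takes, a baseline term plus a transmit-power-dependent term (every name and coefficient below is an illustrative placeholder to be fitted from DC power-analyzer measurements, not a value from the note):

```python
def ue_power_mw(tx_dbm: float, connected: bool = True) -> float:
    """Toy LTE UE power model: idle baseline, connected baseline,
    and a power-amplifier term that grows with transmit power.
    All coefficients are assumed placeholders, not measured values."""
    P_IDLE = 25.0    # mW while RRC idle (assumption)
    P_CONN = 1200.0  # mW baseline while RRC connected (assumption)
    PA_COEFF = 0.5   # mW per mW of radiated power (assumption)
    if not connected:
        return P_IDLE
    # dBm is logarithmic: linear-scale TX power is 10**(dBm/10) mW.
    return P_CONN + PA_COEFF * 10 ** (tx_dbm / 10)

print(ue_power_mw(23.0))                    # near max LTE UE TX power (+23 dBm)
print(ue_power_mw(0.0, connected=False))    # idle baseline
```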

  19. Status of supercomputers in the US

    International Nuclear Information System (INIS)

    Fernbach, S.

    1985-01-01

Current supercomputers, that is, the Class VI machines which first became available in 1976, are being delivered in greater quantity than ever before. In addition, manufacturers are busily working on Class VII machines to be ready for delivery in CY 1987. Mainframes are being modified or designed to take on some features of the supercomputers, and new companies, with the intent of either competing directly in the supercomputer arena or of providing entry-level systems from which to graduate to supercomputers, are springing up everywhere. Even well-founded organizations like IBM and CDC are adding machines with vector instructions to their repertoires. Japanese-manufactured supercomputers are also being introduced into the U.S. Will these begin to compete with those of U.S. manufacture? Are they truly competitive? It turns out that from both the hardware and software points of view they may be superior. We may be facing the same problems in supercomputers that we faced in video systems.

  20. Synapse: neural network for predicting power consumption: user's guide

    Energy Technology Data Exchange (ETDEWEB)

    Muller, C; Mangeas, M; Perrot, N

    1994-08-01

    SYNAPSE is a forecasting tool designed to predict power consumption in metropolitan France on the half-hour time scale. Some characteristics distinguish this forecasting model from those which already exist. In particular, it is composed of numerous neural networks. The idea of using many neural networks arises from past tests, which showed us that a single neural network is not able to solve the problem correctly. Following this result, we decided to perform an unsupervised classification of the 24-hour consumption curves. From this classification, six classes appeared, linked with the type of day: Mondays, Tuesdays, Wednesdays, Thursdays, Fridays, Saturdays, Sundays, holidays and bridge days. For each class and for each half hour, two multilayer perceptrons are built. Both of them forecast the power for one particular half hour of a day belonging to one of the determined classes. The inputs of these two networks differ: the first one (short-term forecasting) takes the powers for the most recent half hours and the relative power of the previous day; the second (medium-term forecasting) takes only the relative power of the previous day. A process connects the results of all the networks and allows one to forecast more than one half-hour in advance. In this process, the short-term and medium-term forecasting networks are used differently. The first kind of neural network gives good results on the scale of one day; the second gives good forecasts for the next predicted powers. In this note, the organization of the SYNAPSE program is detailed and the user's menu is described. This first version of SYNAPSE works and should allow the APC group to evaluate its utility. (authors). 6 refs., 2 appends.
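
    The dispatch scheme described here is easy to prototype. The sketch below is a hypothetical reconstruction in Python, not the original code: sklearn's MLPRegressor stands in for the perceptrons, and the synthetic data, slot choice, and connection rule are invented for illustration.

```python
# Hypothetical sketch of the SYNAPSE-style slot models: for one
# (day class, half hour) slot, a short-term MLP uses the most recent
# half-hour powers plus the previous day's relative power, while a
# medium-term MLP uses only the previous day's relative power.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Toy training data for a single slot, e.g. ("midweek", half hour 36).
recent = rng.normal(size=(300, 3))       # three most recent half-hour powers
prev_day = rng.normal(size=(300, 1))     # relative power of the previous day
load = 0.6 * recent[:, -1] + 0.4 * prev_day[:, 0] + rng.normal(0, 0.05, 300)

X_short = np.hstack([recent, prev_day])  # short-term inputs
X_medium = prev_day                      # medium-term inputs

short_mlp = MLPRegressor(hidden_layer_sizes=(8,), max_iter=3000,
                         random_state=0).fit(X_short, load)
medium_mlp = MLPRegressor(hidden_layer_sizes=(8,), max_iter=3000,
                          random_state=0).fit(X_medium, load)

def forecast(steps_ahead, recent_powers, prev_day_power):
    """Connection rule (invented for illustration): use the short-term
    model while recent measurements are available, otherwise fall back
    to the medium-term model for longer horizons."""
    if steps_ahead <= 1:
        x = np.hstack([recent_powers, prev_day_power]).reshape(1, -1)
        return short_mlp.predict(x)[0]
    return medium_mlp.predict(np.array([[prev_day_power]]))[0]

print(forecast(1, rng.normal(size=3), 0.2), forecast(4, rng.normal(size=3), 0.2))
```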

  1. TOP500 Supercomputers for June 2004

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2004-06-23

    23rd Edition of TOP500 List of World's Fastest Supercomputers Released: Japan's Earth Simulator Enters Third Year in Top Position MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a closely watched event in the world of high-performance computing, the 23rd edition of the TOP500 list of the world's fastest supercomputers was released today (June 23, 2004) at the International Supercomputer Conference in Heidelberg, Germany.

  2. HPL and STREAM Benchmarks on SANAM Supercomputer

    KAUST Repository

    Bin Sulaiman, Riman A.

    2017-01-01

    SANAM supercomputer was jointly built by KACST and FIAS in 2012 ranking second that year in the Green500 list with a power efficiency of 2.3 GFLOPS/W (Rohr et al., 2014). It is a heterogeneous accelerator-based HPC system that has 300 compute nodes. Each node includes two Intel Xeon E5-2650 CPUs, two AMD FirePro S10000 dual GPUs and 128 GiB of main memory. In this work, the seven benchmarks of HPCC were installed and configured to reassess the performance of SANAM, as part of an unpublished master thesis, after it was reassembled in the Kingdom of Saudi Arabia. We present here detailed results of HPL and STREAM benchmarks.

  3. HPL and STREAM Benchmarks on SANAM Supercomputer

    KAUST Repository

    Bin Sulaiman, Riman A.

    2017-03-13

    SANAM supercomputer was jointly built by KACST and FIAS in 2012 ranking second that year in the Green500 list with a power efficiency of 2.3 GFLOPS/W (Rohr et al., 2014). It is a heterogeneous accelerator-based HPC system that has 300 compute nodes. Each node includes two Intel Xeon E5-2650 CPUs, two AMD FirePro S10000 dual GPUs and 128 GiB of main memory. In this work, the seven benchmarks of HPCC were installed and configured to reassess the performance of SANAM, as part of an unpublished master thesis, after it was reassembled in the Kingdom of Saudi Arabia. We present here detailed results of HPL and STREAM benchmarks.
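
    For reference, the STREAM "triad" kernel that dominates such bandwidth measurements is simply a[i] = b[i] + q*c[i]. The NumPy sketch below only illustrates what the benchmark measures; the official benchmark is a carefully compiled C/Fortran code, and SANAM's published figures come from that, not from a script like this.

```python
# Rough NumPy illustration of the STREAM "triad" kernel, a[i] = b[i] + q*c[i].
import time
import numpy as np

n = 20_000_000                 # large enough that the arrays cannot fit in cache
q = 3.0
b = np.random.rand(n)
c = np.random.rand(n)

t0 = time.perf_counter()
a = b + q * c                  # triad (NumPy allocates a temporary for q*c,
dt = time.perf_counter() - t0  # so true memory traffic exceeds the ideal)

bytes_ideal = 3 * n * 8        # STREAM accounting: read b, read c, write a
print(f"triad: {bytes_ideal / dt / 1e9:.1f} GB/s (approximate, single node)")
```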

  4. Flux-Level Transit Injection Experiments with NASA Pleiades Supercomputer

    Science.gov (United States)

    Li, Jie; Burke, Christopher J.; Catanzarite, Joseph; Seader, Shawn; Haas, Michael R.; Batalha, Natalie; Henze, Christopher; Christiansen, Jessie; Kepler Project, NASA Advanced Supercomputing Division

    2016-06-01

    Flux-Level Transit Injection (FLTI) experiments are executed with NASA's Pleiades supercomputer for the Kepler Mission. The latest release (9.3, January 2016) of the Kepler Science Operations Center Pipeline is used in the FLTI experiments. Their purpose is to validate the Analytic Completeness Model (ACM), which can be computed for all Kepler target stars, thereby enabling exoplanet occurrence rate studies. Pleiades, a facility of NASA's Advanced Supercomputing Division, is one of the world's most powerful supercomputers and represents NASA's state-of-the-art technology. We discuss the details of implementing the FLTI experiments on the Pleiades supercomputer. For example, taking into account that ~16 injections are generated by one core of the Pleiades processors in an hour, the “shallow” FLTI experiment, in which ~2000 injections are required per target star, can be done for 16% of all Kepler target stars in about 200 hours. Stripping down the transit search to bare bones, i.e. only searching adjacent high/low periods at high/low pulse durations, makes the computationally intensive FLTI experiments affordable. The design of the FLTI experiments and the analysis of the resulting data are presented in “Validating an Analytic Completeness Model for Kepler Target Stars Based on Flux-level Transit Injection Experiments” by Catanzarite et al. (#2494058). Kepler was selected as the 10th mission of the Discovery Program. Funding for the Kepler Mission has been provided by the NASA Science Mission Directorate.
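
    The quoted throughput figures can be sanity-checked with simple arithmetic. In the sketch below, only the 16 injections per core-hour, ~2000 injections per star, 16% coverage, and 200 hours come from the abstract; the Kepler target-list size (~200,000 stars) and the implied core count are assumptions added for illustration.

```python
# Back-of-the-envelope check of the "shallow" FLTI throughput quoted above.
injections_per_core_hour = 16
injections_per_star = 2000
core_hours_per_star = injections_per_star / injections_per_core_hour  # 125

kepler_targets = 200_000          # assumed approximate target-list size
stars_covered = 0.16 * kepler_targets                                 # 32,000
total_core_hours = stars_covered * core_hours_per_star                # 4.0e6

wall_hours = 200
cores_needed = total_core_hours / wall_hours                          # 20,000
print(f"{core_hours_per_star:.0f} core-h/star, "
      f"{total_core_hours:.2e} core-h total, "
      f"~{cores_needed:.0f} cores sustained over {wall_hours} h")
```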

  5. Porting Ordinary Applications to Blue Gene/Q Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Maheshwari, Ketan C.; Wozniak, Justin M.; Armstrong, Timothy; Katz, Daniel S.; Binkowski, T. Andrew; Zhong, Xiaoliang; Heinonen, Olle; Karpeyev, Dmitry; Wilde, Michael

    2015-08-31

    Efficiently porting ordinary applications to Blue Gene/Q supercomputers is a significant challenge. Codes are often originally developed without considering advanced architectures and related tool chains. Science needs frequently lead users to want to run large numbers of relatively small jobs (often called many-task computing, an ensemble, or a workflow), which can conflict with supercomputer configurations. In this paper, we discuss techniques developed to execute ordinary applications over leadership class supercomputers. We use the high-performance Swift parallel scripting framework and build two workflow execution techniques: sub-jobs and main-wrap. The sub-jobs technique, built on top of the IBM Blue Gene/Q resource manager Cobalt's sub-block jobs, lets users submit multiple, independent, repeated smaller jobs within a single larger resource block. The main-wrap technique is a scheme that enables C/C++ programs to be defined as functions that are wrapped by a high-performance Swift wrapper and that are invoked as a Swift script. We discuss the needs, benefits, technicalities, and current limitations of these techniques. We further discuss the real-world science enabled by these techniques and the results obtained.
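
    As a loose analogy for the sub-jobs pattern (not the actual Swift/Cobalt machinery), the following Python sketch packs many small, independent tasks into one large worker pool standing in for a single resource block.

```python
# Loose Python analogy of the "sub-jobs" pattern: many small, independent
# tasks run inside one large allocation instead of being submitted as
# separate scheduler jobs. The real implementation uses Swift over Cobalt's
# sub-block jobs on Blue Gene/Q; the pool below merely stands in for the
# resource block.
from concurrent.futures import ProcessPoolExecutor

def run_task(task_id: int) -> int:
    # Each "sub-job" would normally invoke an ordinary application binary;
    # a small computation stands in here so the sketch stays self-contained.
    return sum(i * i for i in range(1000)) + task_id

if __name__ == "__main__":
    # One big "allocation" of 64 workers executes 10,000 small tasks.
    with ProcessPoolExecutor(max_workers=64) as block:
        results = list(block.map(run_task, range(10_000)))
    print(f"completed {len(results)} sub-jobs inside a single allocation")
```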

  6. Supercomputer applications in nuclear research

    International Nuclear Information System (INIS)

    Ishiguro, Misako

    1992-01-01

    The utilization of supercomputers in the Japan Atomic Energy Research Institute is mainly reported. The fields of atomic energy research which frequently use supercomputers and the contents of their computations are outlined. What vectorization is, is explained simply, and nuclear fusion, nuclear reactor physics, the hydrothermal safety of nuclear reactors, the parallelism inherent in atomic energy computations of fluids and others, the algorithms for vector treatment and the speedup gained by vectorization are discussed. At present the Japan Atomic Energy Research Institute uses two FACOM VP 2600/10 systems and three M-780 systems. The contents of computation have changed from criticality computations around 1970, through the analysis of LOCA after the TMI accident, to nuclear fusion research, the design of new types of reactors and reactor safety assessment at present. The methods of using computers have also advanced: from batch processing to time-sharing processing, from one-dimensional to three-dimensional computations, from steady, linear to unsteady, nonlinear computations, from experimental analysis to numerical simulation, and so on. (K.I.)

  7. INTEL: Intel based systems move up in supercomputing ranks

    CERN Multimedia

    2002-01-01

    "The TOP500 supercomputer rankings released today at the Supercomputing 2002 conference show a dramatic increase in the number of Intel-based systems being deployed in high-performance computing (HPC) or supercomputing areas" (1/2 page).

  8. The Pawsey Supercomputer geothermal cooling project

    Science.gov (United States)

    Regenauer-Lieb, K.; Horowitz, F.; Western Australian Geothermal Centre Of Excellence, T.

    2010-12-01

    The Australian Government has funded the Pawsey supercomputer in Perth, Western Australia, providing computational infrastructure intended to support the future operations of the Australian Square Kilometre Array radiotelescope and to boost next-generation computational geosciences in Australia. Supplementary funds have been directed to the development of a geothermal exploration well to research the potential for direct heat use applications at the Pawsey Centre site. Cooling the Pawsey supercomputer may be achieved by geothermal heat exchange rather than by conventional electrical power cooling, thus reducing the carbon footprint of the Pawsey Centre and demonstrating an innovative green technology that is widely applicable in industry and urban centres across the world. The exploration well is scheduled to be completed in 2013, with drilling due to commence in the third quarter of 2011. One year is allocated to finalizing the design of the exploration, monitoring and research well. Success in the geothermal exploration and research program will result in an industrial-scale geothermal cooling facility at the Pawsey Centre, and will provide a world-class student training environment in geothermal energy systems. A similar system is partially funded and in advanced planning to provide base-load air-conditioning for the main campus of the University of Western Australia. Both systems are expected to draw ~80-95 degrees C water from aquifers lying between 2000 and 3000 meters depth from naturally permeable rocks of the Perth sedimentary basin. The geothermal water will be run through absorption chilling devices, which only require heat (as opposed to mechanical work) to power a chilled water stream adequate to meet the cooling requirements. Once the heat has been removed from the geothermal water, licensing issues require the water to be re-injected back into the aquifer system. These systems are intended to demonstrate the feasibility of powering large-scale air

  9. Research on trading patterns of large users' direct power purchase considering consumption of clean energy

    Science.gov (United States)

    Guojun, He; Lin, Guo; Zhicheng, Yu; Xiaojun, Zhu; Lei, Wang; Zhiqiang, Zhao

    2017-03-01

    In order to reduce the stochastic volatility of supply and demand and to maintain the electric power system's stability after large-scale stochastic renewable energy sources are connected to the grid, their development and consumption should be promoted by market means. The bilateral contract transaction model of large users' direct power purchase conforms to the actual situation of our country. The trading pattern of large users' direct power purchase is analyzed in this paper, the characteristics of each type of power generation are summed up, and the centralized matching mode is mainly introduced. Through the establishment of a priority evaluation index system for power generation enterprises and the analysis of their priority based on fuzzy clustering, a method for ranking power generation enterprises' priority in the trading pattern of large users' direct power purchase is put forward. This method offers suggestions for the trading mechanism of large users' direct power purchase, which helps to further promote large users' direct power purchase.

  10. GasFair/PowerFair/EnergyUser '98 : Presentations

    International Nuclear Information System (INIS)

    1998-01-01

    Papers presented at three conferences, reviewing recent activities in the natural gas and electric power industries and matters of concern to energy consumers in North America are contained on this single CD-ROM. Seven presentations relate to the natural gas industry, nine to electric power generation and transmission, and ten to a wide range of topics dealing with various concerns relating to the environment, financial and cost management aspects of energy utilization. Speakers at the GasFair sessions discussed recent developments in natural gas supply, marketing, purchasing, risk management and the impact of energy convergence on natural gas. Presentations at the PowerFair segment dealt with issues in electricity deregulation, supply and financing, purchasing and marketing. Issues discussed at the EnergyUser sessions included presentations dealing with ways to save costs with energy technology and integrated services, environmental performance contracting and engineering and energy cost control. The CD-ROM also contains the summary of a round table discussion and five individual presentations made at the Natural Gas Pipeline Forum. This pre-conference institute dealt with the likely effects of new pipelines and pipeline extensions on North American natural gas consumers. tabs., figs.

  11. Do Smartphone Power Users Protect Mobile Privacy Better than Nonpower Users? Exploring Power Usage as a Factor in Mobile Privacy Protection and Disclosure.

    Science.gov (United States)

    Kang, Hyunjin; Shin, Wonsun

    2016-03-01

    This study examines how consumers' competence at using smartphone technology (i.e., power usage) affects their privacy protection behaviors. A survey conducted with smartphone users shows that power usage influences privacy protection behavior not only directly but also indirectly, through privacy concerns and trust placed in mobile service providers. A follow-up experiment indicates that the effects of power usage on smartphone users' information management can be a function of content personalization. Users high on power usage are less likely to share personal information on personalized mobile sites, but they become more revealing when they interact with nonpersonalized mobile sites.

  12. Fundamental Science with Pulsed Power: Research Opportunities and User Meeting.

    Energy Technology Data Exchange (ETDEWEB)

    Mattsson, Thomas Kjell Rene [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Wootton, Alan James [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Sinars, Daniel Brian [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Spaulding, Dylan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Winget, Don [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2014-10-01

    The fifth Fundamental Science with Pulsed Power: Research Opportunities and User Meeting was held in Albuquerque, NM, July 20-23, 2014. The purpose of the workshop was to bring together leading scientists in four research areas with active fundamental science research at Sandia's Z facility: Magnetized Liner Inertial Fusion (MagLIF), Planetary Science, Astrophysics, and Material Science. The workshop was focused on discussing opportunities for high-impact research using Sandia's Z machine, a future 100 GPa class facility, and possible topics for growing the academic (off-Z-campus) science relevant to the Z Fundamental Science Program (ZFSP) and related projects in astrophysics, planetary science, MagLIF-relevant magnetized HED science, and materials science. The user meeting was for Z collaborative users to: a) hear about the Z accelerator facility status and plans, b) present the status of their research, and c) be provided with a venue to meet and work as groups. Following presentations by Mark Herrmann and Joel Lash on the fundamental science program on Z and the status of the Z facility were plenary sessions for the four research areas. The third day of the workshop was devoted to breakout sessions in the four research areas. The plenary and breakout sessions for the four areas were organized by Dan Sinars (MagLIF), Dylan Spaulding (Planetary Science), Don Winget and Jim Bailey (Astrophysics), and Thomas Mattsson (Material Science). Concluding the workshop was an outbrief session where the leads presented a summary of the discussions in each working group to the full workshop. A summary of discussions and conclusions from each of the research areas follows, and the outbrief slides are included as appendices.

  13. Autonomy and Housing Accessibility Among Powered Mobility Device Users

    Science.gov (United States)

    Brandt, Åse; Lexell, Eva Månsson; Iwarsson, Susanne

    2015-01-01

    OBJECTIVE. To describe environmental barriers, accessibility problems, and powered mobility device (PMD) users’ autonomy indoors and outdoors; to determine the home environmental barriers that generated the most housing accessibility problems indoors, at entrances, and in the close exterior surroundings; and to examine personal factors and environmental components and their association with indoor and outdoor autonomy. METHOD. This cross-sectional study was based on data collected from a sample of 48 PMD users with a spinal cord injury (SCI) using the Impact of Participation and Autonomy and the Housing Enabler instruments. Descriptive statistics and logistic regression were used. RESULTS. More years living with SCI predicted less restriction in autonomy indoors, whereas more functional limitations and accessibility problems related to entrance doors predicted more restriction in autonomy outdoors. CONCLUSION. To enable optimized PMD use, practitioners must pay attention to the relationship between client autonomy and housing accessibility problems. PMID:26356666

  14. TOP500 Supercomputers for June 2005

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2005-06-22

    25th Edition of TOP500 List of World's Fastest Supercomputers Released: DOE/LLNL BlueGene/L and IBM gain Top Positions MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a closely watched event in the world of high-performance computing, the 25th edition of the TOP500 list of the world's fastest supercomputers was released today (June 22, 2005) at the 20th International Supercomputing Conference (ISC2005) in Heidelberg, Germany.

  15. TOP500 Supercomputers for November 2003

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2003-11-16

    22nd Edition of TOP500 List of World's Fastest Supercomputers Released MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a much-anticipated event in the world of high-performance computing, the 22nd edition of the TOP500 list of the world's fastest supercomputers was released today (November 16, 2003). The Earth Simulator supercomputer retains the number one position with its Linpack benchmark performance of 35.86 Tflop/s ("teraflops" or trillions of calculations per second). It was built by NEC and installed last year at the Earth Simulator Center in Yokohama, Japan.

  16. How do users interact with photovoltaic-powered products? Investigating 100 'lead-users' and 6 PV products

    NARCIS (Netherlands)

    Apostolou, G.; Reinders, Angelina H.M.E.

    2016-01-01

    In order to better understand how 'lead-users' interact with PV-powered products, the behaviour of 100 people interacting with six different PV-powered products in their daily life was analysed. The sample of respondents to be observed consisted of 20 groups, each one formed by five students of

  17. TOP500 Supercomputers for November 2004

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2004-11-08

    24th Edition of TOP500 List of World's Fastest Supercomputers Released: DOE/IBM BlueGene/L and NASA/SGI's Columbia gain Top Positions MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a closely watched event in the world of high-performance computing, the 24th edition of the TOP500 list of the world's fastest supercomputers was released today (November 8, 2004) at the SC2004 Conference in Pittsburgh, Pa.

  18. Plasma turbulence calculations on supercomputers

    International Nuclear Information System (INIS)

    Carreras, B.A.; Charlton, L.A.; Dominguez, N.; Drake, J.B.; Garcia, L.; Leboeuf, J.N.; Lee, D.K.; Lynch, V.E.; Sidikman, K.

    1991-01-01

    Although the single-particle picture of magnetic confinement is helpful in understanding some basic physics of plasma confinement, it does not give a full description. Collective effects dominate plasma behavior. Any analysis of plasma confinement requires a self-consistent treatment of the particles and fields. The general picture is further complicated because the plasma, in general, is turbulent. The study of fluid turbulence is a rather complex field by itself. In addition to the difficulties of classical fluid turbulence, plasma turbulence studies face the problems caused by the induced magnetic turbulence, which couples back to the fluid. Since the fluid is not a perfect conductor, this turbulence can lead to changes in the topology of the magnetic field structure, causing the magnetic field lines to wander radially. Because the plasma fluid flows along field lines, they carry the particles with them, and this enhances the losses caused by collisions. The changes in topology are critical for the plasma confinement. The study of plasma turbulence and the concomitant transport is a challenging problem. Because of the importance of solving the plasma turbulence problem for controlled thermonuclear research, the high complexity of the problem, and the necessity of attacking the problem with supercomputers, the study of plasma turbulence in magnetic confinement devices is a Grand Challenge problem

  19. Government policy on privatization of power in Nigeria: end-users ...

    African Journals Online (AJOL)

    Since the privatization of the power sector in Nigeria in 1999, the cost impact on end-users has not received adequate empirical evaluation. Using data from consumers of privatized electric power in Enugu State, this paper sought to evaluate the significant effect of privatization on end-users of the electricity product.

  20. TOP500 Supercomputers for June 2003

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2003-06-23

    21st Edition of TOP500 List of World's Fastest Supercomputers Released MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a much-anticipated event in the world of high-performance computing, the 21st edition of the TOP500 list of the world's fastest supercomputers was released today (June 23, 2003). The Earth Simulator supercomputer built by NEC and installed last year at the Earth Simulator Center in Yokohama, Japan, with its Linpack benchmark performance of 35.86 Tflop/s (teraflops or trillions of calculations per second), retains the number one position. The number 2 position is held by the re-measured ASCI Q system at Los Alamos National Laboratory. With 13.88 Tflop/s, it is the second system ever to exceed the 10 Tflop/s mark. ASCI Q was built by Hewlett-Packard and is based on the AlphaServer SC computer system.

  1. Advanced parallel processing with supercomputer architectures

    International Nuclear Information System (INIS)

    Hwang, K.

    1987-01-01

    This paper investigates advanced parallel processing techniques and innovative hardware/software architectures that can be applied to boost the performance of supercomputers. Critical issues on architectural choices, parallel languages, compiling techniques, resource management, concurrency control, programming environment, parallel algorithms, and performance enhancement methods are examined and the best answers are presented. The authors cover advanced processing techniques suitable for supercomputers, high-end mainframes, minisupers, and array processors. The coverage emphasizes vectorization, multitasking, multiprocessing, and distributed computing. In order to achieve these operation modes, parallel languages, smart compilers, synchronization mechanisms, load balancing methods, mapping parallel algorithms, operating system functions, application library, and multidiscipline interactions are investigated to ensure high performance. At the end, they assess the potentials of optical and neural technologies for developing future supercomputers

  2. Comparison of seating, powered characteristics and functions and costs of electrically powered wheelchairs in a general population of users.

    Science.gov (United States)

    Dolan, Michael John; Bolton, Megan Jennifer; Henderson, Graham Iain

    2017-10-26

    To profile and compare the seating and powered characteristics and functions of electrically powered wheelchairs (EPWs) in a general user population, including equipment costs. Case notes of adult EPW users of a regional NHS service were reviewed retrospectively. Seating equipment complexity and type were categorized using the Edinburgh classification. Powered characteristics and functions, including control device type, were recorded. 482 cases were included; 53.9% female; mean duration of EPW use 8.1 years (SD 7.4); rear wheel drive 88.0%; hand joystick 94.8%. Seating complexity: low 73.2%, medium 18.0%, high 8.7%. Most prevalent diagnoses: multiple sclerosis (MS) 25.3%, cerebral palsy (CP) 18.7%, muscular dystrophy (MD) 8.5%. Compared to CP users, MS users were significantly older at first use, less experienced, more likely to have mid-wheel drive and less complex seating. Additional costs for MD and spinal cord injury (SCI) users were 3-4 times those for stroke users. This is the first large study of a general EPW user population using a seating classification. Significant differences were found between diagnostic groups; nevertheless, there was also high diversity within each group. The differences in provision and the equipment costs across diagnostic groups can be used to improve service planning. Implications for Rehabilitation: At a service planning level, knowledge of a population's diagnostic group and age distribution can be used to inform decisions about the number of required EPWs and equipment costs. At a user level, purchasing decisions about powered characteristics and functions of EPWs and specialised seating equipment need to be taken on a case by case basis because of the diversity of users' needs within diagnostic groups. The additional equipment costs for SCI and MD users are several times those of stroke users and add between 60 and 70% of the cost of basic provision.

  3. The power of ground user in recommender systems.

    Directory of Open Access Journals (Sweden)

    Yanbo Zhou

    Accuracy and diversity are two important aspects to evaluate the performance of recommender systems. Two diffusion-based methods were proposed, inspired respectively by the mass diffusion (MD) and heat conduction (HC) processes on networks. It has been pointed out that MD has high recommendation accuracy yet low diversity, while HC succeeds in seeking out novel or niche items but with relatively low accuracy. The accuracy-diversity dilemma is a long-term challenge in recommender systems. To solve this problem, we introduced a background temperature by adding a ground user who connects to all the items in the user-item bipartite network. Performing the HC algorithm on the network with the ground user (GHC), it showed that the accuracy can be largely improved while keeping the diversity. Furthermore, we proposed a weighted form of the ground user (WGHC) by assigning some weights to the newly added links between the ground user and the items. By tuning the weight as a free parameter, an optimal value subject to the highest accuracy is obtained. Experimental results on three benchmark data sets showed that the WGHC outperforms the state-of-the-art method MD for both accuracy and diversity.

  4. The power of ground user in recommender systems.

    Science.gov (United States)

    Zhou, Yanbo; Lü, Linyuan; Liu, Weiping; Zhang, Jianlin

    2013-01-01

    Accuracy and diversity are two important aspects to evaluate the performance of recommender systems. Two diffusion-based methods were proposed respectively inspired by the mass diffusion (MD) and heat conduction (HC) processes on networks. It has been pointed out that MD has high recommendation accuracy yet low diversity, while HC succeeds in seeking out novel or niche items but with relatively low accuracy. The accuracy-diversity dilemma is a long-term challenge in recommender systems. To solve this problem, we introduced a background temperature by adding a ground user who connects to all the items in the user-item bipartite network. Performing the HC algorithm on the network with ground user (GHC), it showed that the accuracy can be largely improved while keeping the diversity. Furthermore, we proposed a weighted form of the ground user (WGHC) by assigning some weights to the newly added links between the ground user and the items. By tuning the weight as a free parameter, an optimal value subject to the highest accuracy is obtained. Experimental results on three benchmark data sets showed that the WGHC outperforms the state-of-the-art method MD for both accuracy and diversity.
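
    The ground-user construction is straightforward to prototype. The toy sketch below (invented adjacency matrix and variable names; only the algorithm structure follows the paper) performs one heat-conduction pass on a bipartite network augmented with a weighted ground user.

```python
# Toy sketch of heat conduction with a (weighted) ground user (W/GHC).
import numpy as np

# Binary user-item adjacency: rows = users, cols = items.
A = np.array([[1, 1, 0, 0, 1],
              [0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0],
              [0, 0, 0, 1, 1]], dtype=float)

def ghc_scores(A, target_user, w=1.0):
    """One HC pass on the bipartite net plus a ground user of link weight w."""
    ground = np.full((1, A.shape[1]), w)     # ground user connects to all items
    B = np.vstack([A, ground])
    f = B[target_user].copy()                # initial resource on collected items
    k_user = B.sum(axis=1)                   # user degrees (incl. ground user)
    k_item = B.sum(axis=0)                   # item degrees
    u = (B @ f) / k_user                     # items -> users: average over items
    f2 = (B.T @ u) / k_item                  # users -> items: average over users
    f2[A[target_user] > 0] = -np.inf         # never re-recommend collected items
    return f2

print(np.argsort(ghc_scores(A, target_user=0))[::-1][:2])  # top-2 items
```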

  5. MEGADOCK 4.0: an ultra-high-performance protein-protein docking software for heterogeneous supercomputers.

    Science.gov (United States)

    Ohue, Masahito; Shimoda, Takehiro; Suzuki, Shuji; Matsuzaki, Yuri; Ishida, Takashi; Akiyama, Yutaka

    2014-11-15

    The application of protein-protein docking in large-scale interactome analysis is a major challenge in structural bioinformatics and requires huge computing resources. In this work, we present MEGADOCK 4.0, an FFT-based docking software that makes extensive use of recent heterogeneous supercomputers and shows powerful, scalable performance of >97% strong scaling. MEGADOCK 4.0 is written in C++ with OpenMPI and NVIDIA CUDA 5.0 (or later) and is freely available to all academic and non-profit users at: http://www.bi.cs.titech.ac.jp/megadock. Contact: akiyama@cs.titech.ac.jp. Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press.
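
    The FFT trick that underlies this class of docking codes evaluates the correlation of receptor and ligand grids over all relative translations in one shot. Below is a toy illustration only, with dummy occupancy grids and scoring; the real code adds rotational sampling, physics-based score terms, and GPU/MPI parallelism.

```python
# Toy illustration of FFT-based rigid-docking scoring: the correlation of a
# receptor grid with a ligand grid over every translation, via 3D FFTs.
import numpy as np

n = 32
rng = np.random.default_rng(1)
receptor = (rng.random((n, n, n)) > 0.97).astype(float)  # dummy occupancy grid
ligand = (rng.random((n, n, n)) > 0.97).astype(float)

R = np.fft.fftn(receptor)
L = np.fft.fftn(ligand)
corr = np.fft.ifftn(np.conj(R) * L).real   # score for every relative translation

best = np.unravel_index(np.argmax(corr), corr.shape)
print("best translation (voxels):", best, "score:", corr[best])
```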

  6. User guide to power management for PCs and monitors

    Energy Technology Data Exchange (ETDEWEB)

    Nordman, B.; Piette, M.A.; Kinney, K.; Webber, C. [Lawrence Berkeley National Lab., CA (United States). Environmental Energy Technologies Div.

    1997-01-01

    Power management of personal computers (PCs) and monitors has the potential to save significant amounts of electricity as well as deliver other economic and environmental benefits. The Environmental Protection Agency's ENERGY STAR® program has transformed the PC market so that equipment capable of power management is now widely available. However, previous studies have found that many Energy Star compliant computer systems are not accomplishing energy savings. The principal reasons for this are systems not being enabled for power management or a circumstance that prevents power management from operating. This guide is intended to provide information to computer support workers to increase the portion of systems that successfully power manage. The guide introduces power management concepts and the variety of benefits that power management can bring. It then explains how the parts of a computer system work together to enter and leave power management states. Several common computer system types are addressed, as well as the complications that networks bring to power management. Detailed instructions for checking and configuring several system types are provided, along with troubleshooting advice. The guide concludes with a discussion of how to purchase Energy Star compliant systems and future directions for power management of PCs and related equipment.

  7. User interface design principles for the SSM/PMAD automated power system

    Science.gov (United States)

    Jakstas, Laura M.; Myers, Chris J.

    1991-01-01

    Martin Marietta has developed a user interface for the space station module power management and distribution (SSM/PMAD) automated power system testbed which provides human access to the functionality of the power system, as well as exemplifying current techniques in user interface design. The testbed user interface was designed to enable an engineer to operate the system easily without having significant knowledge of computer systems, as well as provide an environment in which the engineer can monitor and interact with the SSM/PMAD system hardware. The design of the interface supports a global view of the most important data from the various hardware and software components, as well as enabling the user to obtain additional or more detailed data when needed. The components and representations of the SSM/PMAD testbed user interface are examined. An engineer's interactions with the system are also described.

  8. User Context Aware Base Station Power Flow Model

    OpenAIRE

    Walsh, Barbara; Farrell, Ronan

    2005-01-01

    At present the testing of power amplifiers within base station transmitters is limited to testing at component level as opposed to testing at the system level. While the detection of catastrophic failure is possible, that of performance degradation is not. This paper proposes a base station model with respect to transmitter output power with the aim of introducing system level monitoring of the power amplifier behaviour within the base station. Our model reflects the expe...

  9. Adaptability of supercomputers to nuclear computations

    International Nuclear Information System (INIS)

    Asai, Kiyoshi; Ishiguro, Misako; Matsuura, Toshihiko.

    1983-01-01

    Recently, in the field of scientific and technical calculation, the usefulness of supercomputers represented by the CRAY-1 has been recognized, and they are utilized in various countries. The rapid computation of supercomputers is based on their vector computation capability. The authors investigated the adaptability to vector computation of about 40 typical atomic energy codes over the past six years. Based on the results of this investigation, the adaptability of atomic energy codes to the vector computation capability of supercomputers, problems regarding their utilization, and the future prospects are explained. The adaptability of individual calculation codes to vector computation largely depends on the algorithms and program structures used in the codes. The speedup obtained with pipelined vector systems, the investigation at the Japan Atomic Energy Research Institute and its results, and examples of vectorizing codes for atomic energy, environmental safety and nuclear fusion are reported. The speedup factors for the 40 examples ranged from 1.5 to 9.0. It can be said that the adaptability of supercomputers to atomic energy codes is fairly good. (Kako, I.)
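
    The reported speedups come from recasting inner loops so that a pipelined vector unit processes whole arrays at once. The same idea is easy to demonstrate today by replacing an element-by-element loop with a single array operation; the sketch below is a concept illustration only, unrelated to the surveyed codes.

```python
# Illustration of the vectorization idea: the same axpy-style update
# written as a scalar loop and as a single array (vector) operation.
import time
import numpy as np

n = 2_000_000
a, b = np.random.rand(n), np.random.rand(n)
out = np.empty(n)

t0 = time.perf_counter()
for i in range(n):                 # scalar form: one element per iteration
    out[i] = 2.5 * a[i] + b[i]
t_scalar = time.perf_counter() - t0

t0 = time.perf_counter()
out_vec = 2.5 * a + b              # vector form: whole arrays per operation
t_vector = time.perf_counter() - t0

assert np.allclose(out, out_vec)
print(f"speedup from vectorization: {t_scalar / t_vector:.0f}x")
```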

  10. Interactive real-time nuclear plant simulations on a UNIX based supercomputer

    International Nuclear Information System (INIS)

    Behling, S.R.

    1990-01-01

    Interactive real-time nuclear plant simulations are critically important to train nuclear power plant engineers and operators. In addition, real-time simulations can be used to test the validity and timing of plant technical specifications and operational procedures. To accurately and confidently simulate a nuclear power plant transient in real-time, sufficient computer resources must be available. Since some important transients cannot be simulated using preprogrammed responses or non-physical models, commonly used simulation techniques may not be adequate. However, the power of a supercomputer allows one to accurately calculate the behavior of nuclear power plants even during very complex transients. Many of these transients can be calculated in real-time or quicker on the fastest supercomputers. The concept of running interactive real-time nuclear power plant transients on a supercomputer has been tested. This paper describes the architecture of the simulation program, the techniques used to establish real-time synchronization, and other issues related to the use of supercomputers in a new and potentially very important area. (author)

  11. User interface design principles for the SSM/PMAD automated power system

    International Nuclear Information System (INIS)

    Jakstas, L.M.; Myers, C.J.

    1991-01-01

    Computer-human interfaces are an integral part of developing software for spacecraft power systems. A well designed and efficient user interface enables an engineer to effectively operate the system, while it concurrently prevents the user from entering data which is beyond boundary conditions or performing operations which are out of context. A user interface should also be designed to ensure that the engineer easily obtains all useful and critical data for operating the system and is aware of all faults and states in the system. Martin Marietta, under contract to NASA George C. Marshall Space Flight Center, has developed a user interface for the Space Station Module Power Management and Distribution (SSM/PMAD) automated power system testbed which provides human access to the functionality of the power system, as well as exemplifying current techniques in user interface design. The testbed user interface was designed to enable an engineer to operate the system easily without having significant knowledge of computer systems, as well as provide an environment in which the engineer can monitor and interact with the SSM/PMAD system hardware. The design of the interface supports a global view of the most important data from the various hardware and software components, as well as enabling the user to obtain additional or more detailed data when needed. The components and representations of the SSM/PMAD testbed user interface are examined in this paper. An engineer's interactions with the system are also described

  12. End User Research in PowerMatching City II

    NARCIS (Netherlands)

    Wiekens, Carina; Beaulieu, Anne; de Wilde, Jaap; Scherpen, Jacquelien M. A.

    2016-01-01

    In PowerMatching City, the leading Dutch smart grid project, 40 households participated in a field laboratory designed for sustainable living. The participating households were equipped with various decentralized energy sources (PV and micro combined heat-power units), hybrid heat pumps, smart

  13. Power mobility with collision avoidance for older adults: user, caregiver, and prescriber perspectives.

    Science.gov (United States)

    Wang, Rosalie H; Korotchenko, Alexandra; Hurd Clarke, Laura; Mortenson, W Ben; Mihailidis, Alex

    2013-01-01

    Collision avoidance technology has the capacity to facilitate safer mobility among older power mobility users with physical, sensory, and cognitive impairments, thus enabling independence for more users. Little is known about consumers' perceptions of collision avoidance. This article draws on interviews (29 users, 5 caregivers, and 10 prescribers) to examine views on design and utilization of this technology. Data analysis identified three themes: "useful situations or contexts," "technology design issues and real-life application," and "appropriateness of collision avoidance technology for a variety of users." Findings support ongoing development of collision avoidance for older adult users. The majority of participants supported the technology and felt that it might benefit current users and users with visual impairments, but might be unsuitable for people with significant cognitive impairments. Some participants voiced concerns regarding the risk for injury with power mobility use and some identified situations where collision avoidance might be beneficial (driving backward, avoiding dynamic obstacles, negotiating outdoor barriers, and learning power mobility use). Design issues include the need for context awareness, reliability, and user interface specifications. User desire to maintain driving autonomy supports development of collaboratively controlled systems. This research lays the groundwork for future development by illustrating consumer requirements for this technology.

  14. Flair: A powerful but user friendly graphical interface for FLUKA

    International Nuclear Information System (INIS)

    Vlachoudis, V.

    2009-01-01

    FLAIR is an advanced graphical user interface for FLUKA that enables the user to start and control FLUKA jobs completely from a GUI environment, without the need for command-line interactions. It is written entirely in Python and Tkinter, allowing easier portability across various operating systems and great programming flexibility, with a focus on use as an Application Programming Interface (API) for FLUKA. FLAIR is an integrated development environment (IDE) for FLUKA: it not only provides means for post-processing the output, but a big emphasis has also been placed on the creation and checking of error-free input files. It contains a fully featured editor for editing the input files in a human-readable way with syntax highlighting, without hiding the inner functionality of FLUKA from the users. It also provides means for building the executable, debugging the geometry, running the code, monitoring the status of one or many runs, inspecting the output files, post-processing the binary files (data merging) and interfacing to plotting utilities like gnuplot and PovRay for high-quality plots or photo-realistic images. The program also includes a database of selected properties of all known nuclides and their known isotopic compositions, as well as a reference database of ∼ 300 predefined materials together with their Sternheimer parameters. (authors)

  15. PRIS-STATISTICS: Power Reactor Information System Statistical Reports. User's Manual

    International Nuclear Information System (INIS)

    2013-01-01

    The IAEA developed the Power Reactor Information System (PRIS)-Statistics application to assist PRIS end users with generating statistical reports from PRIS data. Statistical reports provide an overview of the status, specification and performance results of every nuclear power reactor in the world. This user's manual was prepared to facilitate the use of the PRIS-Statistics application and to provide guidelines and detailed information for each report in the application. Statistical reports support analyses of nuclear power development and strategies, and the evaluation of nuclear power plant performance. The PRIS database can be used for comprehensive trend analyses and benchmarking against best performers and industrial standards.

  16. An Optimal Joint User Association and Power Allocation Algorithm for Secrecy Information Transmission in Heterogeneous Networks

    Directory of Open Access Journals (Sweden)

    Rong Chai

    2017-01-01

    In recent years, heterogeneous radio access technologies have experienced rapid development and gradually achieved effective coordination and integration, resulting in heterogeneous networks (HetNets). In this paper, we consider the downlink secure transmission of HetNets, where the information transmission from base stations (BSs) to legitimate users is subject to interception by eavesdroppers. In particular, we stress the problem of joint user association and power allocation of the BSs. To achieve data transmission in a secure and energy-efficient manner, we introduce the concept of secrecy energy efficiency, defined as the ratio of the secrecy transmission rate to the power consumption of the BSs, and formulate the joint user association and power allocation problem as an optimization problem which maximizes the joint secrecy energy efficiency of all the BSs under the power constraint of the BSs and the minimum data rate constraint of the user equipment (UE). By equivalently transforming the optimization problem into two subproblems, that is, a power allocation subproblem and a user association subproblem, and applying an iterative method and the Kuhn-Munkres (K-M) algorithm to solve the two subproblems, respectively, the optimal user association and power allocation strategies can be obtained. Numerical results demonstrate that the proposed algorithm outperforms previously proposed algorithms.
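
    The decomposition described here maps the association step onto a classic assignment problem. A minimal sketch follows; the utility matrix is random stand-in data, and scipy's Hungarian-algorithm routine stands in for K-M (which it implements). The paper's full method would recompute powers and re-associate iteratively.

```python
# Sketch of the user-association subproblem: with per-pair powers fixed,
# assigning UEs to BSs to maximize total secrecy energy efficiency (SEE)
# is a bipartite matching solvable by the Kuhn-Munkres algorithm.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
n_bs, n_ue = 4, 4
see = rng.random((n_bs, n_ue))   # stand-in SEE of each (BS, UE) pair

# linear_sum_assignment minimizes cost, so negate to maximize utility.
bs_idx, ue_idx = linear_sum_assignment(-see)
print(list(zip(bs_idx, ue_idx)), "total SEE:", see[bs_idx, ue_idx].sum())
```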

  17. Simulation of Power Control and Multi-User Detection Techniques in DS-CDMA Communication Systems

    Directory of Open Access Journals (Sweden)

    Yuli Christyono

    2012-02-01

    CDMA is an interference-limited multiple access system. Because all users transmit on the same frequency, internal interference generated by the system is the most significant factor in determining system capacity and call quality. The transmit power for each user must be reduced to limit interference; however, the power should be enough to maintain the required Eb/No (signal-to-noise ratio) for a satisfactory call quality. Maximum capacity is achieved when the Eb/No of every user is at the minimum level needed for acceptable channel performance. As the MS moves around, the RF environment continuously changes due to fast and slow fading, external interference, shadowing, and other factors. The aim of dynamic power control is to limit the transmitted power on both links while maintaining link quality under all conditions. Additional advantages are longer mobile battery life and a longer life span of BTS power amplifiers. In this research, a simulation of power control and multi-user detection is built to avoid the interference between MSs. Observations show that an increasing number of users decreases the signal-to-interference ratio (SIR) below the target. The growing number of users can be coped with by iteratively updating the transmit power so that the computation converges and the target SIR value is achieved. In addition, interference can also be reduced by extending the number of chips.
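
    The iterative power update described here can be sketched compactly. The rule below is the standard SIR-balancing update (a Foschini-Miljanic-style iteration); the link-gain matrix, noise level, and SIR target are toy values, not figures from the paper.

```python
# Minimal sketch of iterative transmit-power control: each mobile scales
# its power by (target SIR / measured SIR) until the powers converge.
import numpy as np

G = np.array([[1.0, 0.1, 0.2],   # G[i, j]: gain from transmitter j to receiver i
              [0.2, 1.0, 0.1],
              [0.1, 0.2, 1.0]])
noise = 0.01
sir_target = 2.0                 # must be feasible for the gains above
p = np.ones(3)                   # initial transmit powers

for _ in range(100):
    signal = np.diag(G) * p
    interference = G @ p - signal + noise
    sir = signal / interference
    p = np.minimum(p * sir_target / sir, 10.0)   # update, with max-power cap

signal = np.diag(G) * p
sir = signal / (G @ p - signal + noise)
print("converged powers:", p.round(4), "SIR:", sir.round(2))
```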

  18. User-friendly Tool for Power Flow Analysis and Distributed ...

    African Journals Online (AJOL)

    Akorede

    ... greenhouse gas emissions and the current deregulation of electric energy ... Visual composition and temporal behaviour of GUI.

  19. Prediction of Wind Energy Resources (PoWER) Users Guide

    Science.gov (United States)

    2016-01-01

    ARL-TR-7573 ● JAN 2016. US Army Research Laboratory: Prediction of Wind Energy Resources (PoWER) User's Guide, by David P Sauter.

  20. Supercomputer requirements for theoretical chemistry

    International Nuclear Information System (INIS)

    Walker, R.B.; Hay, P.J.; Galbraith, H.W.

    1980-01-01

    Many problems important to the theoretical chemist would, if implemented in their full complexity, strain the capabilities of today's most powerful computers. Several such problems are now being implemented on the CRAY-1 computer at Los Alamos. Examples of these problems are taken from the fields of molecular electronic structure calculations, quantum reactive scattering calculations, and quantum optics. 12 figures

  1. Mobile user forecast and power-law acceleration invariance of scale-free networks

    International Nuclear Information System (INIS)

    Guo Jin-Li; Guo Zhao-Hua; Liu Xue-Jiao

    2011-01-01

    This paper studies and predicts the number growth of China's mobile users by using power-law regression. We find that the growth of the number of mobile users follows a power law. Motivated by the data on the evolution of the mobile users, we consider scenarios of self-organization of accelerating growth networks into scale-free structures and propose a directed network model in which the nodes grow following a power-law acceleration. The expressions for the transient and the stationary average degree distributions are obtained by using the Poisson process. This result shows that the model generates appropriate power-law connectivity distributions. Therefore, we find a power-law acceleration invariance of the scale-free networks. The numerical simulations of the models agree with the analytical results well. (interdisciplinary physics and related areas of science and technology)
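
    Power-law regression of this kind is usually done as a linear least-squares fit in log-log space. A minimal sketch with synthetic data follows; the paper's actual subscriber data are not reproduced.

```python
# Fitting y = c * t^alpha by ordinary least squares on log-transformed data.
import numpy as np

t = np.arange(1, 121)                 # months since start (synthetic)
alpha_true, c_true = 1.8, 2.0e4
noise = np.exp(np.random.default_rng(0).normal(0, 0.02, t.size))
users = c_true * t ** alpha_true * noise

alpha, log_c = np.polyfit(np.log(t), np.log(users), 1)
print(f"fitted exponent alpha = {alpha:.2f}, prefactor c = {np.exp(log_c):.3g}")
```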

  2. Mac OS X Snow Leopard for Power Users Advanced Capabilities and Techniques

    CERN Document Server

    Granneman, Scott

    2010-01-01

    Mac OS X Snow Leopard for Power Users: Advanced Capabilities and Techniques is for Mac OS X users who want to go beyond the obvious, the standard, and the easy. If you want to dig deeper into Mac OS X and maximize your skills and productivity using the world's slickest and most elegant operating system, then this is the book for you. Written by Scott Granneman, an experienced teacher, developer, and consultant, Mac OS X for Power Users helps you push Mac OS X to the max, unveiling advanced techniques and options that you may not have known even existed. Create custom workflows and apps with Automator...

  3. User's manual for levelized power generation cost using a microcomputer

    International Nuclear Information System (INIS)

    Fuller, L.C.

    1984-08-01

    Microcomputer programs for the estimation of levelized electrical power generation costs are described. Procedures for light-water reactor plants and coal-fired plants include capital investment cost, operation and maintenance cost, fuel cycle cost, nuclear decommissioning cost, and levelized total generation cost. Programs are written in Pascal and are run on an Apple II Plus microcomputer.
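
    The levelization such programs implement can be illustrated with the standard discounted-cost formulation: levelized cost is the present value of all costs divided by the present value of generated energy. Below is a hedged sketch; the manual's actual cost models, input formats, and Pascal source are not reproduced, and all numbers are illustrative.

```python
# Sketch of a levelized generation cost: PV of costs / PV of energy.
def levelized_cost(capital, annual_om, annual_fuel, decommissioning,
                   annual_kwh, rate, years):
    """Present value of all costs divided by discounted generation, $/kWh."""
    pv_costs = capital + decommissioning / (1 + rate) ** years
    pv_kwh = 0.0
    for y in range(1, years + 1):
        pv_costs += (annual_om + annual_fuel) / (1 + rate) ** y
        pv_kwh += annual_kwh / (1 + rate) ** y
    return pv_costs / pv_kwh

# Example: a 1000 MW plant at 80% capacity factor (illustrative numbers only).
kwh = 1000e3 * 8760 * 0.80
print(f"{levelized_cost(2.5e9, 8.0e7, 1.2e8, 3.0e8, kwh, 0.05, 30):.4f} $/kWh")
```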

  4. A flexible energy market - electric power reserves at the place of the end-user

    International Nuclear Information System (INIS)

    Livik, Klaus; Groenli, Helle; Fretheim, Stig; Feilberg, Nicolai; Gjervan, Sverre; Dymbe, Lars

    1996-01-01

    If both short-term and long-term prices in the electric power market are to be stable, the market must be flexible and both energy- and power-related measures must be taken at the end-user's premises. The report analyses which types of actors in the electric power market profit from a flexible end-user market for energy and power. It is found that a flexible market has utilitarian value for end-users, network owners, suppliers, market operators and system operators, and is also clearly desirable from the point of view of the authorities. Considerable possibilities for developing the flexible energy and power market exist at the end-user level. These possibilities can be realized by means of technological developments and political incentives, such as rules for monopoly control. End-user measures may have considerable socio-economic value. Network owners will see the advantages of such measures especially in areas with high marginal costs of power transport

  5. Personal Supercomputing for Monte Carlo Simulation Using a GPU

    Energy Technology Data Exchange (ETDEWEB)

    Oh, Jae-Yong; Koo, Yang-Hyun; Lee, Byung-Ho [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2008-05-15

    Since the usability, accessibility, and maintenance of a personal computer (PC) are very good, a PC is a useful computer simulation tool for researchers. With the improved performance of a PC's CPU, it has enough calculation power to simulate a small-scale system. However, if a system is large or involves long time scales, we need a cluster computer or supercomputer. Recently, great changes have occurred in the PC calculation environment. A graphics processing unit (GPU) on a graphics card, formerly used only to calculate display data, has a calculation capability superior to a PC's CPU. This GPU calculation performance matches that of a supercomputer from 2000. Although it has such great calculation potential, it is not easy to program a simulation code for a GPU due to the difficult programming techniques required to convert a calculation matrix to a 3D rendering image using graphics APIs. In 2006, NVIDIA provided the Software Development Kit (SDK) for the programming environment for NVIDIA's graphic cards, called the Compute Unified Device Architecture (CUDA). It makes programming on the GPU easy without knowledge of the graphics APIs. This paper describes the basic architectures of NVIDIA's GPU and CUDA, and carries out a performance benchmark for the Monte Carlo simulation.

  6. Personal Supercomputing for Monte Carlo Simulation Using a GPU

    International Nuclear Information System (INIS)

    Oh, Jae-Yong; Koo, Yang-Hyun; Lee, Byung-Ho

    2008-01-01

    Since the usability, accessibility, and maintenance of a personal computer (PC) are very good, a PC is a useful computer simulation tool for researchers. With the improved performance of a PC's CPU, it has enough calculation power to simulate a small-scale system. However, if a system is large or involves long time scales, we need a cluster computer or supercomputer. Recently, great changes have occurred in the PC calculation environment. A graphics processing unit (GPU) on a graphics card, formerly used only to calculate display data, has a calculation capability superior to a PC's CPU. This GPU calculation performance matches that of a supercomputer from 2000. Although it has such great calculation potential, it is not easy to program a simulation code for a GPU due to the difficult programming techniques required to convert a calculation matrix to a 3D rendering image using graphics APIs. In 2006, NVIDIA provided the Software Development Kit (SDK) for the programming environment for NVIDIA's graphic cards, called the Compute Unified Device Architecture (CUDA). It makes programming on the GPU easy without knowledge of the graphics APIs. This paper describes the basic architectures of NVIDIA's GPU and CUDA, and carries out a performance benchmark for the Monte Carlo simulation.

  7. Plane-wave electronic structure calculations on a parallel supercomputer

    International Nuclear Information System (INIS)

    Nelson, J.S.; Plimpton, S.J.; Sears, M.P.

    1993-01-01

    The development of iterative solutions of Schrodinger's equation in a plane-wave (pw) basis over the last several years has coincided with great advances in the computational power available for performing the calculations. These dual developments have enabled many new and interesting condensed matter phenomena to be studied from a first-principles approach. The authors present a detailed description of the implementation on a parallel supercomputer (hypercube) of the first-order equation-of-motion solution to Schrodinger's equation, using plane-wave basis functions and ab initio separable pseudopotentials. By distributing the plane-waves across the processors of the hypercube many of the computations can be performed in parallel, resulting in decreases in the overall computation time relative to conventional vector supercomputers. This partitioning also provides ample memory for large Fast Fourier Transform (FFT) meshes and the storage of plane-wave coefficients for many hundreds of energy bands. The usefulness of the parallel techniques is demonstrated by benchmark timings for both the FFT's and iterations of the self-consistent solution of Schrodinger's equation for different sized Si unit cells of up to 512 atoms

  8. Graphics supercomputer for computational fluid dynamics research

    Science.gov (United States)

    Liaw, Goang S.

    1994-11-01

    The objective of this project is to purchase a state-of-the-art graphics supercomputer to improve the Computational Fluid Dynamics (CFD) research capability at Alabama A & M University (AAMU) and to support the Air Force research projects. A cutting-edge graphics supercomputer system, Onyx VTX, from Silicon Graphics Computer Systems (SGI), was purchased and installed. Other equipment, including a desktop personal computer, PC-486 DX2 with a built-in 10-BaseT Ethernet card, a 10-BaseT hub, an Apple Laser Printer Select 360, and a notebook computer from Zenith, was also purchased. A reading room has been converted to a research computer lab by adding some furniture and an air conditioning unit in order to provide an appropriate working environment for researchers and the purchased equipment. All the purchased equipment was successfully installed and is fully functional. Several research projects, including two existing Air Force projects, are being performed using these facilities.

  9. Users' guide for a personal-computer-based nuclear power plant fire data base

    International Nuclear Information System (INIS)

    Wheelis, W.T.

    1986-08-01

    The Nuclear Power Plant Fire Data Base has been developed for use with an IBM XT (or with a compatible system). Nuclear power plant fire data are located in many diverse references, making them both costly and time-consuming to obtain. The purpose of this Fire Data Base is to collect nuclear power plant fire data and make it easily accessible. This users' guide discusses in depth the specific features and capabilities of the various options found in the data base. Capabilities include the ability to search several database fields simultaneously to meet user-defined conditions, display basic plant information, and determine the operating experience (in years) for several nuclear power plant locations. Step-by-step examples are included for each option to allow the user to learn how to access the data.

  10. FPS scientific computers and supercomputers in chemistry

    International Nuclear Information System (INIS)

    Curington, I.J.

    1987-01-01

    FPS Array Processors, scientific computers, and highly parallel supercomputers are used in nearly all aspects of compute-intensive computational chemistry. A survey is made of work utilizing this equipment, covering both published and current research. The relationship of the computer architecture to computational chemistry is discussed, with specific reference to Molecular Dynamics, Quantum Monte Carlo simulations, and Molecular Graphics applications. Recent installations of the FPS T-Series are highlighted, and examples of Molecular Graphics programs running on the FPS-5000 are shown.

  11. Problem solving in nuclear engineering using supercomputers

    International Nuclear Information System (INIS)

    Schmidt, F.; Scheuermann, W.; Schatz, A.

    1987-01-01

    The availability of supercomputers enables the engineer to formulate new strategies for problem solving. One such strategy is the Integrated Planning and Simulation System (IPSS). With the integrated systems, simulation models with greater consistency and good agreement with actual plant data can be effectively realized. In the present work some of the basic ideas of IPSS are described as well as some of the conditions necessary to build such systems. Hardware and software characteristics as realized are outlined. (orig.)

  12. Explaining the gap between theoretical peak performance and real performance for supercomputer architectures

    International Nuclear Information System (INIS)

    Schoenauer, W.; Haefner, H.

    1993-01-01

    The basic architectures of vector and parallel computers with their properties are presented. Then the memory size and the arithmetic operations in the context of memory bandwidth are discussed. For the exemplary discussion of a single operation, micro-measurements of the vector triad for the IBM 3090 VF and the CRAY Y-MP/8 are presented; they reveal the details of the losses for a single operation. We then analyze the global performance of a whole supercomputer by identifying the reduction factors that bring the theoretical peak performance down to the far lower real performance. The responsibilities of the manufacturer and of the user for these losses are discussed. The price-performance ratio for different architectures in a snapshot of January 1991 is briefly mentioned. Finally, some remarks are made on a user-friendly architecture for a supercomputer. (orig.)
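
    The vector triad used for the micro-measurements is the kernel a(i) = b(i) + c(i)*d(i), i.e., two floating-point operations per element, whose sustained rate is compared against the machine's theoretical peak. A minimal sketch of that comparison (the peak value is a made-up placeholder):

```python
import time
import numpy as np

n = 10_000_000
b, c, d = (np.random.rand(n) for _ in range(3))

t0 = time.perf_counter()
a = b + c * d                      # vector triad: 2 flops per element
elapsed = time.perf_counter() - t0

achieved = 2 * n / elapsed / 1e9   # sustained Gflop/s
peak = 100.0                       # hypothetical theoretical peak, Gflop/s
print(f"achieved {achieved:.2f} Gflop/s; "
      f"reduction factor {peak / achieved:.1f}x below the assumed peak")
```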

  13. Assessing electronic cigarette effects and regulatory impact: Challenges with user self-reported device power.

    Science.gov (United States)

    Rudy, Alyssa K; Leventhal, Adam M; Goldenson, Nicholas I; Eissenberg, Thomas

    2017-10-01

    Electronic cigarettes (ECIGs) aerosolize liquids, which usually contain nicotine, for user inhalation. ECIG nicotine emission is determined, in part, by user behavior, liquid nicotine concentration, and electrical power. Whether users are able to report nicotine concentration and device electrical power accurately has not been evaluated. This study's purpose was to examine whether ECIG users could provide data relevant to understanding ECIG nicotine emission, particularly liquid nicotine concentration (mg/ml) as well as battery voltage (V) and heater resistance (ohms, Ω), which are needed to calculate power (watts, W). Adult ECIG users (N = 165) were recruited from Los Angeles, CA for research studies examining the effects of ECIG use. We asked all participants who visited the laboratory to report liquid nicotine concentration, V, and Ω. Liquid nicotine concentration was reported by 89.7% (mean = 9.5 mg/ml, SD = 7.3), and responses were consistent with the distribution of liquids available in commonly marketed products. The majority could not report voltage (51.5%) or resistance (63.6%). Of the 40 participants (24.8%) who reported both voltage and resistance, there was a substantial power range (2.2-32,670 W), the upper limit of which exceeds that of the highest-power ECIG reported to our knowledge (i.e., 2512 W). If 2512 W is taken as the upper limit, only 30 (18.2%) reported valid results (mean = 237.3 W, SD = 370.6; range = 2.2-1705.3 W). Laboratory, survey, and other researchers interested in understanding ECIG effects to inform users and policymakers may need to use methods other than user self-report to obtain information regarding device power.
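
    The power figures above follow from Ohm's law, P = V^2 / R, so errors in a self-reported voltage compound quadratically. A quick sketch (values illustrative):

```python
def ecig_power_watts(volts: float, ohms: float) -> float:
    """Electrical power from battery voltage and heater coil resistance."""
    return volts ** 2 / ohms

# A plausible self-report: a 3.7 V battery driving a 1.5-ohm coil.
print(ecig_power_watts(3.7, 1.5))   # ~9.1 W
# An extreme pairing of self-reports blows up quadratically in V.
print(ecig_power_watts(57.0, 0.1))  # 32490 W, near the study's upper outlier
```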

  14. Toward a Proof of Concept Cloud Framework for Physics Applications on Blue Gene Supercomputers

    International Nuclear Information System (INIS)

    Dreher, Patrick; Scullin, William; Vouk, Mladen

    2015-01-01

    Traditional high performance supercomputers are capable of delivering large sustained state-of-the-art computational resources to physics applications over extended periods of time using batch processing mode operating environments. However, today there is an increasing demand for more complex workflows that involve large fluctuations in the levels of HPC physics computational requirements during the simulations. Some of the workflow components may also require a richer set of operating system features and schedulers than normally found in a batch oriented HPC environment. This paper reports on progress toward a proof of concept design that implements a cloud framework onto BG/P and BG/Q platforms at the Argonne Leadership Computing Facility. The BG/P implementation utilizes the Kittyhawk utility and the BG/Q platform uses an experimental heterogeneous FusedOS operating system environment. Both platforms use the Virtual Computing Laboratory as the cloud computing system embedded within the supercomputer. This proof of concept design allows a cloud to be configured so that it can capitalize on the specialized infrastructure capabilities of a supercomputer and the flexible cloud configurations without resorting to virtualization. Initial testing of the proof of concept system is done using the lattice QCD MILC code. These types of user reconfigurable environments have the potential to deliver experimental schedulers and operating systems within a working HPC environment for physics computations that may be different from the native OS and schedulers on production HPC supercomputers. (paper)

  15. User manual for PACTOLUS: a code for computing power costs

    International Nuclear Information System (INIS)

    Huber, H.D.; Bloomster, C.H.

    1979-02-01

    PACTOLUS is a computer code for calculating the cost of generating electricity. Through appropriate definition of the input data, PACTOLUS can calculate the cost of generating electricity from a wide variety of power plants, including nuclear, fossil, geothermal, solar, and other types of advanced energy systems. The purpose of PACTOLUS is to develop cash flows and calculate the unit busbar power cost (mills/kWh) over the entire life of a power plant. The cash flow information is calculated by two principal models: the Fuel Model and the Discounted Cash Flow Model. The Fuel Model is an engineering cost model which calculates the cash flow for the fuel cycle costs over the project lifetime based on input data defining the fuel material requirements, the unit costs of fuel materials and processes, the process lead and lag times, and the schedule of the capacity factor for the plant. For nuclear plants, the Fuel Model calculates the cash flow for the entire nuclear fuel cycle. For fossil plants, the Fuel Model calculates the cash flow for the fossil fuel purchases. The Discounted Cash Flow Model combines the fuel costs generated by the Fuel Model with input data on the capital costs, capital structure, licensing time, construction time, rates of return on capital, tax rates, operating costs, and depreciation method of the plant to calculate the cash flow for the entire lifetime of the project. The financial and tax structure for both investor-owned utilities and municipal utilities can be simulated through varying the rates of return on equity and debt, the debt-equity ratios, and tax rates. The Discounted Cash Flow Model uses the principle that the present worth of the revenues will be equal to the present worth of the expenses, including the return on investment, over the economic life of the project. This manual explains how to prepare the input data, execute cases, and interpret the output results with the updated version of PACTOLUS. 11 figures, 2 tables
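
    The balance condition the Discounted Cash Flow Model rests on, present worth of revenues equal to present worth of expenses, pins down the levelized busbar cost. A minimal sketch of that calculation (inputs invented; not PACTOLUS itself):

```python
def levelized_cost_mills_per_kwh(annual_costs, annual_kwh, discount_rate):
    """Busbar cost at which PV(revenues) equals PV(expenses) over plant life.

    annual_costs: yearly expenses (capital recovery, fuel, O&M), in dollars
    annual_kwh:   yearly net generation, in kWh
    """
    pv = lambda xs: sum(x / (1 + discount_rate) ** t
                        for t, x in enumerate(xs, start=1))
    # With level streams the rate cancels; it matters once costs vary by year.
    return pv(annual_costs) / pv(annual_kwh) * 1000.0  # dollars -> mills

costs = [80e6] * 30    # $80M per year over a 30-year life (illustrative)
energy = [7.0e9] * 30  # ~1 GW plant at an 80% capacity factor
print(levelized_cost_mills_per_kwh(costs, energy, 0.08))  # ~11.4 mills/kWh
```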

  16. User manual for PACTOLUS: a code for computing power costs.

    Energy Technology Data Exchange (ETDEWEB)

    Huber, H.D.; Bloomster, C.H.

    1979-02-01

    PACTOLUS is a computer code for calculating the cost of generating electricity. Through appropriate definition of the input data, PACTOLUS can calculate the cost of generating electricity from a wide variety of power plants, including nuclear, fossil, geothermal, solar, and other types of advanced energy systems. The purpose of PACTOLUS is to develop cash flows and calculate the unit busbar power cost (mills/kWh) over the entire life of a power plant. The cash flow information is calculated by two principal models: the Fuel Model and the Discounted Cash Flow Model. The Fuel Model is an engineering cost model which calculates the cash flow for the fuel cycle costs over the project lifetime based on input data defining the fuel material requirements, the unit costs of fuel materials and processes, the process lead and lag times, and the schedule of the capacity factor for the plant. For nuclear plants, the Fuel Model calculates the cash flow for the entire nuclear fuel cycle. For fossil plants, the Fuel Model calculates the cash flow for the fossil fuel purchases. The Discounted Cash Flow Model combines the fuel costs generated by the Fuel Model with input data on the capital costs, capital structure, licensing time, construction time, rates of return on capital, tax rates, operating costs, and depreciation method of the plant to calculate the cash flow for the entire lifetime of the project. The financial and tax structure for both investor-owned utilities and municipal utilities can be simulated through varying the rates of return on equity and debt, the debt-equity ratios, and tax rates. The Discounted Cash Flow Model uses the principle that the present worth of the revenues will be equal to the present worth of the expenses, including the return on investment, over the economic life of the project. This manual explains how to prepare the input data, execute cases, and interpret the output results. (RWR)

  17. CONCEPT-5 user's manual. [Power plant costs

    Energy Technology Data Exchange (ETDEWEB)

    Hudson, C.R. II

    1979-01-01

    The CONCEPT computer code package was developed to provide conceptual capital cost estimates for nuclear-fueled and fossil-fired power plants. Cost estimates can be made as a function of plant type, size, location, and date of initial operation. The output includes a detailed breakdown of the estimate into direct and indirect costs similar to the accounting system described in document NUS-531. Cost models are currently provided in CONCEPT 5 for single- and multi-unit pressurized-water reactors, boiling-water reactors, and coal-fired plants with and without flue gas desulfurization equipment.

  18. Performance modeling of hybrid MPI/OpenMP scientific applications on large-scale multicore supercomputers

    KAUST Repository

    Wu, Xingfu; Taylor, Valerie

    2013-01-01

    In this paper, we present a performance modeling framework based on memory bandwidth contention time and a parameterized communication model to predict the performance of OpenMP, MPI and hybrid applications with weak scaling on three large-scale multicore supercomputers: IBM POWER4, POWER5+ and BlueGene/P, and analyze the performance of these MPI, OpenMP and hybrid applications. We use STREAM memory benchmarks and Intel's MPI benchmarks to provide initial performance analysis and model validation of MPI and OpenMP applications on these multicore supercomputers because the measured sustained memory bandwidth can provide insight into the memory bandwidth that a system should sustain on scientific applications with the same amount of workload per core. In addition to using these benchmarks, we also use a weak-scaling hybrid MPI/OpenMP large-scale scientific application: Gyrokinetic Toroidal Code (GTC) in magnetic fusion to validate our performance model of the hybrid application on these multicore supercomputers. The validation results for our performance modeling method show less than 7.77% error rate in predicting the performance of hybrid MPI/OpenMP GTC on up to 512 cores on these multicore supercomputers.

  19. Performance modeling of hybrid MPI/OpenMP scientific applications on large-scale multicore supercomputers

    KAUST Repository

    Wu, Xingfu

    2013-12-01

    In this paper, we present a performance modeling framework based on memory bandwidth contention time and a parameterized communication model to predict the performance of OpenMP, MPI and hybrid applications with weak scaling on three large-scale multicore supercomputers: IBM POWER4, POWER5+ and BlueGene/P, and analyze the performance of these MPI, OpenMP and hybrid applications. We use STREAM memory benchmarks and Intel's MPI benchmarks to provide initial performance analysis and model validation of MPI and OpenMP applications on these multicore supercomputers because the measured sustained memory bandwidth can provide insight into the memory bandwidth that a system should sustain on scientific applications with the same amount of workload per core. In addition to using these benchmarks, we also use a weak-scaling hybrid MPI/OpenMP large-scale scientific application: Gyrokinetic Toroidal Code (GTC) in magnetic fusion to validate our performance model of the hybrid application on these multicore supercomputers. The validation results for our performance modeling method show less than 7.77% error rate in predicting the performance of hybrid MPI/OpenMP GTC on up to 512 cores on these multicore supercomputers.
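
    The skeleton of such a model, compute or memory-contention time (whichever dominates) plus a parameterized communication term, can be written down compactly. The following is a simplified sketch of the modeling idea with invented parameters, not the authors' validated model:

```python
def predicted_runtime(flops, bytes_moved, cores_per_node, flops_per_core,
                      node_bandwidth, msg_count, latency, msg_bytes, link_bw):
    """Per-core weak-scaling estimate: compute/memory bound plus communication.

    Memory bandwidth is shared within a node, so the effective per-core
    bandwidth shrinks as cores contend for it - the contention effect.
    """
    t_compute = flops / flops_per_core
    t_memory = bytes_moved / (node_bandwidth / cores_per_node)
    t_comm = msg_count * (latency + msg_bytes / link_bw)
    return max(t_compute, t_memory) + t_comm

# Illustrative per-core workload: 1e10 flops and 4e9 bytes on a 16-core node.
print(predicted_runtime(1e10, 4e9, 16, 4e9, 40e9, 1000, 5e-6, 1e6, 1e9))
```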

  20. Summaries of research and development activities by using supercomputer system of JAEA in FY2011. April 1, 2011 - March 31, 2012

    International Nuclear Information System (INIS)

    2013-01-01

    Japan Atomic Energy Agency (JAEA) conducts research and development (R and D) in various fields related to nuclear power as a comprehensive institution of nuclear energy R and Ds, and utilizes computational science and technology in many activities. As more than 20 percent of papers published by JAEA are concerned with R and D using computational science, the supercomputer system of JAEA has become an important infrastructure to support computational science and technology utilization. In FY2011, the system was used for analyses of the accident at the Fukushima Daiichi Nuclear Power Station and establishment of radioactive decontamination plan, as well as the JAEA's major projects such as Fast Reactor Cycle System, Fusion R and D and Quantum Beam Science. This report presents a great amount of R and D results accomplished by using the system in FY2011, as well as user support structure, operational records and overviews of the system, and so on. (author)

  1. QoE-based transmission strategies for multi-user wireless information and power transfer

    Directory of Open Access Journals (Sweden)

    Taehun Jung

    2015-12-01

    One solution to the problem of supplying energy to wireless networks is wireless power transfer. One such technology, electromagnetic-radiation-enabled wireless power transfer, will change traditional wireless networks. In this paper, we investigate a transmission strategy for multi-user wireless information and power transfer. We consider a multi-user multiple-input multiple-output (MIMO) channel that includes one base station (BS) and two user terminals (UTs) consisting of one energy harvesting (EH) receiver and one information decoding (ID) receiver. Our system provides transmission strategies that can be executed and implemented in practical scenarios. The paper then analyzes the rate-energy (R-E) pairs of our strategies and compares them to those of the theoretical optimal strategy. We furthermore propose a QoE-based mode selection algorithm by mapping the R-E pair to utility functions.
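
    One standard way a rate-energy pair arises is a power-splitting receiver: a fraction rho of the received power is harvested while the remainder is decoded. The sketch below computes R-E pairs over a grid of splits and picks the one maximizing a QoE utility; the channel numbers and the utility weights are invented placeholders, not the paper's model:

```python
import math

def rate_energy(rho, p_rx, noise=1e-9, eta=0.6, bandwidth=1e6):
    """R-E pair for a power-splitting receiver.

    rho: fraction of received power p_rx (watts) routed to energy harvesting;
    the remaining (1 - rho) feeds the information decoder.
    """
    rate = bandwidth * math.log2(1 + (1 - rho) * p_rx / noise)  # bit/s
    energy = eta * rho * p_rx                                   # harvested W
    return rate, energy

def qoe_utility(rate, energy, w_rate=1.0, w_energy=5e7):
    # Hypothetical QoE mapping: log-rate plus a linear value on energy.
    return w_rate * math.log(rate) + w_energy * energy

best = max((qoe_utility(*rate_energy(r / 10, 1e-6)), r / 10)
           for r in range(1, 10))
print(f"best utility {best[0]:.2f} at rho = {best[1]}")
```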

  2. Achievable data rate in spectrum-sharing channels with variable-rate variable-power primary users

    KAUST Repository

    Yang, Yuli; Aïssa, Sonia

    2012-01-01

    In this work, we propose a transmission strategy for secondary users (SUs) within a cognitive radio network where primary users (PUs) exploit variable-rate variable-power modulation. By monitoring the PU's transmissions, the SU adjusts its transmit

  3. Information retrieval system of nuclear power plant database (PPD) user's guide

    International Nuclear Information System (INIS)

    Izumi, Fumio; Horikami, Kunihiko; Kobayashi, Kensuke.

    1990-12-01

    A nuclear power plant database (PPD) and its retrieval system have been developed. The database contains a large number of safety design data for nuclear power plants, operating and planned in Japan. The information stored in the database can be retrieved at high speed, whenever it is needed, by use of the retrieval system. This report is a user's manual for accessing the database through a display unit of the JAERI computer network system. (author)

  4. System and method for controlling power consumption in a computer system based on user satisfaction

    Science.gov (United States)

    Yang, Lei; Dick, Robert P; Chen, Xi; Memik, Gokhan; Dinda, Peter A; Shy, Alex; Ozisikyilmaz, Berkin; Mallik, Arindam; Choudhary, Alok

    2014-04-22

    Systems and methods for controlling power consumption in a computer system. For each of a plurality of interactive applications, the method changes a frequency at which a processor of the computer system runs, receives an indication of user satisfaction, determines a relationship between the changed frequency and the user satisfaction of the interactive application, and stores the determined relationship information. The determined relationship can distinguish between different users and different interactive applications. A frequency may be selected from the discrete frequencies at which the processor of the computer system runs based on the determined relationship information for a particular user and a particular interactive application running on the processor of the computer system. The processor may be adapted to run at the selected frequency.
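
    The selection step described above can be sketched as a lookup over stored satisfaction observations: choose the lowest discrete frequency whose recorded satisfaction for this user and application clears a threshold. All names, values, and the threshold rule below are hypothetical:

```python
# Stored relationship: (user, app) -> {frequency_MHz: observed satisfaction}
relationship = {
    ("alice", "browser"): {800: 0.55, 1600: 0.90, 2400: 0.93},
    ("alice", "video"):   {800: 0.30, 1600: 0.70, 2400: 0.95},
}

def select_frequency(user: str, app: str, threshold: float = 0.85) -> int:
    """Pick the lowest frequency meeting the threshold, to save power."""
    profile = relationship[(user, app)]
    adequate = [f for f, s in sorted(profile.items()) if s >= threshold]
    return adequate[0] if adequate else max(profile)  # fall back to fastest

print(select_frequency("alice", "browser"))  # 1600: slower yet satisfying
print(select_frequency("alice", "video"))    # 2400: video needs full speed
```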

  5. Development and user validation of driving tasks for a power wheelchair simulator.

    Science.gov (United States)

    Archambault, Philippe S; Blackburn, Émilie; Reid, Denise; Routhier, François; Miller, William C

    2017-07-01

    Mobility is important for participation in daily activities, and a power wheelchair (PW) can improve the quality of life of individuals with mobility impairments. A virtual reality simulator may be helpful in complementing PW skills training, which is generally seen as insufficient by both clinicians and PW users. To this end, specific, ecologically valid activities, such as entering an elevator and navigating through a shopping mall crowd, have been added to the McGill wheelchair (miWe) simulator through a user-centred approach. The objective of this study was to validate the choice of simulated activities in a group of newly trained PW users. We recruited 17 new PW users, who practiced with the miWe simulator at home for two weeks. They then related their experience through the Short Feedback Questionnaire, the perceived Ease of Use Questionnaire, and semi-structured interviews. Participants in general greatly appreciated their experience with the simulator. During the interviews, this group made similar comments about the activities to those our previous group of expert PW users had made. They also insisted on the importance of realism in the miWe activities for their use in training. A PW simulator may be helpful if it supports the practice of activities in specific contexts (such as a bathroom or supermarket), to complement the basic skills training received in the clinic (such as driving forward, backward, turning, and avoiding obstacles). Implications for Rehabilitation: New power wheelchair users appreciate practicing on a virtual reality simulator and find the experience useful when the simulated driving activities are realistic and ecologically valid. User-centred development can lead to simulated power wheelchair activities that adequately capture everyday driving challenges experienced in various environmental contexts.

  6. A need for a more user-centered design in body powered prostheses

    NARCIS (Netherlands)

    Hichert, M.; Plettenburg, D.H.; Vardy, A.N.; Will, Wendy; Scheme, Erik

    2014-01-01

    Users of body powered prostheses (BPP) complain about too high operating forces, leading to pain and/or fatigue during or after prosthetic operation. In the worst case nerve and vessel damage can occur [1, 2], leading to nonuse of prostheses. Smit et al. investigated cable forces and displacements

  7. YouPower : An open source platform for community-oriented smart grid user engagement

    NARCIS (Netherlands)

    Huang, Yilin; Hasselqvist, Hanna; Poderi, Giacomo; Scepanovic, S.; Kis, F.; Bogdan, Cristian; Warnier, Martijn; Brazier, F.M.

    2017-01-01

    This paper presents YouPower, an open source platform designed to make people more aware of their energy consumption and encourage sustainable consumption with local communities. The platform is designed iteratively in collaboration with users in the Swedish and Italian test sites of the project

  8. Optimal uplink power control for dual connected users in LTE heterogeneous networks

    DEFF Research Database (Denmark)

    Popovska Avramova, Andrijana; Wang, Hua; Dittmann, Lars

    2016-01-01

    In Dual Connectivity (DC), a User Equipment (UE) can be configured with two radio access nodes in order to aggregate the available resources at both nodes. As with dual connectivity each node has independent radio resource management, the maximum power allocation at the UE can be easily exceede...

  9. BSMBench: a flexible and scalable supercomputer benchmark from computational particle physics

    CERN Document Server

    Bennett, Ed; Del Debbio, Luigi; Jordan, Kirk; Patella, Agostino; Pica, Claudio; Rago, Antonio

    2016-01-01

    Benchmarking plays a central role in the evaluation of High Performance Computing architectures. Several benchmarks have been designed that allow users to stress various components of supercomputers. In order for the figures they provide to be useful, benchmarks need to be representative of the most common real-world scenarios. In this work, we introduce BSMBench, a benchmarking suite derived from Monte Carlo code used in computational particle physics. The advantage of this suite (which can be freely downloaded from http://www.bsmbench.org/) over others is the capacity to vary the relative importance of computation and communication. This enables the tests to simulate various practical situations. To showcase BSMBench, we perform a wide range of tests on various architectures, from desktop computers to state-of-the-art supercomputers, and discuss the corresponding results. Possible future directions of development of the benchmark are also outlined.

  10. Development of a Cloud Resolving Model for Heterogeneous Supercomputers

    Science.gov (United States)

    Sreepathi, S.; Norman, M. R.; Pal, A.; Hannah, W.; Ponder, C.

    2017-12-01

    A cloud resolving climate model is needed to reduce major systematic errors in climate simulations due to structural uncertainty in numerical treatments of convection - such as convective storm systems. This research describes the porting effort to enable the SAM (System for Atmosphere Modeling) cloud resolving model on heterogeneous supercomputers using GPUs (Graphics Processing Units). We have isolated a standalone configuration of SAM that is targeted to be integrated into the DOE ACME (Accelerated Climate Modeling for Energy) Earth System model. We have identified key computational kernels from the model and offloaded them to a GPU using the OpenACC programming model. Furthermore, we are investigating various optimization strategies intended to enhance GPU utilization, including loop fusion/fission, coalesced data access and loop refactoring to a higher abstraction level. We will present early performance results and lessons learned, as well as optimization strategies. The computational platform used in this study is the Summitdev system, an early testbed that is one generation removed from Summit, the next leadership-class supercomputer at Oak Ridge National Laboratory. The system contains 54 nodes wherein each node has 2 IBM POWER8 CPUs and 4 NVIDIA Tesla P100 GPUs. This work is part of a larger project, the ACME-MMF component of the U.S. Department of Energy (DOE) Exascale Computing Project. The ACME-MMF approach addresses structural uncertainty in cloud processes by replacing traditional parameterizations with cloud resolving "superparameterization" within each grid cell of the global climate model. Super-parameterization dramatically increases arithmetic intensity, making the MMF approach an ideal strategy to achieve good performance on emerging exascale computing architectures. The goal of the project is to integrate superparameterization into ACME, and explore its full potential to scientifically and computationally advance climate simulation and prediction.

  11. PNNL supercomputer to become largest computing resource on the Grid

    CERN Multimedia

    2002-01-01

    Hewlett Packard announced that the US DOE Pacific Northwest National Laboratory will connect a 9.3-teraflop HP supercomputer to the DOE Science Grid. This will be the largest supercomputer attached to a computer grid anywhere in the world (1 page).

  12. A new paradigm on battery powered embedded system design based on User-Experience-Oriented method

    International Nuclear Information System (INIS)

    Wang, Zhuoran; Wu, Yue

    2014-01-01

    The battery sustainable time has recently been an active research topic in the development of battery-powered embedded products such as tablets and smart phones; it is determined by the battery capacity and the power consumption. Despite numerous efforts to improve battery capacity in the field of materials engineering, power consumption also plays an important role and is easier to improve in delivering a desirable user experience, especially considering the moderate advancement of batteries over the decades. In this study, a new top-down modelling method, the User-Experience-Oriented Battery Powered Embedded System Design Paradigm, is proposed to estimate the target average power consumption, to guide the hardware and software design, and eventually to approach the theoretical lowest power consumption at which the application can still provide full functionality. Starting from the 10-hour sustainable time standard, the average working current is derived from the battery design capacity and set as a target. An implementation is then illustrated from both the hardware perspective, summarized as Auto-Gating power management, and the software perspective, which introduces a new algorithm, SleepVote, to guide system task design and scheduling.

  13. A new paradigm on battery powered embedded system design based on User-Experience-Oriented method

    Science.gov (United States)

    Wang, Zhuoran; Wu, Yue

    2014-03-01

    The battery sustainable time has recently been an active research topic in the development of battery-powered embedded products such as tablets and smart phones; it is determined by the battery capacity and the power consumption. Despite numerous efforts to improve battery capacity in the field of materials engineering, power consumption also plays an important role and is easier to improve in delivering a desirable user experience, especially considering the moderate advancement of batteries over the decades. In this study, a new top-down modelling method, the User-Experience-Oriented Battery Powered Embedded System Design Paradigm, is proposed to estimate the target average power consumption, to guide the hardware and software design, and eventually to approach the theoretical lowest power consumption at which the application can still provide full functionality. Starting from the 10-hour sustainable time standard, the average working current is derived from the battery design capacity and set as a target. An implementation is then illustrated from both the hardware perspective, summarized as Auto-Gating power management, and the software perspective, which introduces a new algorithm, SleepVote, to guide system task design and scheduling.
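
    The target-setting step is plain arithmetic: the battery design capacity divided by the 10-hour sustainable-time standard gives the average current budget the whole design must respect (numbers illustrative):

```python
def target_average_current_ma(capacity_mah: float, hours: float = 10.0) -> float:
    """Average current budget (mA) implied by a required sustainable time."""
    return capacity_mah / hours

# A tablet with a 6000 mAh battery must average 600 mA or less across
# CPU, display, and radios to meet the 10-hour standard.
print(target_average_current_ma(6000))  # 600.0
```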

  14. Supercomputing - Use Cases, Advances, The Future (2/2)

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    Supercomputing has become a staple of science and the poster child for aggressive developments in silicon technology, energy efficiency and programming. In this series we examine the key components of supercomputing setups and the various advances – recent and past – that made headlines and delivered bigger and bigger machines. We also take a closer look at the future prospects of supercomputing, and the extent of its overlap with high throughput computing, in the context of main use cases ranging from oil exploration to market simulation. On the second day, we will focus on software and software paradigms driving supercomputers, workloads that need supercomputing treatment, advances in technology and possible future developments. Lecturer's short bio: Andrzej Nowak has 10 years of experience in computing technologies, primarily from CERN openlab and Intel. At CERN, he managed a research lab collaborating with Intel and was part of the openlab Chief Technology Office. Andrzej also worked closely and i...

  15. Summaries of research and development activities by using supercomputer system of JAEA in FY2015. April 1, 2015 - March 31, 2016

    International Nuclear Information System (INIS)

    2017-01-01

    Japan Atomic Energy Agency (JAEA) conducts research and development (R and D) in various fields related to nuclear power as a comprehensive institution of nuclear energy R and Ds, and utilizes computational science and technology in many activities. As shown in the fact that about 20 percent of papers published by JAEA are concerned with R and D using computational science, the supercomputer system of JAEA has become an important infrastructure to support computational science and technology. In FY2015, the system was used for R and D aiming to restore Fukushima (nuclear plant decommissioning and environmental restoration) as a priority issue, as well as for JAEA's major projects such as Fast Reactor Cycle System, Fusion R and D and Quantum Beam Science. This report presents a great number of R and D results accomplished by using the system in FY2015, as well as user support, operational records and overviews of the system, and so on. (author)

  16. Summaries of research and development activities by using supercomputer system of JAEA in FY2014. April 1, 2014 - March 31, 2015

    International Nuclear Information System (INIS)

    2016-02-01

    Japan Atomic Energy Agency (JAEA) conducts research and development (R and D) in various fields related to nuclear power as a comprehensive institution of nuclear energy R and Ds, and utilizes computational science and technology in many activities. As shown in the fact that about 20 percent of papers published by JAEA are concerned with R and D using computational science, the supercomputer system of JAEA has become an important infrastructure to support computational science and technology. In FY2014, the system was used for R and D aiming to restore Fukushima (nuclear plant decommissioning and environmental restoration) as a priority issue, as well as for JAEA's major projects such as Fast Reactor Cycle System, Fusion R and D and Quantum Beam Science. This report presents a great number of R and D results accomplished by using the system in FY2014, as well as user support, operational records and overviews of the system, and so on. (author)

  17. Summaries of research and development activities by using supercomputer system of JAEA in FY2013. April 1, 2013 - March 31, 2014

    International Nuclear Information System (INIS)

    2015-02-01

    Japan Atomic Energy Agency (JAEA) conducts research and development (R and D) in various fields related to nuclear power as a comprehensive institution of nuclear energy R and Ds, and utilizes computational science and technology in many activities. As about 20 percent of papers published by JAEA are concerned with R and D using computational science, the supercomputer system of JAEA has become an important infrastructure to support computational science and technology utilization. In FY2013, the system was used not only for JAEA's major projects such as Fast Reactor Cycle System, Fusion R and D and Quantum Beam Science, but also for R and D aiming to restore Fukushima (nuclear plant decommissioning and environmental restoration) as a priority issue. This report presents a great amount of R and D results accomplished by using the system in FY2013, as well as user support, operational records and overviews of the system, and so on. (author)

  18. Summaries of research and development activities by using supercomputer system of JAEA in FY2012. April 1, 2012 - March 31, 2013

    International Nuclear Information System (INIS)

    2014-01-01

    Japan Atomic Energy Agency (JAEA) conducts research and development (R and D) in various fields related to nuclear power as a comprehensive institution of nuclear energy R and Ds, and utilizes computational science and technology in many activities. As more than 20 percent of papers published by JAEA are concerned with R and D using computational science, the supercomputer system of JAEA has become an important infrastructure to support computational science and technology utilization. In FY2012, the system was used not only for JAEA's major projects such as Fast Reactor Cycle System, Fusion R and D and Quantum Beam Science, but also for R and D aiming to restore Fukushima (nuclear plant decommissioning and environmental restoration) as a priority issue. This report presents a great amount of R and D results accomplished by using the system in FY2012, as well as user support, operational records and overviews of the system, and so on. (author)

  19. Psychometric properties of the NOMO 1.0 tested among adult powered-mobility users

    DEFF Research Database (Denmark)

    Sund, Terje; Brandt, Åse; Anttila, Heidi

    2017-01-01

    (Participation Repertoire). PURPOSE: This study aimed to investigate a range of psychometric properties of the NOMO 1.0 in a sample of adult powered mobility device (PMD) users. METHOD: Data collected from PMD users ( N = 248) in Denmark, Finland, and Norway as part of a larger study were analyzed using state...... scale and six components of the Frequency scale. IMPLICATIONS: The NOMO 1.0 should be used for research purposes and not for clinical practice. Better reliability should be established for the Need for Assistance and Ease/Difficulty scales prior to further psychometric testing to establish the validity...

  20. Gingival abrasion and recession in manual and oscillating-rotating power brush users.

    Science.gov (United States)

    Rosema, N A M; Adam, R; Grender, J M; Van der Sluijs, E; Supranoto, S C; Van der Weijden, G A

    2014-11-01

    To assess gingival recession (GR) in manual and power toothbrush users and evaluate the relationship between GR and gingival abrasion scores (GA). This was an observational (cross-sectional), single-centre, examiner-blind study involving a single-brushing exercise, with 181 young adult participants: 90 manual brush users and 91 oscillating-rotating power brush users. Participants were assessed for GR and GA as primary response variables. Secondary response variables were the level of gingival inflammation, plaque score reduction and brushing duration. Pearson correlation was used to describe the relationship between the number of recession sites and the number of abrasions. Prebrushing (baseline) and post-brushing GA and plaque scores were assessed and differences analysed using paired tests. A two-sample t-test was used to analyse group differences; ANCOVA was used for analyses of post-brushing changes with baseline as covariate. Overall, 97.8% of the study population had at least one site of ≥1 mm of gingival recession. For the manual group, this percentage was 98.9%, and for the power group, it was 96.7% (P = 0.621). Post-brushing, the power group showed a significantly smaller GA increase than the manual group (P = 0.004); however, there was no significant correlation between the number of recession sites and the number of abrasions for either group (P ≥ 0.327). Little gingival recession was observed in either toothbrush user group; the observed GR levels were comparable. Lower post-brushing gingival abrasion levels were seen in the power group. There was no correlation between gingival abrasion as a result of brushing and the observed gingival recession following use of either toothbrush.

  1. Multi-petascale highly efficient parallel supercomputer

    Science.gov (United States)

    Asaad, Sameh; Bellofatto, Ralph E.; Blocksome, Michael A.; Blumrich, Matthias A.; Boyle, Peter; Brunheroto, Jose R.; Chen, Dong; Cher, Chen-Yong; Chiu, George L.; Christ, Norman; Coteus, Paul W.; Davis, Kristan D.; Dozsa, Gabor J.; Eichenberger, Alexandre E.; Eisley, Noel A.; Ellavsky, Matthew R.; Evans, Kahn C.; Fleischer, Bruce M.; Fox, Thomas W.; Gara, Alan; Giampapa, Mark E.; Gooding, Thomas M.; Gschwind, Michael K.; Gunnels, John A.; Hall, Shawn A.; Haring, Rudolf A.; Heidelberger, Philip; Inglett, Todd A.; Knudson, Brant L.; Kopcsay, Gerard V.; Kumar, Sameer; Mamidala, Amith R.; Marcella, James A.; Megerian, Mark G.; Miller, Douglas R.; Miller, Samuel J.; Muff, Adam J.; Mundy, Michael B.; O'Brien, John K.; O'Brien, Kathryn M.; Ohmacht, Martin; Parker, Jeffrey J.; Poole, Ruth J.; Ratterman, Joseph D.; Salapura, Valentina; Satterfield, David L.; Senger, Robert M.; Steinmacher-Burow, Burkhard; Stockdell, William M.; Stunkel, Craig B.; Sugavanam, Krishnan; Sugawara, Yutaka; Takken, Todd E.; Trager, Barry M.; Van Oosten, James L.; Wait, Charles D.; Walkup, Robert E.; Watson, Alfred T.; Wisniewski, Robert W.; Wu, Peng

    2018-05-15

    A Multi-Petascale Highly Efficient Parallel Supercomputer of 100 petaflop-scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC). The ASIC nodes are interconnected by a five-dimensional torus network that optimally maximizes the throughput of packet communications between nodes and minimizes latency. The network implements a collective network and a global asynchronous network that provide global barrier and notification functions. Integrated in the node design is a list-based prefetcher. The memory system implements transactional memory, thread-level speculation, and a multiversioning cache that at the same time improves the soft error rate, and supports DMA functionality allowing for parallel message passing.
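
    In a five-dimensional torus each node has ten direct neighbors: one step in the plus and minus direction along each dimension, with wraparound links closing every ring. A small sketch of the neighbor computation (the dimension sizes are invented for illustration, not the machine's actual shape):

```python
DIMS = (4, 4, 4, 4, 2)  # invented 5D torus shape (512 nodes)

def neighbors(coord):
    """The ten torus neighbors of a node: +/-1 along each of 5 dimensions."""
    result = []
    for axis in range(5):
        for step in (-1, 1):
            n = list(coord)
            n[axis] = (n[axis] + step) % DIMS[axis]  # wraparound link
            result.append(tuple(n))
    return result

print(neighbors((0, 0, 0, 0, 0)))
# Wraparound keeps the diameter low: the farthest node in this shape is
# only sum(d // 2 for d in DIMS) = 9 hops away.
```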

  2. Energy Systems Test Area (ESTA) Electrical Power Systems Test Operations: User Test Planning Guide

    Science.gov (United States)

    Salinas, Michael J.

    2012-01-01

    Test process, milestones and inputs are unknowns to first-time users of the ESTA Electrical Power Systems Test Laboratory. The User Test Planning Guide aids in establishing expectations for both NASA and non-NASA facility customers. The potential audience for this guide includes both internal and commercial spaceflight hardware/software developers. It is intended to assist their test engineering personnel in test planning and execution. Material covered includes a roadmap of the test process, roles and responsibilities of facility and user, major milestones, facility capabilities, and inputs required by the facility. Samples of deliverables, test article interfaces, and inputs necessary to define test scope, cost, and schedule are included as an appendix to the guide.

  3. Strategies of social welfare users: a relational reading concerning power relations

    Directory of Open Access Journals (Sweden)

    Silvana Aparecida Mariano

    2013-01-01

    This article analyzes power relations within the operation of social welfare policy, based on a case study conducted in Londrina, Paraná. Our analysis adopts a relational perspective on the configuration of power among the beneficiaries of the policy and the social workers responsible for implementing state actions to fight poverty. What we found, basically, were areas of dissonance between the conceptions and perceptions of users and social workers, such that an unreadable universe is created for users of social welfare, blocking possible ways to consolidate income transfer as a right of citizenship. We focus on actions related to the transfer of income because of its relevance to Brazilian social welfare today.

  4. The power and the pain: Mammographic compression research from the service-users' perspective

    International Nuclear Information System (INIS)

    Robinson, Leslie; Hogg, Peter; Newton-Hughes, Ann

    2013-01-01

    Purpose: To explore the value service-users can add to our understanding of inter-practitioner compression variability in mammography. Imaging of the breast for the screening and detection of breast carcinoma is generally carried out by mammographic examination, the technique for which includes compression of the breast. Evolving research calls into question compression practice in terms of practitioner consistency, thus raising the possibility that strong compression may not be required. We were interested to know whether this was important to service-users and whether such knowledge might influence their behaviour. Methods and sample: A qualitative study involving 3 focus group interviews (n = 4, 6 and 5). Participants were first asked to reflect on their own experiences of breast compression within the context of a breast screening examination, then interpret the results of the evolving research detailed above. We then explored whether these participants might behave differently during future mammography in light of being apprised of these research findings. Results: A grounded approach was used to analyse the data into themes. The two overarching themes were i) Service-User Empowerment, which illustrates the difficulties participants believe women would encounter in exercising power in the breast screening mammographic examination; and ii) Service-User Experience of Mammography, which unearthed unanticipated aspects of the examination, other than compression, that contribute to pain and discomfort and which therefore need investigation. Conclusion: Involving service-users more collaboratively in research can help investigators understand the impact of their work and highlight patient-relevant areas for further investigation.

  5. NASA's Radioisotope Power Systems Program Overview - A Focus on RPS Users

    Science.gov (United States)

    Hamley, John A.; McCallum, Peter W.; Sandifer, Carl E., II; Sutliff, Thomas J.; Zakrajsek, June F.

    2016-01-01

    The goal of NASA's Radioisotope Power Systems (RPS) Program is to make RPS ready and available to support the exploration of the solar system in environments where the use of conventional solar or chemical power generation is impractical or impossible to meet potential future mission needs. To meet this goal, the RPS Program manages investments in RPS technologies and RPS system development, working closely with the Department of Energy. This paper provides an overview of the RPS Program content and status, its collaborations with potential RPS users, and the approach employed to maintain the readiness of RPS to support future NASA mission concepts.

  6. Precoding Design and Power Allocation in Two-User MU-MIMO Wireless Ad Hoc Networks

    Directory of Open Access Journals (Sweden)

    Haole Chen

    2017-10-01

    In this paper, we consider the precoding design and power allocation problem for multi-user multiple-input multiple-output (MU-MIMO) wireless ad hoc networks. In the first timeslot, the source node (SN) transmits energy and information to a relay node (RN) simultaneously within the simultaneous wireless information and power transfer (SWIPT) framework. Then, in the second timeslot, based on the decode-and-forward (DF) protocol, after reassembling the received signal and its own signal, the RN forwards the information to the main user (U1) and simultaneously sends its own information to the secondary user (U2). When the transmission rate of U1 is restricted, the precoding, beamforming, and power splitting (PS) transmission ratio are jointly considered to maximize the transmission rate of U2. To maximize the system rate, we design an optimal beamforming matrix and solve the optimization problem by semi-definite relaxation (SDR). Considering the high complexity of implementing the optimal solution, two sub-optimal precoding schemes are also discussed: singular value decomposition and block diagonalization. Finally, the performance of the optimal and sub-optimal schemes is compared using a simulation.

  7. OpenMP Performance on the Columbia Supercomputer

    Science.gov (United States)

    Haoqiang, Jin; Hood, Robert

    2005-01-01

    This presentation discusses the Columbia supercomputer, one of the world's fastest, providing 61 TFLOPs as of 10/20/04. It was conceived, designed, built, and deployed in just 120 days: a 20-node supercomputer built on proven 512-processor nodes. It is the largest SGI system in the world, with over 10,000 Intel Itanium 2 processors, and provides the largest node size incorporating commodity parts (512 processors) and the largest shared-memory environment (2048 processors); with 88% efficiency it tops the scalar systems on the Top500 list.

  8. HEP Computing Tools, Grid and Supercomputers for Genome Sequencing Studies

    Science.gov (United States)

    De, K.; Klimentov, A.; Maeno, T.; Mashinistov, R.; Novikov, A.; Poyda, A.; Tertychnyy, I.; Wenaus, T.

    2017-10-01

    PanDA - the Production and Distributed Analysis Workload Management System - has been developed to address the data processing and analysis challenges of the ATLAS experiment at the LHC. Recently PanDA has been extended to run HEP scientific applications on Leadership Class Facilities and supercomputers. The success of the projects using PanDA beyond HEP and the Grid has drawn attention from other compute-intensive sciences such as bioinformatics. Recent advances in Next Generation Genome Sequencing (NGS) technology have led to increasing streams of sequencing data that need to be processed, analysed and made available for bioinformaticians worldwide. Analysis of genome sequencing data using the popular software pipeline PALEOMIX can take a month even when run on a powerful computing resource. In this paper we describe the adaptation of the PALEOMIX pipeline to a distributed computing environment powered by PanDA. To run the pipeline we split input files into chunks which are processed separately on different nodes as separate PALEOMIX inputs, and finally merge the output files; this is very similar to what ATLAS does to process and simulate data. We dramatically decreased the total walltime through automated job (re)submission and brokering within PanDA. Using software tools developed initially for HEP and the Grid can reduce the payload execution time for mammoth DNA samples from weeks to days.
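
    The split-run-merge workflow is a generic scatter-gather pattern; the sketch below mimics it locally with a thread pool standing in for PanDA-brokered jobs (input records and chunk size are hypothetical):

```python
from concurrent.futures import ThreadPoolExecutor

def split(records, chunk_size):
    """Partition the input into independent chunks, one per worker job."""
    return [records[i:i + chunk_size]
            for i in range(0, len(records), chunk_size)]

def run_pipeline(chunk):
    # Stand-in for one PALEOMIX-style job running on one compute node.
    return [read.upper() for read in chunk]

reads = [f"acgt{i}" for i in range(10_000)]  # hypothetical input records
with ThreadPoolExecutor(max_workers=8) as pool:
    partials = list(pool.map(run_pipeline, split(reads, 1000)))

merged = [rec for part in partials for rec in part]  # merge step
print(len(merged))  # 10000 records, processed as 10 independent chunks
```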

  9. PLEXFIN a computer model for the economic assessment of nuclear power plant life extension. User's manual

    International Nuclear Information System (INIS)

    2007-01-01

    The IAEA developed PLEXFIN, a computer model analysis tool aimed at assisting decision makers in the assessment of the economic viability of a nuclear power plant life/licence extension. This user's manual was produced to facilitate the application of the PLEXFIN computer model. It is widely accepted in the industry that the operational life of a nuclear power plant is limited not by a pre-determined number of years, sometimes established on non-technical grounds, but by the capability of the plant to comply with the nuclear safety and technical requirements in a cost-effective manner. The decision to extend the licence/life of a nuclear power plant involves a number of political, technical and economic issues. The economic viability is a cornerstone of the decision-making process. In a liberalized electricity market, the economic justification of a nuclear power plant life/licence extension decision requires a more complex evaluation. This user's manual was prepared in the framework of the IAEA's programmes on Continuous process improvement of NPP operating performance, and on Models for analysis and capacity building for sustainable energy development, with the support of four consultants meetings.

  10. Nuclear power plant control room crew task analysis database: SEEK system. Users manual

    International Nuclear Information System (INIS)

    Burgy, D.; Schroeder, L.

    1984-05-01

    The Crew Task Analysis SEEK Users Manual was prepared for the Office of Nuclear Regulatory Research of the US Nuclear Regulatory Commission. It is designed for use with the existing computerized Control Room Crew Task Analysis Database. The SEEK system consists of a PR1ME computer with its associated peripherals and software augmented by General Physics Corporation SEEK database management software. The SEEK software programs provide the Crew Task Database user with rapid access to any number of records desired. The software uses English-like sentences to allow the user to construct logical sorts and outputs of the task data. Given the multiple-associative nature of the database, users can directly access the data at the plant, operating sequence, task or element level - or any combination of these levels. A complete description of the crew task data contained in the database is presented in NUREG/CR-3371, Task Analysis of Nuclear Power Plant Control Room Crews (Volumes 1 and 2)

  11. Argonne Leadership Computing Facility 2011 annual report : Shaping future supercomputing.

    Energy Technology Data Exchange (ETDEWEB)

    Papka, M.; Messina, P.; Coffey, R.; Drugan, C. (LCF)

    2012-08-16

    The ALCF's Early Science Program aims to prepare key applications for the architecture and scale of Mira and to solidify libraries and infrastructure that will pave the way for other future production applications. Two billion core-hours have been allocated to 16 Early Science projects on Mira. The projects, in addition to promising delivery of exciting new science, are all based on state-of-the-art, petascale, parallel applications. The project teams, in collaboration with ALCF staff and IBM, have undertaken intensive efforts to adapt their software to take advantage of Mira's Blue Gene/Q architecture, which, in a number of ways, is a precursor to future high-performance-computing architecture. The Argonne Leadership Computing Facility (ALCF) enables transformative science that solves some of the most difficult challenges in biology, chemistry, energy, climate, materials, physics, and other scientific realms. Users partnering with ALCF staff have reached research milestones previously unattainable, due to the ALCF's world-class supercomputing resources and expertise in computational science. In 2011, the ALCF's commitment to providing outstanding science and leadership-class resources was honored with several prestigious awards. Research on multiscale brain blood flow simulations was named a Gordon Bell Prize finalist. Intrepid, the ALCF's BG/P system, ranked No. 1 on the Graph 500 list for the second consecutive year. The next-generation BG/Q prototype again topped the Green500 list. Skilled experts at the ALCF enable researchers to conduct breakthrough science on the Blue Gene system in key ways. The Catalyst Team matches project PIs with experienced computational scientists to maximize and accelerate research in their specific scientific domains. The Performance Engineering Team facilitates the effective use of applications on the Blue Gene system by assessing and improving the algorithms used by applications and the techniques used to

  12. An end-user perspective on smart home energy systems in the PowerMatching City demonstration project

    NARCIS (Netherlands)

    Geelen, D.V.; Vos-Vlamings, M.; Fillippidou, F.; van den Noort, A.; van Grootel, M.; Moll, H.; Reinders, Angelina H.M.E.; Keyson, D.

    2013-01-01

    In discussions on smart grids, it is often stated that residential end-users will play a more active role in the management of the electric power system. Experience in practice on how to empower end-users for such a role is however limited. This paper presents a field study in the first phase of the

  13. An end-user perspective on smart home energy systems in the PowerMatching City demonstration project

    NARCIS (Netherlands)

    Geelen, Daphne; Vos-Vlamings, Manon; Filippidou, Faidra; van den Noort, Albert; van Grootel, Maike; Moll, Henri C.; Reinders, Angèle; Keyson, David

    In discussions on smart grids, it is often stated that residential end-users will play a more active role in the management of the electric power system. Experience in practice on how to empower end-users for such a role is however limited. This paper presents a field study in the first phase of the

  14. Earth and environmental science in the 1980's: Part 1: Environmental data systems, supercomputer facilities and networks

    Science.gov (United States)

    1986-01-01

    Overview descriptions of on-line environmental data systems, supercomputer facilities, and networks are presented. Each description addresses the concepts of content, capability, and user access relevant to the point of view of potential utilization by the Earth and environmental science community. The information on similar systems or facilities is presented in parallel fashion to encourage and facilitate intercomparison. In addition, summary sheets are given for each description, and a summary table precedes each section.

  15. Perceptions of fall circumstances, injuries and recovery techniques among power wheelchair users: a qualitative study.

    Science.gov (United States)

    Rice, Laura A; Sung, JongHun; Peters, Joseph; Bartlo, Wendy D; Sosnoff, Jacob J

    2018-04-01

    To understand the circumstances surrounding the worst fall experienced by power wheelchair users in the past year and to examine injuries sustained and recovery methods. A qualitative study using a semi-structured interview. Community. A self-selected volunteer sample of 19 power wheelchair users who utilize their device for at least 75% of their mobility. The most common disability represented was cerebral palsy (n = 8). The mean (SD) age of participants was 41.9 (7.6) years; they had lived with their disability for a mean (SD) of 20.5 (8.62) years and used their current device for a mean (SD) of 3.9 (1.9) years. None. A semi-structured interview examined the circumstances surrounding the worst fall experienced in the past year, injuries sustained and recovery techniques used. Upon examination of the circumstances of the worst fall, four main themes emerged: (1) action-related fall contributors, (2) location of falls, (3) fall attributions and (4) time of fall. Each fall described was found to involve multiple factors. As a result of the fall, participants also reported the occurrence of physical injuries and a fear of falling. Physical injuries ranged from skin abrasions and bruises to fractures and head injuries. Participants also reported that fear of falling diminished their desire to participate in activities they enjoyed. Finally, most participants reported needing physical assistance to recover from a fall. Participant descriptions provide an in-depth account of the circumstances and aftermath of falls experienced by power wheelchair users.

  16. Application of Supercomputer Technologies for Simulation Of Socio-Economic Systems

    Directory of Open Access Journals (Sweden)

    Vladimir Valentinovich Okrepilov

    2015-06-01

    Full Text Available To date, extensive experience has been accumulated in the investigation of problems related to quality, the assessment of management systems, and the modeling of economic system sustainability. These studies have laid the foundation for a new research area, the Economics of Quality, whose tools make it possible to use simulation to construct mathematical models that adequately reflect the role of quality in the natural, technical and social regularities governing the functioning of complex socio-economic systems. It is our firm belief that the extensive application and development of such models, together with system modeling using supercomputer technologies, will bring research on socio-economic systems to an essentially new level. Moreover, the current research makes a significant contribution to the simulation of multi-agent social systems and, no less importantly, belongs to the priority areas in the development of science and technology in our country. This article is devoted to the application of supercomputer technologies in the social sciences, first of all to the technical realization of large-scale agent-focused models (AFM). The essence of this tool is that, owing to the growth of computing power, it has become possible to describe the behavior of the many separate fragments of a complex system, such as a socio-economic system. The article also reviews the experience of foreign scientists and practitioners in running AFM on supercomputers, presents an example of an AFM developed at CEMI RAS, and analyzes the stages and methods of efficiently mapping the computational kernel of a multi-agent system onto the architecture of a modern supercomputer. Experiments based on simulation forecasting the population of St. Petersburg under three scenarios, as one of the major factors influencing the development of the socio-economic system and the quality of life of the population, are presented in the

  17. Experiences with local power production at the end-user (case studies)

    International Nuclear Information System (INIS)

    Morch, Andrei Z.; Grinden, Bjoern; Fleten, Stein-Erik; Maribu, Karl Magnus; Johansen, Boerre; Vanebo, Torstein; Berner, Monica; Stang, Jacob; Naesje, Paal

    2004-12-01

    This report describes the results of case studies performed as part of the project 'Local power production at the end-user'. It was found that the construction of a power plant, even a small one, is not an easy task. It is important to exploit the resources optimally and to allow for many income and cost elements in calculating the profitability. It may therefore be worthwhile to engage qualified assistance at an early stage in the planning of such construction projects. It was also found that there are clear scale effects in the development of small power plants; that is, the relative costs (NOK/kWh) are larger for small plants than for bigger ones. This is true of both investment and operation costs, and it will affect the profitability. Of 16 plants, there is enough data for analysing 15, although estimates must be used where real data are missing. For eight of the 15, the net present value is positive at a power price of 25 oere/kWh, a 6% discount rate and a 15-20 year lifetime. For the other plants it will be more economical to wait until the market price of power rises. The price that is necessary to make these plants profitable varies from 32 to 45 oere/kWh. Basically it is assumed that the developer of small power plants (up to 10 MW) wants to produce as much energy (kWh) as possible from the plant, both to cover his own consumption and to sell surplus energy to the power market. However, there is also an economically interesting market for power (kW), in which the producer (over 25 MW) is paid for being ready to start production on short notice

  18. Radiation effects on electronic equipment: a designers'/users' guide for the nuclear power industry

    International Nuclear Information System (INIS)

    Sharp, R.E.; Garlick, D.R.

    1994-01-01

    The Designers'/Users' Guide to the effects of radiation on electronics is published by the Radiation Testing Service of AEA Technology. The aim of the Guide is to document the available information that we have generated and collected over some ten years whilst operating as a radiation effects and design consultancy to the nuclear power industry. We hope that this will enable workers within the industry to better understand the likely effects of radiation on the system or plant being designed and so minimise the problems that can arise. (Author)

  19. User evaluation of photovoltaic-powered vaccine refrigerator/freezer systems

    Science.gov (United States)

    Ratajczak, Anthony F.

    1987-03-01

    The NASA Lewis Research Center has concluded a project to develop and field test photovoltaic-powered refrigerator/freezers for vaccine storage in remote areas of developing countries. As a conclusion to this project, questionnaires were sent to the in-country administrators for each test site, probing user acceptance of the systems and attitudes regarding procurement of additional systems. Responses indicate that the systems had a positive effect on the local communities, that they made a positive impression on the local health authorities, and that system cost and scarcity of funds are the major barriers to procurement of additional systems.

  20. ITS Version 3.0: Powerful, user-friendly software for radiation modelling

    International Nuclear Information System (INIS)

    Kensek, R.P.; Halbleib, J.A.; Valdez, G.D.

    1993-01-01

    ITS (the Integrated Tiger Series) is a powerful, but user-friendly, software package permitting state-of-the-art modelling of electron and/or photon radiation effects. The programs provide Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields. The ITS system combines operational simplicity and physical accuracy in order to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems.

  1. User-friendly Tool for Power Flow Analysis and Distributed Generation Optimisation in Radial Distribution Networks

    Directory of Open Access Journals (Sweden)

    M. F. Akorede

    2017-06-01

    Full Text Available The intent of power distribution companies (DISCOs) is to deliver electric power to their customers in an efficient and reliable manner, with minimal energy-loss cost. One major way to minimise power loss on a given power system is to install distributed generation (DG) units on the distribution networks. However, to maximise benefits, it is highly crucial for a DISCO to ensure that these DG units are of optimal size and sited in the best locations on the network. This paper gives an overview of a software package developed in this study, called Power System Analysis and DG Optimisation Tool (PFADOT). The main purpose of the graphical user interface-based package is to guide a DISCO in finding the optimal size and location for DG placement in radial distribution networks. The package, which is also suitable for load flow analysis, employs the GUI feature of MATLAB. Three objective functions are formulated into a single optimisation problem and solved with a fuzzy genetic algorithm to simultaneously obtain the DG optimal size and location. The accuracy and reliability of the developed tool were validated using several radial test systems, and the results obtained, which are impressive and computationally efficient, were evaluated against a similar existing package cited in the literature.
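
    As a rough illustration of the load-flow component of such a tool, the sketch below implements a backward/forward sweep, a standard method for radial feeders (the record does not state which method PFADOT itself uses); the four-bus feeder data and all names are invented for the example.

        import numpy as np

        # parent[i] is the upstream bus of bus i; bus 0 is the substation (slack)
        parent = [-1, 0, 1, 2]                                       # 4-bus radial feeder
        z = np.array([0, 0.01 + 0.02j, 0.02 + 0.04j, 0.03 + 0.05j])  # branch impedances (p.u.)
        s = np.array([0, 0.10 + 0.05j, 0.15 + 0.07j, 0.10 + 0.04j])  # complex bus loads (p.u.)

        v = np.ones(4, dtype=complex)                 # flat voltage start
        for _ in range(50):
            i_node = np.conj(s / v)                   # load current drawn at each bus
            i_br = i_node.copy()
            for b in range(3, 0, -1):                 # backward sweep: accumulate branch currents
                i_br[parent[b]] += i_br[b]
            v_new = v.copy()
            for b in range(1, 4):                     # forward sweep: update voltages downstream
                v_new[b] = v_new[parent[b]] - z[b] * i_br[b]
            if np.max(np.abs(v_new - v)) < 1e-9:      # converged
                break
            v = v_new

        loss = np.sum(np.abs(i_br[1:]) ** 2 * z[1:].real)
        print("bus voltage magnitudes:", np.round(np.abs(v), 4))
        print("total real power loss (p.u.):", round(float(loss), 6))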

  2. Storage-Intensive Supercomputing Benchmark Study

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, J; Dossa, D; Gokhale, M; Hysom, D; May, J; Pearce, R; Yoo, A

    2007-10-30

    Critical data science applications requiring frequent access to storage perform poorly on today's computing architectures. This project addresses efficient computation of data-intensive problems in national security and basic science by exploring, advancing, and applying a new form of computing called storage-intensive supercomputing (SISC). Our goal is to enable applications that simply cannot run on current systems, and, for a broad range of data-intensive problems, to deliver an order of magnitude improvement in price/performance over today's data-intensive architectures. This technical report documents much of the work done under LDRD 07-ERD-063, Storage Intensive Supercomputing, during the period 05/07-09/07. The following chapters describe: (1) a new file I/O monitoring tool, iotrace, developed to capture the dynamic I/O profiles of Linux processes; (2) an out-of-core graph benchmark for level-set expansion of scale-free graphs; (3) an entity extraction benchmark consisting of a pipeline of eight components; and (4) an image resampling benchmark drawn from the SWarp program in the LSST data processing pipeline. The performance of the graph and entity extraction benchmarks was measured in three different scenarios: data sets residing on the NFS file server and accessed over the network; data sets stored on local disk; and data sets stored on the Fusion I/O parallel NAND Flash array. The image resampling benchmark compared the performance of a software-only implementation with a GPU-accelerated one. In addition to the work reported here, an additional text processing application was developed that used an FPGA to accelerate n-gram profiling for language classification. The n-gram application will be presented at SC07 at the High Performance Reconfigurable Computing Technologies and Applications Workshop. The graph and entity extraction benchmarks were run on a Supermicro server housing the 40 GB Fusion-io parallel NAND Flash disk array. The Fusion system specs are as follows

  3. An Interface for Biomedical Big Data Processing on the Tianhe-2 Supercomputer.

    Science.gov (United States)

    Yang, Xi; Wu, Chengkun; Lu, Kai; Fang, Lin; Zhang, Yong; Li, Shengkang; Guo, Guixin; Du, YunFei

    2017-12-01

    Big data, cloud computing, and high-performance computing (HPC) are on the verge of convergence. Cloud computing is already playing an active part in big data processing with the help of big data frameworks like Hadoop and Spark. The recent upsurge of high-performance computing in China provides extra possibilities and capacity to address the challenges associated with big data. In this paper, we propose Orion, a big data interface on the Tianhe-2 supercomputer, to enable big data applications to run on Tianhe-2 via a single command or a shell script. Orion supports multiple users, and each user can launch multiple tasks. It minimizes the effort needed to initiate big data applications on the Tianhe-2 supercomputer via automated configuration. Orion follows the "allocate-when-needed" paradigm, and it avoids the idle occupation of computational resources. We tested the utility and performance of Orion using a big genomic dataset and achieved a satisfactory performance on Tianhe-2 with very few modifications to existing applications that were implemented in Hadoop/Spark. In summary, Orion provides a practical and economical interface for big data processing on Tianhe-2.
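
    The "allocate-when-needed" idea can be illustrated in a few lines of Python; the acquire/release steps below are pure placeholders (printed strings), since this record does not document Orion's actual interface, and the task name is invented.

        from contextlib import contextmanager

        @contextmanager
        def allocation(nodes):
            # placeholder: a real implementation would request nodes from the
            # batch system here; Orion's actual interface is not documented above
            print(f"acquire {nodes} nodes")
            try:
                yield
            finally:
                print("release nodes")   # freed immediately, avoiding idle occupation

        def run_bigdata_task(cmd):
            print("launch:", cmd)        # stand-in for the single-command launch

        with allocation(nodes=32):
            run_bigdata_task("spark job on genomic dataset")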

  4. Supercomputing - Use Cases, Advances, The Future (1/2)

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    Supercomputing has become a staple of science and the poster child for aggressive developments in silicon technology, energy efficiency and programming. In this series we examine the key components of supercomputing setups and the various advances, recent and past, that made headlines and delivered bigger and bigger machines. We also take a closer look at the future prospects of supercomputing, and the extent of its overlap with high throughput computing, in the context of main use cases ranging from oil exploration to market simulation. On the first day, we will focus on the history and theory of supercomputing, the TOP500 list and the hardware that makes supercomputers tick. Lecturer's short bio: Andrzej Nowak has 10 years of experience in computing technologies, primarily from CERN openlab and Intel. At CERN, he managed a research lab collaborating with Intel and was part of the openlab Chief Technology Office. Andrzej also worked closely and initiated projects with the private sector (e.g. HP an...

  5. Exploiting Thread Parallelism for Ocean Modeling on Cray XC Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Sarje, Abhinav [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Jacobsen, Douglas W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Williams, Samuel W. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Ringler, Todd [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Oliker, Leonid [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2016-05-01

    The incorporation of increasing core counts in modern processors used to build state-of-the-art supercomputers is driving application development towards the exploitation of thread parallelism, in addition to distributed memory parallelism, with the goal of delivering efficient high-performance codes. In this work we describe the exploitation of threading and our experiences with it in the context of a real-world ocean modeling application code, MPAS-Ocean. We present detailed performance analysis and comparisons of various approaches and configurations for threading on the Cray XC series supercomputers.
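
    A minimal picture of thread-level parallelism over grid cells, sketched here with Numba's OpenMP-like prange; the "cell update" kernel is an invented stand-in, not MPAS-Ocean code.

        import numpy as np
        from numba import njit, prange

        @njit(parallel=True)
        def update_cells(h, flux_div, dt):
            # advance a layer-thickness-like field on every cell across threads
            for i in prange(h.shape[0]):
                h[i] -= dt * flux_div[i]

        h = np.ones(1_000_000)
        flux_div = np.random.default_rng(0).normal(size=h.size)
        update_cells(h, flux_div, 0.01)   # first call compiles; later calls run threaded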

  6. Feature determination from powered wheelchair user joystick input characteristics for adapting driving assistance [version 3; referees: 2 approved

    Directory of Open Access Journals (Sweden)

    Michael Gillham

    2018-05-01

    Full Text Available Background: Many powered wheelchair users find that their medical condition, and with it their ability to drive the wheelchair, changes over time. In order to maintain their independent mobility, the powered chair will require adjustment over time to suit the user's needs, so regular input from healthcare professionals is required. These limited resources can mean the user has to wait weeks for an appointment, losing independent mobility in the meantime and consequently affecting their quality of life and that of their family and carers. In order to provide an adaptive assistive driving system, a range of features needs to be identified which are suitable for initial system setup and can automatically provide data for re-calibration over the long term. Methods: A questionnaire was designed to collect information from powered wheelchair users with regard to their symptoms and how they changed over time. Another group of volunteer participants was asked to drive a test platform and complete a course which represented manoeuvring in a very confined space as quickly as possible. Two of those participants were also monitored over a longer period in their normal daily home environment. Features thought to be suitable were examined using pattern recognition classifiers to determine their suitability for identifying the changing user input over time. Results: The results are not designed to provide absolute insight into individual user behaviour, as no ground truth of their ability has been determined; they do nevertheless demonstrate the utility of the measured features to provide evidence of the users' changing ability over time whilst driving a powered wheelchair. Conclusions: Determining the driving features and adjustable elements provides the initial step towards developing an adaptable assistive technology for the user when the ground truths of the individual and their machine have been learned by a smart pattern recognition system.

  7. RAY-UI: A powerful and extensible user interface for RAY

    Energy Technology Data Exchange (ETDEWEB)

    Baumgärtel, P., E-mail: peter.baumgaertel@helmholtz-berlin.de; Erko, A.; Schäfers, F. [Institute for Nanometre Optics and Technology Helmholtz Zentrum Berlin für Materialien und Energie Albert-Einstein-Str. 15, 12489 Berlin (Germany); Witt, M. [Department Operation Accelerator BESSY II Helmholtz Zentrum Berlin für Materialien und Energie Albert-Einstein-Str. 15, 12489 Berlin (Germany); Baensch, J.; Fabarius, M.; Schirmacher, H. [Fachbereich VI – Informatik und Medien Beuth Hochschule für Technik Berlin, Luxemburger Str. 10, 13353 Berlin (Germany)

    2016-07-27

    The RAY-UI project started as a proof-of-concept for an interactive and graphical user interface (UI) for the well-known ray tracing software RAY [1]. In the meantime, it has evolved into a powerful enhanced version of RAY that will serve as the platform for future development and improvement of associated tools. The software as of today supports nearly all sophisticated simulation features of RAY. Furthermore, it delivers very significant usability and work efficiency improvements. Beamline elements can be quickly added or removed in the interactive sequence view. Parameters of any selected element can be accessed directly and in arbitrary order. With a single click, parameter changes can be tested and new simulation results can be obtained. All analysis results can be explored interactively right after ray tracing by means of powerful integrated image viewing and graphing tools. Unlimited image planes can be positioned anywhere in the beamline, and bundles of image planes can be created for moving the plane along the beam to identify the focus position with live updates of the simulated results. In addition to showing the features and workflow of RAY-UI, we will give an overview of the underlying software architecture as well as examples of use and an outlook on future developments.

  8. Convex unwraps its first grown-up supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Manuel, T.

    1988-03-03

    Convex Computer Corp.'s new supercomputer family is even more of an industry blockbuster than its first system. At a tenfold jump in performance, it's far from just an incremental upgrade over its first minisupercomputer, the C-1. The heart of the new family, the new C-2 processor, churning at 50 million floating-point operations/s, spawns a group of systems whose performance could pass for some fancy supercomputers, namely those of the Cray Research Inc. family. When added to the C-1, Convex's five new supercomputers create the C series, a six-member product group offering a performance range from 20 to 200 Mflops. They mark an important transition for Convex from a one-product high-tech startup to a multinational company with a wide-ranging product line. It's a tough transition, but the Richardson, Texas, company seems to be doing it. The extended product line propels Convex into the upper end of the minisupercomputer class and nudges it into the low end of the big supercomputers. It positions Convex in an uncrowded segment of the market, in the $500,000 to $1 million range, offering 50 to 200 Mflops of performance. The company is making this move because the minisuper area, which it pioneered, quickly became crowded with new vendors, causing prices and gross margins to drop drastically.

  9. QCD on the BlueGene/L Supercomputer

    International Nuclear Information System (INIS)

    Bhanot, G.; Chen, D.; Gara, A.; Sexton, J.; Vranas, P.

    2005-01-01

    In June 2004 QCD was simulated for the first time at a sustained speed exceeding 1 TeraFlops on the BlueGene/L supercomputer at the IBM T.J. Watson Research Lab. The implementation and performance of QCD on the BlueGene/L are presented.

  10. QCD on the BlueGene/L Supercomputer

    Science.gov (United States)

    Bhanot, G.; Chen, D.; Gara, A.; Sexton, J.; Vranas, P.

    2005-03-01

    In June 2004 QCD was simulated for the first time at a sustained speed exceeding 1 TeraFlops on the BlueGene/L supercomputer at the IBM T.J. Watson Research Lab. The implementation and performance of QCD on the BlueGene/L are presented.

  11. Supercomputers and the future of computational atomic scattering physics

    International Nuclear Information System (INIS)

    Younger, S.M.

    1989-01-01

    The advent of the supercomputer has opened new vistas for the computational atomic physicist. Problems of hitherto unparalleled complexity are now being examined using these new machines, and important connections with other fields of physics are being established. This talk briefly reviews some of the most important trends in computational scattering physics and suggests some exciting possibilities for the future. 7 refs., 2 figs

  12. Mathematical methods and supercomputing in nuclear applications. Proceedings. Vol. 2

    International Nuclear Information System (INIS)

    Kuesters, H.; Stein, E.; Werner, W.

    1993-04-01

    All papers of the two volumes are separately indexed in the database. Main topics are: progress in advanced numerical techniques, fluid mechanics, on-line systems, artificial intelligence applications, nodal methods, reactor kinetics, reactor design, supercomputer architecture, probabilistic estimation of risk assessment, methods in transport theory, advances in Monte Carlo techniques, and man-machine interface. (orig.)

  13. Mathematical methods and supercomputing in nuclear applications. Proceedings. Vol. 1

    International Nuclear Information System (INIS)

    Kuesters, H.; Stein, E.; Werner, W.

    1993-04-01

    All papers of the two volumes are separately indexed in the database. Main topics are: progress in advanced numerical techniques, fluid mechanics, on-line systems, artificial intelligence applications, nodal methods, reactor kinetics, reactor design, supercomputer architecture, probabilistic estimation of risk assessment, methods in transport theory, advances in Monte Carlo techniques, and man-machine interface. (orig.)

  14. Role of supercomputers in magnetic fusion and energy research programs

    International Nuclear Information System (INIS)

    Killeen, J.

    1985-06-01

    The importance of computer modeling in magnetic fusion (MFE) and energy research (ER) programs is discussed. The need for the most advanced supercomputers is described, and the role of the National Magnetic Fusion Energy Computer Center in meeting these needs is explained

  15. Communication Characterization and Optimization of Applications Using Topology-Aware Task Mapping on Large Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Sreepathi, Sarat [ORNL; D' Azevedo, Eduardo [ORNL; Philip, Bobby [ORNL; Worley, Patrick H [ORNL

    2016-01-01

    On large supercomputers, the job scheduling systems may assign a non-contiguous node allocation for user applications depending on available resources. With parallel applications using MPI (Message Passing Interface), the default process ordering does not take into account the actual physical node layout available to the application. This contributes to non-locality in terms of physical network topology and impacts communication performance of the application. In order to mitigate such performance penalties, this work describes techniques to identify a suitable task mapping that takes the layout of the allocated nodes as well as the application's communication behavior into account. During the first phase of this research, we instrumented and collected performance data to characterize the communication behavior of critical US DOE (United States Department of Energy) applications using an augmented version of the mpiP tool. Subsequently, we developed several reordering methods (spectral bisection, neighbor-join tree, etc.) to combine node layout and application communication data for optimized task placement. We developed a tool called mpiAproxy to facilitate detailed evaluation of the various reordering algorithms without requiring full application executions. This work presents a comprehensive performance evaluation (14,000 experiments) of the various task mapping techniques in lowering communication costs on Titan, the leadership-class supercomputer at Oak Ridge National Laboratory.
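
    One of the reordering methods named above, spectral bisection, can be sketched as follows: ranks are ordered along the Fiedler vector of the measured communication graph so that heavily communicating ranks land on nearby nodes. The random traffic matrix below is a placeholder for mpiP-collected data, and the whole snippet is an illustration rather than the paper's implementation.

        import numpy as np

        rng = np.random.default_rng(1)
        n = 16                                        # number of MPI ranks
        W = rng.random((n, n)); W = (W + W.T) / 2     # symmetric communication volumes
        np.fill_diagonal(W, 0.0)

        L = np.diag(W.sum(axis=1)) - W                # graph Laplacian of the comm graph
        vals, vecs = np.linalg.eigh(L)
        fiedler = vecs[:, 1]                          # eigenvector of 2nd-smallest eigenvalue
        new_order = np.argsort(fiedler)               # ranks sorted along the Fiedler vector
        print("rank placement order:", new_order)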

  16. Computational Science with the Titan Supercomputer: Early Outcomes and Lessons Learned

    Science.gov (United States)

    Wells, Jack

    2014-03-01

    Modeling and simulation with petascale computing has supercharged the process of innovation and understanding, dramatically accelerating time-to-insight and time-to-discovery. This presentation will focus on early outcomes from the Titan supercomputer at the Oak Ridge National Laboratory. Titan has over 18,000 hybrid compute nodes consisting of both CPUs and GPUs. In this presentation, I will discuss the lessons we have learned in deploying Titan and preparing applications to move from conventional CPU architectures to a hybrid machine. I will present early results of materials applications running on Titan and the implications for the research community as we prepare for exascale supercomputers in the next decade. Lastly, I will provide an overview of user programs at the Oak Ridge Leadership Computing Facility, with specific information on how researchers may apply for allocations of computing resources. This research used resources of the Oak Ridge Leadership Computing Facility at the Oak Ridge National Laboratory, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC05-00OR22725.

  17. The potential impact of intelligent power wheelchair use on social participation: perspectives of users, caregivers and clinicians.

    Science.gov (United States)

    Rushton, Paula W; Kairy, Dahlia; Archambault, Philippe; Pituch, Evelina; Torkia, Caryne; El Fathi, Anas; Stone, Paula; Routhier, François; Forget, Robert; Pineau, Joelle; Gourdeau, Richard; Demers, Louise

    2015-05-01

    To explore power wheelchair users', caregivers' and clinicians' perspectives regarding the potential impact of intelligent power wheelchair use on social participation. Semi-structured interviews were conducted with power wheelchair users (n = 12), caregivers (n = 4) and clinicians (n = 12). An illustrative video was used to facilitate discussion. The transcribed interviews were analyzed using thematic analysis. Three main themes were identified based on the experiences of the power wheelchair users, caregivers and clinicians: (1) increased social participation opportunities, (2) changing how social participation is experienced and (3) decreased risk of accidents during social participation. Findings from this study suggest that an intelligent power wheelchair would enhance social participation in a variety of important ways, thereby providing support for continued design and development of this assistive technology. An intelligent power wheelchair has the potential to: Increase social participation opportunities by overcoming challenges associated with navigating through crowds and small spaces. Change how social participation is experienced through "normalizing" social interactions and decreasing the effort required to drive a power wheelchair. Decrease the risk of accidents during social participation by reducing the need for dangerous compensatory strategies and minimizing the impact of the physical environment.

  18. Integration of Panda Workload Management System with supercomputers

    Science.gov (United States)

    De, K.; Jha, S.; Klimentov, A.; Maeno, T.; Mashinistov, R.; Nilsson, P.; Novikov, A.; Oleynik, D.; Panitkin, S.; Poyda, A.; Read, K. F.; Ryabinkin, E.; Teslyuk, A.; Velikhov, V.; Wells, J. C.; Wenaus, T.

    2016-09-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Distributed Analysis) Workload Management System for managing the workflow for all data processing at over 140 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3+ petaFLOPS, the next LHC data taking runs will require more resources than Grid computing can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with supercomputers in the United States, Europe and Russia (in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF), the supercomputer at the National Research Center "Kurchatov Institute", IT4 in Ostrava, and others). The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on Titan's multi-core worker nodes. This implementation was tested with a variety of Monte-Carlo workloads

  19. A MATLAB Graphical User Interface Dedicated to the Optimal Design of the High Power Induction Motor with Heavy Starting Conditions

    Directory of Open Access Journals (Sweden)

    Maria Brojboiu

    2014-09-01

    Full Text Available In this paper, a Matlab graphical user interface dedicated to the optimal design of high power induction motors with heavy starting conditions is presented. This graphical user interface allows the user to input the rated parameters and to select the induction motor type as well as the optimization criterion for the motor design. For the squirrel cage induction motor, the graphical user interface allows the selection of the rotor bar geometry and material, as well as the technology for fastening the shorting ring to the rotor bars. The Matlab graphical user interface was developed and applied to the general optimal design program for the induction motor described in [1], [2].

  20. A supercomputer for parallel data analysis

    International Nuclear Information System (INIS)

    Kolpakov, I.F.; Senner, A.E.; Smirnov, V.A.

    1987-01-01

    The project of a powerful multiprocessor system is proposed. The main purpose of the project is to develop a low-cost computer system with a processing rate of a few tens of millions of operations per second. The system solves many problems of data analysis from high-energy physics spectrometers. It includes about 70 powerful slave microprocessor boards based on the MOTOROLA 68020, linked through VME crates to a host VAX microcomputer. Each microprocessor board runs the same algorithm requiring large computing time. The host computer distributes data over the microprocessor boards, then collects and combines the obtained results. The architecture of the system easily allows one to use it in real-time mode.
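
    The host/slave scheme described here is essentially scatter-gather; a toy Python version, using multiprocessing in place of the VAX host and the 68020 boards, with an invented per-block analysis function:

        from multiprocessing import Pool

        def analyze(event_block):
            # stand-in for the per-board analysis algorithm
            return sum(event_block)

        if __name__ == "__main__":
            blocks = [list(range(i, i + 100)) for i in range(0, 1000, 100)]
            with Pool(processes=4) as pool:
                partial = pool.map(analyze, blocks)   # host distributes blocks to workers
            print("combined result:", sum(partial))   # host combines the results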

  1. Extent to Which Caregivers Enhance the Wheelchair Skills Capacity and Confidence of Power Wheelchair Users: A Cross-Sectional Study.

    Science.gov (United States)

    Kirby, R Lee; Rushton, Paula W; Routhier, Francois; Demers, Louise; Titus, Laura; Miller-Polgar, Jan; Smith, Cher; McAllister, Mike; Theriault, Chris; Matheson, Kara; Parker, Kim; Sawatzky, Bonita; Labbé, Delphine; Miller, William C

    2018-01-03

    To test the hypothesis that caregivers enhance the wheelchair skills capacity and confidence of the power wheelchair users to whom they provide assistance, and to describe the nature of that assistance. Multicenter cross-sectional study. Rehabilitation centers and communities. Participants (N=152) included caregivers (n=76) and wheelchair users (n=76). None. Version 4.3 of the Wheelchair Skills Test (WST) and the Wheelchair Skills Test-Questionnaire (WST-Q). For each of the 30 individual skills, we recorded data about the wheelchair user alone and in combination (blended) with the caregiver. The mean total WST capacity scores ± SD for the wheelchair users alone and blended were 78.1%±9.3% and 92.4%±6.1%, respectively, a significant mean difference of 14.3%±8.7%. Caregivers enhance the wheelchair skills capacity and confidence of the power wheelchair users to whom they provide assistance, and they do so in a variety of ways. These findings have significance for wheelchair skills assessment and training. Copyright © 2017 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.

  2. Achievable data rate in spectrum-sharing channels with variable-rate variable-power primary users

    KAUST Repository

    Yang, Yuli

    2012-08-01

    In this work, we propose a transmission strategy for secondary users (SUs) within a cognitive radio network where primary users (PUs) exploit variable-rate variable-power modulation. By monitoring the PU's transmissions, the SU adjusts its transmit power based on the gap between the PU's received effective signal-to-noise power ratio (SNR) and the lower SNR boundary of the modulation mode being used in the primary link. Thus, in the SU's presence, the PU's quality of service (QoS) is guaranteed without increasing its processing complexity, since no interference cancellation is required in the PU's operation. To demonstrate the advantage of our proposed transmission strategy, we analyze the secondary user's achievable data rate, taking into account different transmission capabilities for the secondary transmitter. The corresponding numerical results not only prove the validity of our derivations but also provide a convenient tool for network design with the proposed transmission strategy. © 2012 IEEE.
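
    The power-adaptation rule described above can be written out explicitly. In the hedged notation below (the symbols are ours, not the paper's), P_p and P_s are the PU and SU transmit powers, g_pp and g_sp are the channel gains from the primary and secondary transmitters to the primary receiver, N_0 is the noise power, and gamma_k is the lower SNR boundary of the PU's current modulation mode; the SU caps its power so the PU's effective SNR stays above that boundary:

        \frac{P_p\, g_{pp}}{N_0 + P_s\, g_{sp}} \;\ge\; \gamma_k
        \quad\Longrightarrow\quad
        P_s \;\le\; \frac{1}{g_{sp}}\left(\frac{P_p\, g_{pp}}{\gamma_k} - N_0\right)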

  3. Feynman diagrams sampling for quantum field theories on the QPACE 2 supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Rappl, Florian

    2016-08-01

    This work discusses the application of Feynman diagram sampling in quantum field theories. The method uses a computer simulation to sample the diagrammatic space obtained in a series expansion. Running large physical simulations requires powerful computers, which effectively splits the thesis into two parts. The first part deals with the method of Feynman diagram sampling. Here the theoretical background of the method itself is discussed. Additionally, important statistical concepts and the theory of the strong force, quantum chromodynamics, are introduced. This sets the context of the simulations. We create and evaluate a variety of models to estimate the applicability of diagrammatic methods. The method is then applied to sample the perturbative expansion of the vertex correction. In the end we obtain a value for the anomalous magnetic moment of the electron. The second part looks at the QPACE 2 supercomputer. This includes a short introduction to supercomputers in general, as well as a closer look at the architecture and the cooling system of QPACE 2. Benchmarks of the InfiniBand network are presented. At the core of this part, a collection of best practices and useful programming concepts is outlined, which enables the development of efficient, yet easily portable, applications for the QPACE 2 system.

  4. Use of high performance networks and supercomputers for real-time flight simulation

    Science.gov (United States)

    Cleveland, Jeff I., II

    1993-01-01

    In order to meet the stringent time-critical requirements for real-time man-in-the-loop flight simulation, computer processing operations must be consistent in processing time and be completed in as short a time as possible. These operations include simulation mathematical model computation and data input/output to the simulators. In 1986, in response to increased demands for flight simulation performance, NASA's Langley Research Center (LaRC), working with the contractor, developed extensions to the Computer Automated Measurement and Control (CAMAC) technology which resulted in a factor of ten increase in the effective bandwidth and reduced latency of modules necessary for simulator communication. This technology extension is being used by more than 80 leading technological developers in the United States, Canada, and Europe. Included among the commercial applications are nuclear process control, power grid analysis, process monitoring, real-time simulation, and radar data acquisition. Personnel at LaRC are completing the development of the use of supercomputers for mathematical model computation to support real-time flight simulation. This includes the development of a real-time operating system and development of specialized software and hardware for the simulator network. This paper describes the data acquisition technology and the development of supercomputing for flight simulation.

  5. Parallel Multivariate Spatio-Temporal Clustering of Large Ecological Datasets on Hybrid Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Sreepathi, Sarat [ORNL; Kumar, Jitendra [ORNL; Mills, Richard T. [Argonne National Laboratory; Hoffman, Forrest M. [ORNL; Sripathi, Vamsi [Intel Corporation; Hargrove, William Walter [United States Department of Agriculture (USDA), United States Forest Service (USFS)

    2017-09-01

    A proliferation of data from vast networks of remote sensing platforms (satellites, unmanned aircraft systems (UAS), airborne etc.), observational facilities (meteorological, eddy covariance etc.), state-of-the-art sensors, and simulation models offers unprecedented opportunities for scientific discovery. Unsupervised classification is a widely applied data mining approach to derive insights from such data. However, classification of very large data sets is a complex computational problem that requires efficient numerical algorithms and implementations on high performance computing (HPC) platforms. Additionally, increasing power, space, cooling and efficiency requirements have led to the deployment of hybrid supercomputing platforms with complex architectures and memory hierarchies like the Titan system at Oak Ridge National Laboratory. The advent of such accelerated computing architectures offers new challenges and opportunities for big data analytics in general and specifically, in our case, for large scale cluster analysis. Although there is an existing body of work on parallel cluster analysis, those approaches do not fully meet the needs imposed by the nature and size of our large data sets. Moreover, they have scaling limitations and are mostly limited to traditional distributed memory computing platforms. We present a parallel Multivariate Spatio-Temporal Clustering (MSTC) technique based on k-means cluster analysis that can target hybrid supercomputers like Titan. We developed a hybrid MPI, CUDA and OpenACC implementation that can utilize both CPU and GPU resources on computational nodes. We describe performance results on Titan that demonstrate the scalability and efficacy of our approach in processing large ecological data sets.
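
    Stripped of the MPI/CUDA/OpenACC layers the paper adds, the k-means core of MSTC reduces to the familiar assign-and-update loop; a serial numpy stand-in on synthetic feature vectors (the data and cluster count are invented for illustration):

        import numpy as np

        def kmeans(X, k, iters=50, seed=0):
            rng = np.random.default_rng(seed)
            centers = X[rng.choice(len(X), k, replace=False)]
            for _ in range(iters):
                # assign each observation (e.g. a grid cell's multivariate
                # spatio-temporal feature vector) to its nearest center
                d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
                labels = d.argmin(axis=1)
                for j in range(k):                      # recompute cluster centers
                    if np.any(labels == j):
                        centers[j] = X[labels == j].mean(axis=0)
            return labels, centers

        X = np.random.default_rng(2).normal(size=(10000, 8))  # synthetic features
        labels, centers = kmeans(X, k=5)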

  6. Integration of Titan supercomputer at OLCF with ATLAS production system

    CERN Document Server

    Panitkin, Sergey; The ATLAS collaboration

    2016-01-01

    The PanDA (Production and Distributed Analysis) workload management system was developed to meet the scale and complexity of distributed computing for the ATLAS experiment. PanDA-managed resources are distributed worldwide, on hundreds of computing sites, with thousands of physicists accessing hundreds of petabytes of data, and the rate of data processing already exceeds an exabyte per year. While PanDA currently uses more than 200,000 cores at well over 100 Grid sites, future LHC data taking runs will require more resources than Grid computing can possibly provide. Additional computing and storage resources are required. Therefore ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. In this talk we will describe a project aimed at integration of the ATLAS Production System with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA Pilot framework for job...

  7. Integration of Titan supercomputer at OLCF with ATLAS Production System

    CERN Document Server

    AUTHOR|(SzGeCERN)643806; The ATLAS collaboration; De, Kaushik; Klimentov, Alexei; Nilsson, Paul; Oleynik, Danila; Padolski, Siarhei; Panitkin, Sergey; Wenaus, Torre

    2017-01-01

    The PanDA (Production and Distributed Analysis) workload management system was developed to meet the scale and complexity of distributed computing for the ATLAS experiment. PanDA-managed resources are distributed worldwide, on hundreds of computing sites, with thousands of physicists accessing hundreds of petabytes of data, and the rate of data processing already exceeds an exabyte per year. While PanDA currently uses more than 200,000 cores at well over 100 Grid sites, future LHC data taking runs will require more resources than Grid computing can possibly provide. Additional computing and storage resources are required. Therefore ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. In this paper we will describe a project aimed at integration of the ATLAS Production System with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA Pilot framework for jo...

  8. Extending ATLAS Computing to Commercial Clouds and Supercomputers

    CERN Document Server

    Nilsson, P; The ATLAS collaboration; Filipcic, A; Klimentov, A; Maeno, T; Oleynik, D; Panitkin, S; Wenaus, T; Wu, W

    2014-01-01

    The Large Hadron Collider will resume data collection in 2015 with substantially increased computing requirements relative to its first 2009-2013 run. A near doubling of the energy and the data rate, a high level of event pile-up, and detector upgrades will mean the number and complexity of events to be analyzed will increase dramatically. A naive extrapolation of the Run 1 experience would suggest that a 5-6 fold increase in computing resources is needed - impossible within the anticipated flat computing budgets in the near future. Consequently ATLAS is engaged in an ambitious program to expand its computing to all available resources, notably including opportunistic use of commercial clouds and supercomputers. Such resources present new challenges in managing heterogeneity, supporting data flows, parallelizing workflows, provisioning software, and other aspects of distributed computing, all while minimizing operational load. We will present the ATLAS experience to date with clouds and supercomputers, and des...

  9. Tryton Supercomputer Capabilities for Analysis of Massive Data Streams

    Directory of Open Access Journals (Sweden)

    Krawczyk Henryk

    2015-09-01

    Full Text Available The recently deployed supercomputer Tryton, located in the Academic Computer Center of Gdansk University of Technology, provides great means for massively parallel processing. Moreover, the status of the Center as one of the main network nodes in the PIONIER network enables the fast and reliable transfer of data produced by miscellaneous devices scattered across the whole country. Typical examples of such data are streams containing radio-telescope and satellite observations. Their analysis, especially under real-time constraints, can be challenging and requires dedicated software components. We propose a solution for such parallel analysis using the supercomputer, supervised by the KASKADA platform, which, in conjunction with immersive 3D visualization techniques, can be used to solve problems such as pulsar detection and chronometry or oil-spill simulation on the sea surface.

  10. Use of a graphical user interface approach for digital and physical simulation in power systems control education

    International Nuclear Information System (INIS)

    Shoults, R.R.; Barrera-Cardiel, E.

    1992-01-01

    This paper presents the design of a laboratory with software and hardware structures for digital and physical simulation in the area of Power Systems Control Education. The hardware structure includes a special man-machine interface designed with a graphical user interface approach. This interface allows the user full control over the simulation and provides facilities for the study of the response of the simulated system. This approach is illustrated with the design of a control system for a physically based HVDC transmission system model

  11. Visualizing quantum scattering on the CM-2 supercomputer

    International Nuclear Information System (INIS)

    Richardson, J.L.

    1991-01-01

    We implement parallel algorithms for solving the time-dependent Schroedinger equation on the CM-2 supercomputer. These methods are unconditionally stable as well as unitary at each time step and have the advantage of being spatially local and explicit. We show how to visualize the dynamics of quantum scattering using techniques for visualizing complex wave functions. Several scattering problems are solved to demonstrate the use of these methods. (orig.)

  12. Development of seismic tomography software for hybrid supercomputers

    Science.gov (United States)

    Nikitin, Alexandr; Serdyukov, Alexandr; Duchkov, Anton

    2015-04-01

    Seismic tomography is a technique for computing the velocity model of a geologic structure from the first-arrival travel times of seismic waves. The technique is used in the processing of regional and global seismic data, in seismic exploration for prospecting of mineral and hydrocarbon deposits, and in seismic engineering for monitoring the condition of engineering structures and the surrounding host medium. As a consequence of the development of seismic monitoring systems and the increasing volume of seismic data, there is a growing need for new, more effective computational algorithms for seismic tomography applications, with improved performance, accuracy and resolution. To achieve this goal, it is necessary to use modern high performance computing systems, such as supercomputers with hybrid architecture that use not only CPUs, but also accelerators and co-processors for computation. The goal of this research is the development of parallel seismic tomography algorithms and a software package for such systems, to be used in processing large volumes of seismic data (hundreds of gigabytes and more). These algorithms and the software package will be optimized for the most common computing devices used in modern hybrid supercomputers, such as Intel Xeon CPUs, NVIDIA Tesla accelerators and Intel Xeon Phi co-processors. In this work, the following general scheme of seismic tomography is utilized. Using an eikonal equation solver, arrival times of seismic waves are computed based on an assumed velocity model of the geologic structure being analyzed. In order to solve the linearized inverse problem, a tomographic matrix is computed that connects model adjustments with travel time residuals, and the resulting system of linear equations is regularized and solved to adjust the model. The effectiveness of parallel implementations of existing algorithms on target architectures is considered. During the first stage of this work, algorithms were developed for execution on
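
    The linearized inversion step outlined above (a tomographic matrix relating model adjustments to travel-time residuals) is commonly solved with damped least squares; a sketch with random placeholder data standing in for real ray-path kernels and residuals:

        import numpy as np
        from scipy.sparse import random as sprandom
        from scipy.sparse.linalg import lsqr

        n_rays, n_cells = 500, 200
        G = sprandom(n_rays, n_cells, density=0.05, random_state=3)  # ray lengths per cell
        d = np.random.default_rng(3).normal(size=n_rays)             # travel-time residuals

        m = lsqr(G, d, damp=0.1)[0]   # Tikhonov-damped slowness update
        print("update norm:", np.linalg.norm(m))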

  13. High Performance Networks From Supercomputing to Cloud Computing

    CERN Document Server

    Abts, Dennis

    2011-01-01

    Datacenter networks provide the communication substrate for large parallel computer systems that form the ecosystem for high performance computing (HPC) systems and modern Internet applications. The design of new datacenter networks is motivated by an array of applications ranging from communication intensive climatology, complex material simulations and molecular dynamics to such Internet applications as Web search, language translation, collaborative Internet applications, streaming video and voice-over-IP. For both Supercomputing and Cloud Computing the network enables distributed applicati

  14. Intelligent Personal Supercomputer for Solving Scientific and Technical Problems

    Directory of Open Access Journals (Sweden)

    Khimich, O.M.

    2016-09-01

    Full Text Available A new domestic intelligent personal supercomputer of hybrid architecture, Inparkom_pg, was developed for the mathematical modeling of processes in the defense industry, engineering, construction, etc. Intelligent software was designed for the automatic investigation of problems of computational mathematics with approximate data of different structures. Applied software was implemented to support mathematical modeling problems in construction, welding and filtration processes.

  15. Cellular-automata supercomputers for fluid-dynamics modeling

    International Nuclear Information System (INIS)

    Margolus, N.; Toffoli, T.; Vichniac, G.

    1986-01-01

    We report recent developments in the modeling of fluid dynamics, and give experimental results (including dynamical exponents) obtained using cellular automata machines. Because of their locality and uniformity, cellular automata lend themselves to an extremely efficient physical realization; with a suitable architecture, an amount of hardware resources comparable to that of a home computer can achieve (in the simulation of cellular automata) the performance of a conventional supercomputer

  16. Extracting the Textual and Temporal Structure of Supercomputing Logs

    Energy Technology Data Exchange (ETDEWEB)

    Jain, S; Singh, I; Chandra, A; Zhang, Z; Bronevetsky, G

    2009-05-26

    Supercomputers are prone to frequent faults that adversely affect their performance, reliability and functionality. System logs collected on these systems are a valuable source of information about their operational status and health. However, their massive size, complexity, and lack of standard format make it difficult to automatically extract information that can be used to improve system management. In this work we propose a novel method to succinctly represent the contents of supercomputing logs, by using textual clustering to automatically find the syntactic structures of log messages. This information is used to automatically classify messages into semantic groups via an online clustering algorithm. Further, we describe a methodology for using the temporal proximity between groups of log messages to identify correlated events in the system. We apply our proposed methods to two large, publicly available supercomputing logs and show that our technique features nearly perfect accuracy for online log classification and extracts meaningful structural and temporal message patterns that can be used to improve the accuracy of other log analysis techniques.
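
    A toy version of the online clustering idea: messages whose token sequences are sufficiently similar join an existing group, whose template is wildcarded at the differing positions; otherwise they seed a new group. The similarity measure, threshold and sample messages are invented for illustration and are not the paper's algorithm.

        def similarity(a, b):
            # fraction of matching tokens; different lengths never match
            if len(a) != len(b):
                return 0.0
            return sum(x == y for x, y in zip(a, b)) / len(a)

        groups = []   # each group: a representative token template

        def classify(message, threshold=0.6):
            tokens = message.split()
            for gid, rep in enumerate(groups):
                if similarity(tokens, rep) >= threshold:
                    # mask differing tokens as wildcards in the template
                    groups[gid] = [x if x == y else "*" for x, y in zip(rep, tokens)]
                    return gid
            groups.append(tokens)
            return len(groups) - 1

        for line in ["node 12 fan failure", "node 7 fan failure", "job 991 started"]:
            print(classify(line), line)   # first two share a group, third is new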

  17. New catenated OFDM modulation scheme in zero cross correlation OCDMA at various number of user and effective power

    Directory of Open Access Journals (Sweden)

    Nawawi N. M.

    2017-01-01

    Full Text Available This paper proposes the integration of optical Code Division Multiple Access (OCDMA) with a new catenated Orthogonal Frequency Division Multiplexing (OFDM) modulation scheme. This effective combination, based on a Zero Cross Correlation (ZCC) code, can enhance the system capacity and increase spectral efficiency by fully utilizing the available electrical bandwidth. We investigate the performance of the proposed system for various numbers of users, code weights and effective powers. The performance assessment is carried out by means of the signal-to-noise ratio (SNR) and bit error rate (BER) for up to five catenated OFDM bands transmitted simultaneously over an optical link at 622 Mbps. More specifically, mathematical expressions for the SNR and BER performance are derived. The corresponding numerical results are presented and compared with a traditional OCDMA-ZCC system to verify the feasibility of the proposed system. The results show that OCDMA/catenated-OFDM based on a ZCC code provides 86% more permissible users at an SNR of 15 dB. In addition, the integration provides higher receiver sensitivity, approximately -22.5 dBm for 20 users with a code weight of 8. It is also found that, to accommodate more users, the system requires higher effective power at the receiver.
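
    For orientation, the SNR-to-BER mapping in such analyses is typically the Gaussian approximation shown below; the record's own derived expressions are not reproduced here, so this is only the commonly used form from the OCDMA literature:

        \mathrm{BER} \;\approx\; \tfrac{1}{2}\,\operatorname{erfc}\!\left(\sqrt{\mathrm{SNR}/8}\right)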

  18. User intent prediction with a scaled conjugate gradient trained artificial neural network for lower limb amputees using a powered prosthesis.

    Science.gov (United States)

    Woodward, Richard B; Spanias, John A; Hargrove, Levi J

    2016-08-01

    Powered lower limb prostheses have the ability to provide greater mobility for amputee patients. Such prostheses often have pre-programmed modes which allow activities such as climbing stairs and descending ramps, something which many amputees struggle with when using non-powered limbs. Previous literature has shown how pattern classification can allow seamless transitions between modes with high accuracy and without any user interaction. Although accurate, training and testing each subject with their own dependent data is time consuming. By using subject-independent datasets, whereby a unique subject is tested against a pooled dataset of other subjects, we believe subject training time can be reduced while still achieving an accurate classification. We present here an intent recognition system using an artificial neural network (ANN) with a scaled conjugate gradient learning algorithm to classify gait intention with user-dependent and independent datasets for six unilateral lower limb amputees. We compare these results against a linear discriminant analysis (LDA) classifier. The ANN was found to have significantly lower classification error (P<0.05) than LDA with all user-dependent step-types, as well as transitional steps for user-independent datasets. Both types of classifiers are capable of making fast decisions; 1.29 and 2.83 ms for the LDA and ANN respectively. These results suggest that ANNs can provide suitable and accurate offline classification in prosthesis gait prediction.
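
    The ANN-versus-LDA comparison can be mocked up quickly on synthetic data. Note that scikit-learn offers no scaled-conjugate-gradient solver, so lbfgs stands in for it here, and the features and labels below are random placeholders rather than amputee gait data:

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.neural_network import MLPClassifier
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(4)
        X = rng.normal(size=(600, 12))     # stand-in mechanical-sensor features
        y = rng.integers(0, 5, size=600)   # 5 locomotion modes
        Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

        for clf in (LinearDiscriminantAnalysis(),
                    MLPClassifier(hidden_layer_sizes=(20,), solver="lbfgs", max_iter=500)):
            print(type(clf).__name__, clf.fit(Xtr, ytr).score(Xte, yte))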

  19. User-friendly tool for power flow analysis and distributed generation ...

    African Journals Online (AJOL)

    The intent of power distribution companies (DISCOs) is to deliver electric power to their ... One major way to minimise power loss on a given power system is to install ... The accuracy and reliability of the developed tool was validated using ...

  20. An Enhanced Graphical User Interface for Analyzing the Vulnerability of Electrical Power Systems to Terrorist Attacks

    National Research Council Canada - National Science Library

    Stathakos, Dimitrios

    2003-01-01

    ...) Conforming to Windows standards, the new OD GUI incorporates advanced graphical features, which help the user visualize the model and understand the consequences of interdiction. The new ODs also...

  1. High Temporal Resolution Mapping of Seismic Noise Sources Using Heterogeneous Supercomputers

    Science.gov (United States)

    Paitz, P.; Gokhberg, A.; Ermert, L. A.; Fichtner, A.

    2017-12-01

    The time- and space-dependent distribution of seismic noise sources is becoming a key ingredient of modern real-time monitoring of various geo-systems like earthquake fault zones, volcanoes, geothermal and hydrocarbon reservoirs. We present results of an ongoing research project conducted in collaboration with the Swiss National Supercomputing Centre (CSCS). The project aims at building a service providing seismic noise source maps for Central Europe with high temporal resolution. We use source imaging methods based on the cross-correlation of seismic noise records from all seismic stations available in the region of interest. The service is hosted on the CSCS computing infrastructure; all computationally intensive processing is performed on the massively parallel heterogeneous supercomputer "Piz Daint". The solution architecture is based on the Application-as-a-Service concept to provide the interested researchers worldwide with regular access to the noise source maps. The solution architecture includes the following sub-systems: (1) data acquisition responsible for collecting, on a periodic basis, raw seismic records from the European seismic networks, (2) high-performance noise source mapping application responsible for the generation of source maps using cross-correlation of seismic records, (3) back-end infrastructure for the coordination of various tasks and computations, (4) front-end Web interface providing the service to the end-users and (5) data repository. The noise source mapping itself rests on the measurement of logarithmic amplitude ratios in suitably pre-processed noise correlations, and the use of simplified sensitivity kernels. During the implementation we addressed various challenges, in particular, selection of data sources and transfer protocols, automation and monitoring of daily data downloads, ensuring the required data processing performance, design of a general service-oriented architecture for coordination of various sub-systems, and
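
    The core computational kernel of such noise-source imaging is the cross-correlation of records from station pairs. Below is a minimal sketch using synthetic traces and scipy; the project's actual pre-processing chain (detrending, bandpass filtering, spectral whitening) is omitted, and the delay value is invented.

```python
import numpy as np
from scipy import signal

# Minimal sketch of the cross-correlation step at the heart of
# ambient-noise source imaging.
rng = np.random.default_rng(1)
src = rng.normal(size=2048)                              # common wavefield
sta_a = src + 0.5 * rng.normal(size=2048)
sta_b = np.roll(src, 37) + 0.5 * rng.normal(size=2048)   # 37-sample delay

cc = signal.correlate(sta_b, sta_a, mode="full")
lags = signal.correlation_lags(len(sta_b), len(sta_a), mode="full")
print("estimated delay:", lags[np.argmax(cc)], "samples")  # ~ +37
```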

  2. EDF's experience with supercomputing and challenges ahead - towards multi-physics and multi-scale approaches

    International Nuclear Information System (INIS)

    Delbecq, J.M.; Banner, D.

    2003-01-01

    Nuclear power plants are a major asset of the EDF company. For them to remain so, particularly in a context of deregulation, three conditions must be met: competitiveness, safety and public acceptance. These stakes apply both to existing plants and to future reactors. The purpose of the presentation is to explain how supercomputing can help EDF satisfy these requirements. Three examples are described in detail: ensuring optimal use of nuclear fuel under wholly safe conditions, understanding and simulating material deterioration mechanisms, and moving forward with numerical simulation for the performance of EDF's activities. In conclusion, a broader vision of EDF's long-term R and D in the field of numerical simulation is given, in particular five challenges taken up by EDF together with its industrial and scientific partners. (author)

  3. Simulation of dynamic response of nuclear power plant based on user-defined model in PSASP

    International Nuclear Information System (INIS)

    Zhao Jie; Liu Dichen; Xiong Li; Chen Qi; Du Zhi; Lei Qingsheng

    2010-01-01

    Based on the energy transformation regularities in the physical processes of pressurized water reactors (PWR), PWR NPP models are established in PSASP (Power System Analysis Software Package) that are applicable to calculating the dynamic behavior of a PWR NPP and power system transient stability. The power dynamic characteristics of the PWR NPP are simulated and analyzed, including the PWR's self-stability, self-regulation and power step responses under the power regulation system. The results indicate that the PWR NPP can withstand certain external disturbances and a 10%Pn step under negative temperature feedback. The power regulation rate can reach 5%Pn/min under the power regulation system, which meets the requirement for peak regulation in the power grid. (authors)
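
    The PSASP models themselves are not available in this record. Purely as an illustration of a power step response under feedback, the toy sketch below approximates the closed-loop power behavior by a first-order lag; the time constant is an invented placeholder, not a value from the paper.

```python
import numpy as np

# Illustrative toy model only: the paper's PSASP models are far richer.
# Closed-loop power response to a 10%Pn step demand, approximated by a
# first-order lag dP/dt = (Pref - P) / tau with an assumed tau.
tau = 20.0                               # s, assumed effective time constant
dt = 0.5                                 # s, integration step
t = np.arange(0.0, 120.0, dt)
p_ref = np.where(t < 10.0, 1.00, 1.10)   # 10%Pn step demand at t = 10 s

p = np.empty_like(t)
p[0] = 1.0
for k in range(1, len(t)):
    p[k] = p[k-1] + dt / tau * (p_ref[k] - p[k-1])

print(f"power at t = 60 s: {p[t == 60.0][0]:.3f} Pn")
print(f"power at t = 120 s: {p[-1]:.3f} Pn")
```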

  4. Supercomputers and the mathematical modeling of high complexity problems

    International Nuclear Information System (INIS)

    Belotserkovskii, Oleg M

    2010-01-01

    This paper is a review of many works carried out by members of our scientific school in past years. The general principles of constructing numerical algorithms for high-performance computers are described. Several techniques are highlighted and these are based on the method of splitting with respect to physical processes and are widely used in computing nonlinear multidimensional processes in fluid dynamics, in studies of turbulence and hydrodynamic instabilities and in medicine and other natural sciences. The advances and developments related to the new generation of high-performance supercomputing in Russia are presented.

  5. Performance Evaluation of Supercomputers using HPCC and IMB Benchmarks

    Science.gov (United States)

    Saini, Subhash; Ciotti, Robert; Gunney, Brian T. N.; Spelce, Thomas E.; Koniges, Alice; Dossa, Don; Adamidis, Panagiotis; Rabenseifner, Rolf; Tiyyagura, Sunil R.; Mueller, Matthias; et al.

    2006-01-01

    The HPC Challenge (HPCC) benchmark suite and the Intel MPI Benchmark (IMB) are used to compare and evaluate the combined performance of processor, memory subsystem and interconnect fabric of five leading supercomputers - SGI Altix BX2, Cray X1, Cray Opteron Cluster, Dell Xeon cluster, and NEC SX-8. These five systems use five different networks (SGI NUMALINK4, Cray network, Myrinet, InfiniBand, and NEC IXS). The complete set of HPCC benchmarks is run on each of these systems. Additionally, we present Intel MPI Benchmarks (IMB) results to study the performance of 11 MPI communication functions on these systems.

  6. A fast random number generator for the Intel Paragon supercomputer

    Science.gov (United States)

    Gutbrod, F.

    1995-06-01

    A pseudo-random number generator is presented which makes optimal use of the architecture of the i860 microprocessor and which is expected to have a very long period. It is therefore a good candidate for use on the parallel supercomputer Paragon XP. In the assembler version, it needs 6.4 cycles for a real*4 random number. There is a FORTRAN routine which yields identical numbers up to rare and minor rounding discrepancies, and it needs 28 cycles. The FORTRAN performance on other microprocessors is somewhat better. Arguments for the quality of the generator and some numerical tests are given.
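
    The abstract does not state which algorithm the generator uses, so the sketch below is not a reconstruction of it; it shows a generic additive lagged-Fibonacci generator (lags 24/55, a classic choice) only as an example of the fast, long-period generator class favoured on parallel machines of that era.

```python
# Generic additive lagged-Fibonacci generator, x[n] = (x[n-24] + x[n-55])
# mod 2^32. Illustrative only; not the paper's (unspecified) algorithm.
class LaggedFibonacci:
    def __init__(self, seed=12345, short_lag=24, long_lag=55):
        self.s, self.r = short_lag, long_lag
        # Fill the initial state with a simple LCG (seeding is illustrative).
        state, x = [], seed & 0xFFFFFFFF
        for _ in range(long_lag):
            x = (1664525 * x + 1013904223) & 0xFFFFFFFF
            state.append(x)
        self.state = state
        self.i = 0

    def next_uint32(self):
        # Negative indices wrap around the circular state buffer.
        new = (self.state[self.i - self.s]
               + self.state[self.i - self.r]) & 0xFFFFFFFF
        self.state[self.i] = new
        self.i = (self.i + 1) % len(self.state)
        return new

    def next_float(self):
        return self.next_uint32() / 2**32   # uniform in [0, 1)

gen = LaggedFibonacci()
print([round(gen.next_float(), 4) for _ in range(5)])
```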

  7. Power Consumption Efficiency Evaluation of Multi-User Full-Duplex Visible Light Communication Systems for Smart Home Technologies

    Directory of Open Access Journals (Sweden)

    Muhammad Tabish Niaz

    2017-02-01

    Full Text Available Visible light communication (VLC) has recently gained significant academic and industrial attention. VLC has great potential to supplement the functioning of the upcoming radio-frequency (RF) based 5G networks. It is best suited for home, office, and commercial indoor environments as it provides a high bandwidth and high data rate, and the visible light spectrum is free to use. This paper proposes a multi-user full-duplex VLC system using red-green-blue (RGB) and white light-emitting diodes (LEDs) for smart home technologies. It utilizes red, green, and blue LEDs for downlink transmission and a simple phosphor white LED for uplink transmission. The red and green color bands are used for user data and smart devices, respectively, while the blue color band is used with the white LED for uplink transmission. A simulation was carried out to verify the performance of the proposed multi-user full-duplex VLC system. In addition to the performance evaluation, a cost-power consumption analysis was performed by comparing the power consumption and the resulting cost of the proposed VLC system to those of traditional Wi-Fi-based systems and hybrid systems that utilize both VLC and Wi-Fi. Our findings showed that the proposed system improved the data rate and bit-error rate performance while minimizing the power consumption and the associated costs. These results demonstrate that a full-duplex VLC system is a feasible solution for indoor environments, as it provides greater cost savings and energy efficiency than traditional Wi-Fi-based systems and hybrid systems that utilize both VLC and Wi-Fi.

  8. Power to the People: End-User Building of Digital Library Collections.

    Science.gov (United States)

    Witten, Ian H.; Bainbridge, David; Boddie, Stefan J.

    Digital library systems focus principally on the reader: the consumer of the material that constitutes the library. In contrast, this paper describes an interface that makes it easy for people to build their own library collections. Collections may be built and served locally from the user's own Web server, or (given appropriate permissions)…

  9. User capacities and operation forces : Requirements for body-powered upper-limb prostheses

    NARCIS (Netherlands)

    Hichert, M.

    2017-01-01

    In the Netherlands approximately 3750 persons have an arm defect: they miss (part of) their hand, forearm or even their entire arm. The majority of these people are in the possession of a prosthesis. This prosthesis can be purely cosmetic, or offer the user some grasping function. The latter can

  10. Centralized supercomputer support for magnetic fusion energy research

    International Nuclear Information System (INIS)

    Fuss, D.; Tull, G.G.

    1984-01-01

    High-speed computers with large memories are vital to magnetic fusion energy research. Magnetohydrodynamic (MHD), transport, equilibrium, Vlasov, particle, and Fokker-Planck codes that model plasma behavior play an important role in designing experimental hardware and interpreting the resulting data, as well as in advancing plasma theory itself. The size, architecture, and software of supercomputers to run these codes are often the crucial constraints on the benefits such computational modeling can provide. Hence, vector computers such as the CRAY-1 offer a valuable research resource. To meet the computational needs of the fusion program, the National Magnetic Fusion Energy Computer Center (NMFECC) was established in 1974 at the Lawrence Livermore National Laboratory. Supercomputers at the central computing facility are linked to smaller computer centers at each of the major fusion laboratories by a satellite communication network. In addition to providing large-scale computing, the NMFECC environment stimulates collaboration and the sharing of computer codes and data among the many fusion researchers in a cost-effective manner

  11. Supercomputer algorithms for reactivity, dynamics and kinetics of small molecules

    International Nuclear Information System (INIS)

    Lagana, A.

    1989-01-01

    Even for small systems, the accurate characterization of reactive processes is so demanding of computer resources as to suggest the use of supercomputers having vector and parallel facilities. The full advantages of vector and parallel architectures can sometimes be obtained by simply modifying existing programs, vectorizing the manipulation of vectors and matrices, and requiring the parallel execution of independent tasks. More often, however, a significant time saving can be obtained only when the computer code undergoes a deeper restructuring, requiring a change in the computational strategy or, more radically, the adoption of a different theoretical treatment. This book discusses supercomputer strategies based upon exact and approximate methods aimed at calculating the electronic structure and the reactive properties of small systems. The book shows how, in recent years, intense design activity has led to the ability to calculate accurate electronic structures for reactive systems, exact and high-level approximations to three-dimensional reactive dynamics, and efficient directive and declaratory software for the modelling of complex systems

  12. The TeraGyroid Experiment – Supercomputing 2003

    Directory of Open Access Journals (Sweden)

    R.J. Blake

    2005-01-01

    Full Text Available Amphiphiles are molecules with hydrophobic tails and hydrophilic heads. When dispersed in solvents, they self-assemble into complex mesophases including the beautiful cubic gyroid phase. The goal of the TeraGyroid experiment was to study defect pathways and dynamics in these gyroids. The UK's supercomputing and USA's TeraGrid facilities were coupled together, through a dedicated high-speed network, into a single computational Grid for research work that peaked around the Supercomputing 2003 conference. The gyroids were modeled using lattice Boltzmann methods, with parameter spaces explored using many simulations at 128³ and other grid sizes, this data being used to inform the world's largest three-dimensional time-dependent simulation, with 1024³ grid points. The experiment generated some 2 TBytes of useful data. In terms of Grid technology, the project demonstrated the migration of simulations (using Globus middleware) to and fro across the Atlantic, exploiting the availability of resources. Integration of the systems accelerated the time to insight. Distributed visualisation of the output datasets enabled the parameter space of the interactions within the complex fluid to be explored from a number of sites, informed by discourse over the Access Grid. The project was sponsored by EPSRC (UK) and NSF (USA), with trans-Atlantic optical bandwidth provided by British Telecommunications.

  13. User instructions for levelized power generation cost codes using an IBM-type PC

    International Nuclear Information System (INIS)

    Coen, J.J.; Delene, J.G.

    1989-01-01

    Programs for the calculation of levelized power generation costs using an IBM or compatible PC are described. Cost calculations for nuclear plants and coal-fired plants include capital investment cost, operation and maintenance cost, fuel cycle cost, decommissioning cost, and total levelized power generation cost. 7 refs., 36 figs., 4 tabs
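
    The report's cost models are not reproduced in this record. The sketch below shows only the basic levelized-cost arithmetic such codes implement, annualizing capital with a fixed charge rate and dividing total annual cost by generation; every input number is an invented placeholder, not a value from the report.

```python
# Hedged sketch of levelized power generation cost arithmetic.
def levelized_cost(capital_cost, fixed_charge_rate, annual_om, annual_fuel,
                   annual_generation_mwh, annual_decommissioning=0.0):
    """Levelized generation cost in $/MWh:
    (annualized capital + O&M + fuel + decommissioning) / generation."""
    annual_capital = capital_cost * fixed_charge_rate
    total_annual = (annual_capital + annual_om + annual_fuel
                    + annual_decommissioning)
    return total_annual / annual_generation_mwh

cost = levelized_cost(
    capital_cost=3.0e9,                       # $, illustrative investment
    fixed_charge_rate=0.10,                   # levelized fixed-charge rate
    annual_om=9.0e7,                          # $/yr operation & maintenance
    annual_fuel=6.0e7,                        # $/yr fuel cycle
    annual_generation_mwh=1000 * 8760 * 0.85, # 1000 MW at 85% capacity factor
    annual_decommissioning=1.0e7,             # $/yr sinking-fund payment
)
print(f"levelized cost: {cost:.1f} $/MWh")
```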

  14. TPDWR2: thermal power determination for Westinghouse reactors, Version 2. User's guide

    International Nuclear Information System (INIS)

    Kaczynski, G.M.; Woodruff, R.W.

    1985-12-01

    TPDWR2 is a computer program which was developed to determine the amount of thermal power generated by any Westinghouse nuclear power plant. From system conditions, TPDWR2 calculates enthalpies of water and steam and the power transferred to or from various components in the reactor coolant system and to or from the chemical and volume control system. From these results and assuming that the reactor core is operating at constant power and is at thermal equilibrium, TPDWR2 calculates the thermal power generated by the reactor core. TPDWR2 runs on the IBM PC and XT computers when IBM Personal Computer DOS, Version 2.00 or 2.10, and IBM Personal Computer Basic, Version D2.00 or D2.10, are stored on the same diskette with TPDWR2
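
    TPDWR2 itself is an IBM PC BASIC program; the Python sketch below only illustrates the central heat-balance idea, computing power from flow and an enthalpy rise. The flow and enthalpy values are illustrative placeholders, whereas the real code evaluates steam-table properties and accounts for every reactor-coolant and CVCS energy flow.

```python
# Hedged sketch of the steam-generator heat balance behind thermal power
# determination: P = mdot * (h_steam - h_feedwater).
def steam_generator_power_mw(feedwater_flow_kg_s, h_steam_kj_kg, h_feed_kj_kg):
    """Power transferred to one steam generator, in MW."""
    return feedwater_flow_kg_s * (h_steam_kj_kg - h_feed_kj_kg) / 1e3

# Illustrative four-loop plant: per-loop flow 470 kg/s, steam at ~2770 kJ/kg,
# feedwater at ~990 kJ/kg (placeholder property values, not steam-table data).
per_loop = steam_generator_power_mw(470.0, 2770.0, 990.0)
core_power = 4 * per_loop       # ignoring pump heat and CVCS terms here
print(f"per loop: {per_loop:.0f} MW, core thermal power: {core_power:.0f} MW")
```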

  15. The Influence of Social Networks on the Development of Recruitment Actions that Favor User Interface Design and Conversions in Mobile Applications Powered by Linked Data

    OpenAIRE

    Palos-Sanchez, Pedro R.; Saura, Jose Ramon; Debasa, Felipe

    2018-01-01

    This study analyzes the most important influence factors in the literature, which have the greatest influence on the conversions obtained in a mobile application powered by linked data. With the study of user interface design and a small user survey (n = 101,053), we studied the influence of social networks, advertising, and promotional and recruitment actions in conversions for mobile applications powered by linked data. The analysis of the users’ behavior and their application in the design...

  16. Renewable Energy Power System Modular SIMulators: RPM-Sim User's Guide (Supersedes October 1999 edition)

    Energy Technology Data Exchange (ETDEWEB)

    Bialasiewicz, J.T.; Muljadi, E.; Nix, G.R.; Drouilhet, S.

    2001-03-28

    This version of the RPM-SIM User's Guide supersedes the October 1999 edition. Using the VisSim™ visual environment, researchers developed a modular simulation system to facilitate an application-specific, low-cost study of the system dynamics for wind-diesel hybrid power systems. This manual presents the principal modules of the simulator and, using case studies of a hybrid system, demonstrates some of the benefits that can be gained from understanding the effects of the designer's modifications to these complex dynamic systems.

  17. KfK-seminar series on supercomputing and visualization from May till September 1992

    International Nuclear Information System (INIS)

    Hohenhinnebusch, W.

    1993-05-01

    During the period of May 1992 to September 1992 a series of seminars was held at KfK on several topics of supercomputing in different fields of application. The aim was to demonstrate the importance of supercomputing and visualization in numerical simulations of complex physical and technical phenomena. This report contains the collection of all submitted seminar papers. (orig./HP) [de]

  18. Comprehensive efficiency analysis of supercomputer resource usage based on system monitoring data

    Science.gov (United States)

    Mamaeva, A. A.; Shaykhislamov, D. I.; Voevodin, Vad V.; Zhumatiy, S. A.

    2018-03-01

    One of the main problems of modern supercomputers is the low efficiency of their usage, which leads to significant idle time of computational resources and, in turn, slows scientific research. This paper presents three approaches to studying the efficiency of supercomputer resource usage based on the analysis of monitoring data. The first approach analyzes computing resource utilization statistics, which makes it possible to identify typical classes of programs, explore the structure of the supercomputer job flow and track overall trends in supercomputer behavior. The second approach is aimed specifically at analyzing off-the-shelf software packages and libraries installed on the supercomputer, since the efficiency of their usage is becoming an increasingly important factor for the efficient functioning of the entire machine. Within the third approach, abnormal jobs, i.e. jobs whose inefficient behavior differs significantly from the standard behavior of the overall supercomputer job flow, are detected. For each approach, results obtained in practice at the Supercomputer Center of Moscow State University are demonstrated.
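
    As a hedged illustration of the third approach only: the sketch below flags jobs whose utilization deviates strongly from the overall job flow using a simple z-score. The paper's actual detection method is richer, and the data here is synthetic.

```python
import numpy as np

# Flag abnormal jobs by z-score on per-job mean CPU utilization
# (a toy stand-in for the paper's anomaly detection).
rng = np.random.default_rng(42)
cpu_load = rng.normal(loc=75.0, scale=8.0, size=500)   # % per job
cpu_load[[17, 203]] = [2.0, 1.0]                       # two stuck/idle jobs

z = (cpu_load - cpu_load.mean()) / cpu_load.std()
abnormal = np.flatnonzero(np.abs(z) > 3.0)
print("abnormal job ids:", abnormal)                   # flags the injected jobs
```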

  19. Automatic discovery of the communication network topology for building a supercomputer model

    Science.gov (United States)

    Sobolev, Sergey; Stefanov, Konstantin; Voevodin, Vadim

    2016-10-01

    The Research Computing Center of Lomonosov Moscow State University is developing the Octotron software suite for automatic monitoring and mitigation of emergency situations in supercomputers so as to maximize hardware reliability. The suite is based on a software model of the supercomputer. The model uses a graph to describe the computing system components and their interconnections. One of the most complex components of a supercomputer that needs to be included in the model is its communication network. This work describes the proposed approach for automatically discovering the Ethernet communication network topology in a supercomputer and its description in terms of the Octotron model. This suite automatically detects computing nodes and switches, collects information about them and identifies their interconnections. The application of this approach is demonstrated on the "Lomonosov" and "Lomonosov-2" supercomputers.
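
    The Octotron model format itself is not shown in this record. As an illustration of the end product of such discovery, the sketch below builds a small graph of compute nodes, switches and links with networkx; the link list is made up, whereas the real suite derives it from switch queries.

```python
import networkx as nx

# Toy graph of a discovered Ethernet topology (invented link list).
links = [
    ("switch-1", "node-001"), ("switch-1", "node-002"),
    ("switch-2", "node-003"), ("switch-1", "switch-2"),
]

g = nx.Graph()
for a, b in links:
    for name in (a, b):
        kind = "switch" if name.startswith("switch") else "compute_node"
        g.add_node(name, kind=kind)
    g.add_edge(a, b)

print(g.number_of_nodes(), "devices,", g.number_of_edges(), "links")
print("neighbours of switch-1:", sorted(g.neighbors("switch-1")))
```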

  20. Joystick-controlled video console game practice for developing power wheelchairs users' indoor driving skills.

    Science.gov (United States)

    Huang, Wei Pin; Wang, Chia Cheng; Hung, Jo Hua; Chien, Kai Chun; Liu, Wen-Yu; Cheng, Chih-Hsiu; Ng, How-Hing; Lin, Yang-Hua

    2015-02-01

    [Purpose] This study aimed to determine the effectiveness of joystick-controlled video console games in enhancing subjects' ability to control power wheelchairs. [Subjects and Methods] Twenty healthy young adults without prior experience of driving power wheelchairs were recruited. Four commercially available video games were used as training programs to practice joystick control in catching falling objects, crossing a river, tracing the route while floating on a river, and navigating through a garden maze. An indoor power wheelchair driving test, including straight lines, and right and left turns, was completed before and after the video game practice, during which electromyographic signals of the upper limbs were recorded. The paired t-test was used to compare the differences in driving performance and muscle activities before and after the intervention. [Results] Following the video game intervention, participants took significantly less time to complete the course, with less lateral deviation when turning the indoor power wheelchair. However, muscle activation in the upper limbs was not significantly affected. [Conclusion] This study demonstrates the feasibility of using joystick-controlled commercial video games to train individuals in the control of indoor power wheelchairs.

  1. SUPERCOMPUTER SIMULATION OF CRITICAL PHENOMENA IN COMPLEX SOCIAL SYSTEMS

    Directory of Open Access Journals (Sweden)

    Petrus M.A. Sloot

    2014-09-01

    Full Text Available The paper describes the problem of computer simulation of critical phenomena in complex social systems on petascale computing systems within the framework of a complex networks approach. A three-layer system of nested models of complex networks is proposed, comprising an aggregated analytical model to identify critical phenomena, a detailed model of individualized network dynamics, and a model for adjusting the topological structure of a complex network. A scalable parallel algorithm covering all layers of complex network simulation is proposed. The performance of the algorithm is studied on different supercomputing systems. Issues of the software and information infrastructure for complex network simulation are discussed, including the organization of distributed calculations, crawling data in social networks, and visualization of results. Applications of the developed methods and technologies are considered, including simulation of criminal network disruption, fast rumor spreading in social networks, the evolution of financial networks, and epidemic spreading.

  2. Cooperative visualization and simulation in a supercomputer environment

    International Nuclear Information System (INIS)

    Ruehle, R.; Lang, U.; Wierse, A.

    1993-01-01

    The article takes a closer look at the requirements imposed by the idea of integrating all the components into a homogeneous software environment. To this end, several methods for the distribution of applications depending on problem type are discussed. The methods currently available at the University of Stuttgart Computer Center for the distribution of applications are further explained. Finally, the aims and characteristics of a European-sponsored project called PAGEIN are explained, which fits perfectly into the line of developments at RUS. The aim of the project is to experiment with future cooperative working modes of aerospace scientists in a high-speed distributed supercomputing environment. Project results will have an impact on the development of real future scientific application environments. (orig./DG)

  3. Lectures in Supercomputational Neurosciences Dynamics in Complex Brain Networks

    CERN Document Server

    Graben, Peter beim; Thiel, Marco; Kurths, Jürgen

    2008-01-01

    Computational Neuroscience is a burgeoning field of research where only the combined effort of neuroscientists, biologists, psychologists, physicists, mathematicians, computer scientists, engineers and other specialists, e.g. from linguistics and medicine, seems able to expand the limits of our knowledge. The present volume is an introduction, largely from the physicists' perspective, to the subject matter with in-depth contributions by system neuroscientists. A conceptual model for complex networks of neurons is introduced that incorporates many important features of the real brain, such as various types of neurons, various brain areas, inhibitory and excitatory coupling and the plasticity of the network. The computational implementation on supercomputers, which is introduced and discussed in detail in this book, will enable readers to modify and adapt the algorithm for their own research. Worked-out examples of applications are presented for networks of Morris-Lecar neurons to model the cortical co...

  4. A supercomputing application for reactors core design and optimization

    International Nuclear Information System (INIS)

    Hourcade, Edouard; Gaudier, Fabrice; Arnaud, Gilles; Funtowiez, David; Ammar, Karim

    2010-01-01

    Advanced nuclear reactor design is often an intuition-driven process in which designers first develop or use simplified simulation tools for each physical phenomenon involved. As the project develops, the complexity in each discipline increases, and the implementation of chaining/coupling capabilities adapted to a supercomputing optimization process is often postponed to a later step, so the task becomes increasingly challenging. In the context of renewed reactor designs, first-realization projects often run in parallel with advanced design, although they are very dependent on the final options. As a consequence, tools are needed to globally assess and optimize reactor core features with the accuracy of the ongoing design methods. This should be possible within reasonable simulation time and without requiring advanced computing skills at the project-management level. These tools should also easily accommodate modeling progress in each discipline throughout the project's lifetime. An early-stage development of a multi-physics package adapted to supercomputing is presented. The URANIE platform, developed at CEA and based on the data analysis framework ROOT, is very well adapted to this approach. It offers diversified sampling techniques (SRS, LHS, qMC), fitting tools (neural networks...) and optimization techniques (genetic algorithms), and makes database management and visualization very easy. In this paper we present the various implementation steps of this core physics tool, in which neutronics, thermal-hydraulics and fuel mechanics codes are run simultaneously. A relevant example of the optimization of nuclear reactor safety characteristics is presented, and the flexibility of the URANIE tool is illustrated with several approaches to improving the quality of the Pareto front. (author)
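
    As a small illustration of one ingredient mentioned above, Latin hypercube sampling of a design space, here is a sketch using scipy's qmc module. URANIE itself is a C++/ROOT platform, so scipy only demonstrates the idea, and the parameter names and ranges are invented.

```python
from scipy.stats import qmc

# Latin hypercube sample of three made-up core design parameters:
# fuel pin diameter (cm), core height (cm), coolant inlet temperature (degC).
lhs = qmc.LatinHypercube(d=3, seed=7)      # requires scipy >= 1.7
unit_sample = lhs.random(n=8)              # 8 designs in the unit cube

lower, upper = [0.6, 80.0, 380.0], [1.0, 120.0, 420.0]
designs = qmc.scale(unit_sample, lower, upper)
print(designs.round(2))
```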

  5. Intelligent power wheelchair use in long-term care: potential users' experiences and perceptions.

    Science.gov (United States)

    Rushton, Paula W; Mortenson, Ben W; Viswanathan, Pooja; Wang, Rosalie H; Miller, William C; Hurd Clarke, Laura

    2017-10-01

    Long-term care (LTC) residents with cognitive impairments frequently experience limited mobility and participation in preferred activities. Although a power wheelchair could mitigate some of these mobility and participation challenges, this technology is often not prescribed for this population due to safety concerns. An intelligent power wheelchair (IPW) system represents a potential intervention that could help to overcome these concerns. The purpose of this study was to explore a) how residents experienced an IPW that used three different modes of control and b) what perceived effect the IPW would have on their daily lives. We interviewed 10 LTC residents with mild or moderate cognitive impairment twice, once before and once after testing the IPW. Interviews were conducted using a semi-structured interview guide, audio recorded and transcribed verbatim for thematic analyses. Our analyses identified three overarching themes: (1) the difference an IPW would make, (2) the potential impact of the IPW on others and (3) IPW-related concerns. Findings from this study confirm the need for and potential benefits of IPW use in LTC. Future studies will involve testing IPW improvements based on feedback and insights from this study. Implications for rehabilitation Intelligent power wheelchairs may enhance participation and improve safety and feelings of well-being for long-term care residents with cognitive impairments. Intelligent power wheelchairs could potentially have an equally positive impact on facility staff, other residents, and family and friends by decreasing workload and increasing safety.

  6. User's manual for levelized power generation cost using an IBM PC

    International Nuclear Information System (INIS)

    Fuller, L.C.

    1985-06-01

    Programs for the estimation of levelized electric power generation costs using the BASIC interpreter on an IBM PC are described. Procedures for light-water reactor plants and coal-fired plants include capital investment cost, operation and maintenance cost, fuel cycle cost, nuclear decommissioning cost, and levelized total generation cost

  7. User's manual for the BNW-II optimization code for dry/wet-cooled power plants

    International Nuclear Information System (INIS)

    Braun, D.J.; Bamberger, J.A.; Braun, D.J.; Faletti, D.W.; Wiles, L.E.

    1978-05-01

    The User's Manual describes how to operate BNW-II, a computer code developed by the Pacific Northwest Laboratory (PNL) as a part of its activities under the Department of Energy (DOE) Dry Cooling Enhancement Program. The computer program offers a comprehensive method of evaluating the cost savings potential of dry/wet-cooled heat rejection systems. Going beyond simple "figure-of-merit" cooling tower optimization, this method includes such items as the cost of annual replacement capacity, and the optimum split between plant scale-up and replacement capacity, as well as the purchase and operating costs of all major heat rejection components. Hence the BNW-II code is a useful tool for determining potential cost savings of new dry/wet surfaces, new piping, or other components as part of an optimized system for a dry/wet-cooled plant

  8. The Influence of Social Networks on the Development of Recruitment Actions that Favor User Interface Design and Conversions in Mobile Applications Powered by Linked Data

    Directory of Open Access Journals (Sweden)

    Pedro R. Palos-Sanchez

    2018-01-01

    Full Text Available This study analyzes the influence factors that are most important in the literature and have the greatest influence on the conversions obtained in a mobile application powered by linked data. With a study of user interface design and a user survey (n = 101,053), we studied the influence of social networks, advertising, and promotional and recruitment actions on conversions for mobile applications powered by linked data. The analysis of users' behavior and its application in the design of promotion and recruitment actions constitutes an important part of current theories of digital marketing. However, this study shows that its results may be contradictory and depend on other factors and circumstances when mobile applications powered by linked data are considered. The predictive value reached by the developed model may be useful for professionals and researchers in the field of digital marketing and user interface design in mobile applications powered by linked data.

  9. Performance of green LTE networks powered by the smart grid with time varying user density

    KAUST Repository

    Ghazzai, Hakim

    2013-09-01

    In this study, we implement a green heuristic algorithm involving a base station sleeping strategy that aims to ensure energy savings for the radio access network of 4G LTE (Fourth Generation Long Term Evolution) mobile networks. We propose an energy procurement model that takes into consideration the existence of multiple energy providers in the smart grid power system (e.g. fossil fuel and renewable energy sources), in addition to photovoltaic panels deployed at base station sites. Moreover, the analysis is based on the dynamic daily variation of traffic and aims to maintain the network's quality of service. Our simulation results show that an important reduction of CO2 emissions can be reached by optimal power allocation over the active base stations. Copyright © 2013 by the Institute of Electrical and Electronics Engineers, Inc.

  10. Ubuntu Linux Toolbox 1000 + Commands for Ubuntu and Debian Power Users

    CERN Document Server

    Negus, Christopher

    2008-01-01

    In this handy, compact guide, you'll explore a ton of powerful Ubuntu Linux commands while you learn to use Ubuntu Linux as the experts do: from the command line. Try out more than 1,000 commands to find and get software, monitor system health and security, and access network resources. Then, apply the skills you learn from this book to use and administer desktops and servers running Ubuntu, Debian, and KNOPPIX or any other Linux distribution.

  11. Human factors involvement in bringing the power of AI to a heterogeneous user population

    Science.gov (United States)

    Czerwinski, Mary; Nguyen, Trung

    1994-01-01

    The Human Factors involvement in developing COMPAQ QuickSolve, an electronic problem-solving and information system for Compaq's line of networked printers, is described. Empowering customers with expert system technology so they could solve advanced networked printer problems on their own was a major goal in designing this system. This would minimize customer down-time, reduce the number of phone calls to the Compaq Customer Support Center, improve customer satisfaction, and, most importantly, differentiate Compaq printers in the marketplace by providing the best, and most technologically advanced, customer support. This represents a re-engineering of Compaq's customer support strategy and implementation. In its first-generation system, SMART, the objective was to provide expert knowledge to Compaq's help desk operation to more quickly and correctly answer customer questions and problems. QuickSolve is a second-generation system in that customer support is put directly in the hands of consumers. As a result, the design of QuickSolve presented a number of challenging issues. Because the product would be used by a diverse and heterogeneous set of users, a significant amount of human factors research and analysis was required while designing and implementing the system, and this research also shaped the organization and design of the expert system component.

  12. Multiscale Hy3S: Hybrid stochastic simulation for supercomputers

    Directory of Open Access Journals (Sweden)

    Kaznessis Yiannis N

    2006-02-01

    create biological systems and analyze data. We demonstrate the accuracy and efficiency of Hy3S with examples, including a large-scale system benchmark and a complex bistable biochemical network with positive feedback. The software itself is open-sourced under the GPL license and is modular, allowing users to modify it for their own purposes. Conclusion Hy3S is a powerful suite of simulation programs for simulating the stochastic dynamics of networks of biochemical reactions. Its first public version enables computational biologists to more efficiently investigate the dynamics of realistic biological systems.
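
    Hy3S implements *hybrid* stochastic methods, which are not reproduced here. The sketch below shows only the exact Gillespie SSA baseline that such hybrid methods accelerate, applied to a toy birth-death model of protein expression with invented rate constants.

```python
import numpy as np

# Exact Gillespie SSA for a toy birth-death model:
# 0 -> protein at rate k1; protein -> 0 at rate k2 * P.
def gillespie_birth_death(k1=10.0, k2=0.1, p0=0, t_end=100.0, seed=3):
    rng = np.random.default_rng(seed)
    t, p = 0.0, p0
    times, counts = [t], [p]
    while t < t_end:
        a1, a2 = k1, k2 * p              # reaction propensities
        a0 = a1 + a2
        t += rng.exponential(1.0 / a0)   # exponential waiting time
        p += 1 if rng.random() < a1 / a0 else -1
        times.append(t)
        counts.append(p)
    return np.array(times), np.array(counts)

times, counts = gillespie_birth_death()
print("final count:", counts[-1], "(steady-state mean is k1/k2 = 100)")
```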

  13. Los Alamos Nuclear Plant Analyzer: an interactive power-plant simulation program

    International Nuclear Information System (INIS)

    Steinke, R.; Booker, C.; Giguere, P.; Liles, D.R.; Mahaffy, J.H.; Turner, M.R.

    1984-01-01

    The Nuclear Plant Analyzer (NPA) is a computer-software interface for executing the TRAC or RELAP5 power-plant systems codes. The NPA is designed to use advanced supercomputers, long-distance data communications, and a remote workstation terminal with interactive computer graphics to analyze power-plant thermal-hydraulic behavior. The NPA interface simplifies the running of these codes through automated procedures and dialog interaction. User understanding of simulated-plant behavior is enhanced through graphics displays of calculational results. These results are displayed concurrently with the calculation. The user has the capability to override the plant's modeled control system with hardware-adjustment commands. This gives the NPA the utility of a simulator, and at the same time, the accuracy of an advanced, best-estimate, power-plant systems code for plant operation and safety analysis

  14. Nuclear Plant Analyzer: an interactive TRAC/RELAP Power-Plant Simulation Program

    International Nuclear Information System (INIS)

    Steinke, R.; Booker, C.; Giguere, P.; Liles, D.; Mahaffy, J.; Turner, M.; Wiley, R.

    1984-01-01

    The Nuclear Plant Analyzer (NPA) is a computer-software interface for executing the TRAC or RELAP5 power-plant systems codes. The NPA is designed to use advanced supercomputers, long-distance data communications, and a remote workstation terminal with interactive computer graphics to analyze power-plant thermal-hydraulic behavior. The NPA interface simplifies the running of these codes through automated procedures and dialog interaction. User understanding of simulated-plant behavior is enhanced through graphics displays of calculational results. These results are displayed concurrently with the calculation. The user has the capability to override the plant's modeled control system with hardware adjustment commands. This gives the NPA the utility of a simulator, and at the same time, the accuracy of an advanced, best-estimate, power-plant systems code for plant operation and safety analysis

  15. High temporal resolution mapping of seismic noise sources using heterogeneous supercomputers

    Science.gov (United States)

    Gokhberg, Alexey; Ermert, Laura; Paitz, Patrick; Fichtner, Andreas

    2017-04-01

    Time- and space-dependent distribution of seismic noise sources is becoming a key ingredient of modern real-time monitoring of various geo-systems. Significant interest in seismic noise source maps with high temporal resolution (days) is expected to come from a number of domains, including natural resources exploration, analysis of active earthquake fault zones and volcanoes, as well as geothermal and hydrocarbon reservoir monitoring. Currently, knowledge of noise sources is insufficient for high-resolution subsurface monitoring applications. Near-real-time seismic data, as well as advanced imaging methods to constrain seismic noise sources have recently become available. These methods are based on the massive cross-correlation of seismic noise records from all available seismic stations in the region of interest and are therefore very computationally intensive. Heterogeneous massively parallel supercomputing systems introduced in the recent years combine conventional multi-core CPU with GPU accelerators and provide an opportunity for manifold increase and computing performance. Therefore, these systems represent an efficient platform for implementation of a noise source mapping solution. We present the first results of an ongoing research project conducted in collaboration with the Swiss National Supercomputing Centre (CSCS). The project aims at building a service that provides seismic noise source maps for Central Europe with high temporal resolution (days to few weeks depending on frequency and data availability). The service is hosted on the CSCS computing infrastructure; all computationally intensive processing is performed on the massively parallel heterogeneous supercomputer "Piz Daint". The solution architecture is based on the Application-as-a-Service concept in order to provide the interested external researchers the regular access to the noise source maps. The solution architecture includes the following sub-systems: (1) data acquisition responsible for

  16. NASA's Climate in a Box: Desktop Supercomputing for Open Scientific Model Development

    Science.gov (United States)

    Wojcik, G. S.; Seablom, M. S.; Lee, T. J.; McConaughy, G. R.; Syed, R.; Oloso, A.; Kemp, E. M.; Greenseid, J.; Smith, R.

    2009-12-01

    NASA's High Performance Computing Portfolio, in cooperation with its Modeling, Analysis, and Prediction program, intends to make its climate and earth science models more accessible to a larger community. A key goal of this effort is to open the model development and validation process to the scientific community at large, such that a natural selection process is enabled and results in a more efficient scientific process. One obstacle to others using NASA models is the complexity of the models and the difficulty in learning how to use them. This situation applies not only to scientists who regularly use these models but also to non-typical users who may want to use them, such as scientists from different domains, policy makers, and teachers. Another obstacle is that access to the high performance computing (HPC) accounts on which the models run can be restrictive, with long wait times in job queues and delays caused by an arduous account-application process, especially for foreign nationals. This project explores the utility of desktop supercomputers in providing a complete ready-to-use toolkit of climate research products to investigators and on-demand access to an HPC system. One objective of this work is to pre-package NASA and NOAA models so that new users will not have to spend significant time porting them. In addition, the pre-packaged toolkit will include tools, such as workflow, visualization, social networking web sites, and analysis tools, to assist users in running the models and analyzing the data. The system architecture to be developed will allow for automatic code updates for each user and an effective means of dealing with the data that are generated. We plan to investigate several desktop systems, but our work to date has focused on a Cray CX1. Currently, we are investigating the potential capabilities of several non-traditional development environments. While most NASA and NOAA models are

  17. Demonstration project: Load management on the user side at power shortages

    International Nuclear Information System (INIS)

    Lindskoug, Stefan

    2005-10-01

    The risk of power shortages during extremely cold weather has increased in Sweden. High electricity spot prices are noted to be important for holding down demand: through the consumers' higher price sensitivity, the electricity system can be operated with a lower reserve capacity. The objective of the demonstration project is to show methods for reducing electricity demand at the national level at high spot prices. An important prerequisite is that the measures must be profitable for all parties involved. Four separate studies were made: two concerning households, one industry, and one the district heating sector. The conclusion from the studies is that load management on the customer's side is an economic alternative to investment in new production capacity

  18. Integration of Titan supercomputer at OLCF with ATLAS Production System

    Science.gov (United States)

    Barreiro Megino, F.; De, K.; Jha, S.; Klimentov, A.; Maeno, T.; Nilsson, P.; Oleynik, D.; Padolski, S.; Panitkin, S.; Wells, J.; Wenaus, T.; ATLAS Collaboration

    2017-10-01

    The PanDA (Production and Distributed Analysis) workload management system was developed to meet the scale and complexity of distributed computing for the ATLAS experiment. PanDA-managed resources are distributed worldwide, on hundreds of computing sites, with thousands of physicists accessing hundreds of petabytes of data, and the rate of data processing already exceeds an exabyte per year. While PanDA currently uses more than 200,000 cores at well over 100 Grid sites, future LHC data taking runs will require more resources than Grid computing can possibly provide. Additional computing and storage resources are required. Therefore ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. In this paper we describe a project aimed at the integration of the ATLAS Production System with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA Pilot framework for job submission to Titan's batch queues and local data management, with lightweight MPI wrappers to run single-node workloads in parallel on Titan's multi-core worker nodes. It allows standard ATLAS production jobs to run on otherwise unused Titan resources (backfill). The system has already allowed ATLAS to collect millions of core-hours per month on Titan and execute hundreds of thousands of jobs, while simultaneously improving Titan's utilization efficiency. We discuss the details of the implementation, current experience with running the system, as well as future plans aimed at improvements in scalability and efficiency. Notice: This manuscript has been authored, by employees of Brookhaven Science Associates, LLC under Contract No. DE-AC02-98CH10886 with the U.S. Department of Energy. The publisher by accepting the manuscript for publication acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to
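
    As a hedged sketch of the "lightweight MPI wrapper" idea described above: one MPI rank per node launches an independent single-node payload. The payload command below is a placeholder, not the actual PanDA pilot invocation.

```python
from mpi4py import MPI
import socket
import subprocess
import sys

# Each MPI rank (one per node, in the scheme described above) runs its own
# single-node payload; MPI is used only to launch and synchronize them.
comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Placeholder payload standing in for a real single-node production job.
payload = [sys.executable, "-c", "print('simulated ATLAS payload')"]
result = subprocess.run(payload, capture_output=True, text=True)

print(f"rank {rank}/{size} on {socket.gethostname()}: "
      f"exit={result.returncode} out={result.stdout.strip()!r}")
comm.Barrier()      # wait until every node's payload has finished
```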

  19. Novel Supercomputing Approaches for High Performance Linear Algebra Using FPGAs, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — Supercomputing plays a major role in many areas of science and engineering, and it has had tremendous impact for decades in areas such as aerospace, defense, energy,...

  20. Development of user interface to support automatic program generation of nuclear power plant analysis by module-based simulation system

    International Nuclear Information System (INIS)

    Yoshikawa, Hidekazu; Mizutani, Naoki; Nakaya, Ken-ichiro; Wakabayashi, Jiro

    1988-01-01

    The Module-based Simulation System (MSS) has been developed to realize a new software work environment enabling versatile, flexible dynamic simulation of complex nuclear power systems. The MSS makes full use of modern software technology to replace a large fraction of human software work in complex, large-scale program development with computer automation. The fundamental methods utilized in MSS and a developmental study of the human interface system SESS-1, which helps users generate integrated simulation programs automatically, are summarized as follows: (1) To enhance the usability and commonality of program resources, the basic mathematical models in common use in nuclear power plant analysis are programmed as 'modules' and stored in a module library. The information on the usage of individual modules is stored in a module database, with easy registration, update and retrieval through the interactive management system. (2) Target simulation programs and their input/output files are automatically generated with simple block-wise languages by a precompiler system for module integration purposes. (3) The working time for program development and analysis in an example study of an LMFBR plant thermal-hydraulic transient analysis was demonstrated to be remarkably shortened with the introduction of the interface system SESS-1, developed as an automatic program generation environment. (author)

  1. SUPERCOMPUTERS FOR AIDING ECONOMIC PROCESSES WITH REFERENCE TO THE FINANCIAL SECTOR

    Directory of Open Access Journals (Sweden)

    Jerzy Balicki

    2014-12-01

    Full Text Available The article discusses the use of supercomputers to support business processes, with particular emphasis on the financial sector. Reference is made to selected projects that support economic development. In particular, we propose the use of supercomputers to perform artificial intelligence methods in banking. The proposed methods, combined with modern technology, enable a significant increase in the competitiveness of enterprises and banks by adding new functionality.

  2. Adventures in supercomputing: An innovative program for high school teachers

    Energy Technology Data Exchange (ETDEWEB)

    Oliver, C.E.; Hicks, H.R.; Summers, B.G. [Oak Ridge National Lab., TN (United States); Staten, D.G. [Wartburg Central High School, TN (United States)

    1994-12-31

    Within the realm of education, seldom does an innovative program become available with the potential to change an educator's teaching methodology. Adventures in Supercomputing (AiS), sponsored by the U.S. Department of Energy (DOE), is such a program. It is a program for high school teachers that changes the teacher paradigm from a teacher-directed approach of teaching to a student-centered approach. "A student-centered classroom offers better opportunities for development of internal motivation, planning skills, goal setting and perseverance than does the traditional teacher-directed mode". Not only is the process of teaching changed, but the cross-curricula integration within the AiS materials is remarkable. Written from a teacher's perspective, this paper will describe the AiS program and its effects on teachers and students, primarily at Wartburg Central High School, in Wartburg, Tennessee. The AiS program in Tennessee is sponsored by Oak Ridge National Laboratory (ORNL).

  3. Accelerating Science Impact through Big Data Workflow Management and Supercomputing

    Directory of Open Access Journals (Sweden)

    De K.

    2016-01-01

    Full Text Available The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. ATLAS, one of the largest collaborations ever assembled in the history of science, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment relies on a heterogeneous distributed computational infrastructure. To manage the workflow for all data processing on hundreds of data centers, the PanDA (Production and Distributed Analysis) Workload Management System is used. An ambitious program to expand PanDA to all available computing resources, including opportunistic use of commercial and academic clouds and Leadership Computing Facilities (LCF), is being realized within the BigPanDA and megaPanDA projects. These projects are now exploring how PanDA might be used for managing computing jobs that run on supercomputers, including OLCF's Titan and NRC-KI HPC2. The main idea is to reuse, as much as possible, existing components of the PanDA system that are already deployed on the LHC Grid for analysis of physics data. The next generation of PanDA will allow many data-intensive sciences employing a variety of computing platforms to benefit from ATLAS experience and proven tools in highly scalable processing.

  4. Symbolic simulation of engineering systems on a supercomputer

    International Nuclear Information System (INIS)

    Ragheb, M.; Gvillo, D.; Makowitz, H.

    1986-01-01

    Model-based production-rule systems for analysis are developed for the symbolic simulation of complex engineering systems on a CRAY X-MP supercomputer. The fault-tree and event-tree analysis methodologies from systems analysis are used for problem representation and are coupled to the rule-based system paradigm from knowledge engineering to provide modelling of engineering devices. Modelling is based on knowledge of the structure and function of the device rather than on human expertise alone. To implement the methodology, we developed a production-rule analysis system, HAL-1986, that uses both backward chaining and forward chaining. The inference engine uses an induction-deduction-oriented antecedent-consequent logic and is programmed in Portable Standard Lisp (PSL). The inference engine is general and can accommodate general modifications and additions to the knowledge base. The methodologies are demonstrated using a model for the identification of faults and subsequent recovery from abnormal situations in nuclear reactor safety analysis. The use of the exposed methodologies for the prognostication of future device responses under operational and accident conditions, using coupled symbolic and procedural programming, is discussed
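
    HAL-1986 was written in Portable Standard Lisp and is not reproduced here. The Python toy below only illustrates the forward-chaining half of the antecedent-consequent inference pattern; the fault-diagnosis rules and fact names are invented for the example.

```python
# Toy forward-chaining production-rule engine (illustrative only).
# Each rule maps a set of antecedent facts to a single consequent fact.
rules = [
    ({"low_coolant_flow", "high_core_temp"}, "possible_loss_of_cooling"),
    ({"possible_loss_of_cooling"}, "initiate_emergency_cooling"),
    ({"high_core_temp", "control_rods_stuck"}, "scram_failure"),
]

def forward_chain(facts, rules):
    """Repeatedly fire any rule whose antecedents all hold, until fixpoint."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedents, consequent in rules:
            if antecedents <= facts and consequent not in facts:
                facts.add(consequent)
                changed = True
    return facts

print(forward_chain({"low_coolant_flow", "high_core_temp"}, rules))
```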

  5. Micro-mechanical Simulations of Soils using Massively Parallel Supercomputers

    Directory of Open Access Journals (Sweden)

    David W. Washington

    2004-06-01

    Full Text Available In this research a computer program, Trubal version 1.51, based on the Discrete Element Method, was converted to run on a Connection Machine (CM-5), a massively parallel supercomputer with 512 nodes, to expedite the computation times of simulating geotechnical boundary value problems. The dynamic memory algorithm in the Trubal program did not perform efficiently on the CM-2 machine with its Single Instruction Multiple Data (SIMD) architecture. This was due to the communication overhead involving global array reductions, global array broadcasts and random data movement. Therefore, the dynamic memory algorithm in the Trubal program was converted to a static memory arrangement, and the program was successfully converted to run on CM-5 machines. The converted program was called "Trubal for Parallel Machines" (TPM). Simulating two physical triaxial experiments and comparing the simulation results with Trubal simulations validated the TPM program. With a 512-node CM-5 machine, TPM produced a nine-fold speedup, demonstrating the inherent parallelism within algorithms based on the Discrete Element Method.

  6. A visual analytics system for optimizing the performance of large-scale networks in supercomputing systems

    Directory of Open Access Journals (Sweden)

    Takanori Fujiwara

    2018-03-01

    Full Text Available The overall efficiency of an extreme-scale supercomputer largely relies on the performance of its network interconnects. Several state-of-the-art supercomputers use networks based on the increasingly popular Dragonfly topology. It is crucial to study the behavior and performance of different parallel applications running on Dragonfly networks in order to make optimal system configurations and design choices, such as job scheduling and routing strategies. However, to study this temporal network behavior, we need a tool to analyze and correlate numerous sets of multivariate time-series data collected from the Dragonfly's multi-level hierarchies. This paper presents such a tool: a visual analytics system that uses data from the Dragonfly network to investigate the temporal behavior and optimize the communication performance of a supercomputer. We couple interactive visualization with time-series analysis methods to help reveal hidden patterns in the network behavior with respect to different parallel applications and system configurations. Our system also provides multiple coordinated views for connecting behaviors observed at different levels of the network hierarchies, which effectively helps visual analysis tasks. We demonstrate the effectiveness of the system with a set of case studies. Our system and findings can help improve not only the communication performance of supercomputing applications, but also the network performance of next-generation supercomputers. Keywords: Supercomputing, Parallel communication network, Dragonfly networks, Time-series data, Performance analysis, Visual analytics

  7. 369 TFlop/s molecular dynamics simulations on the Roadrunner general-purpose heterogeneous supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Swaminarayan, Sriram [Los Alamos National Laboratory]; Germann, Timothy C [Los Alamos National Laboratory]; Kadau, Kai [Los Alamos National Laboratory]; Fossum, Gordon C [IBM Corporation]

    2008-01-01

    The authors present timing and performance numbers for a short-range parallel molecular dynamics (MD) code, SPaSM, that has been rewritten for the heterogeneous Roadrunner supercomputer. Each Roadrunner compute node consists of two AMD Opteron dual-core microprocessors and four PowerXCell 8i enhanced Cell microprocessors, so that there are four MPI ranks per node, each with one Opteron and one Cell. The interatomic forces are computed on the Cells (each with one PPU and eight SPU cores), while the Opterons are used to direct inter-rank communication and perform I/O-heavy periodic analysis, visualization, and checkpointing tasks. The performance measured for the initial implementation of a standard Lennard-Jones pair potential benchmark reached a peak of 369 Tflop/s double-precision floating-point performance on the full Roadrunner system (27.7% of peak), corresponding to 124 Mflop/s per watt at a price of approximately 3.69 Mflop/s per dollar. The authors demonstrate an initial target application, the jetting and ejection of material from a shocked surface.
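
    For reference, the benchmark's Lennard-Jones pair potential has the form V(r) = 4*eps*[(sigma/r)^12 - (sigma/r)^6]. A minimal serial force kernel is sketched below; this is illustrative only and not the SPaSM implementation, which uses cell lists and runs the force loop on the Cell SPUs:

        import numpy as np

        # Minimal serial Lennard-Jones pair-force kernel (illustrative only;
        # the production code uses cell lists and runs the force loop on the
        # Cell SPUs). Reduced units with epsilon = sigma = 1.
        def lj_forces(pos, cutoff=2.5):
            n = len(pos)
            forces = np.zeros_like(pos)
            for i in range(n - 1):
                rij = pos[i + 1:] - pos[i]          # vectors to later particles
                r2 = (rij ** 2).sum(axis=1)
                mask = r2 < cutoff ** 2
                inv6 = 1.0 / r2[mask] ** 3          # (sigma / r)^6
                fmag = (48.0 * inv6 ** 2 - 24.0 * inv6) / r2[mask]  # |F| / r
                fij = rij[mask] * fmag[:, None]     # force on the later particle
                forces[i] -= fij.sum(axis=0)        # Newton's third law
                forces[i + 1:][mask] += fij
            return forces

        pos = np.random.default_rng(1).random((64, 3)) * 4.0
        print(lj_forces(pos).sum(axis=0))           # ~0 by momentum conservation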

  8. Benchmarking Further Single Board Computers for Building a Mini Supercomputer for Simulation of Telecommunication Systems

    Directory of Open Access Journals (Sweden)

    Gábor Lencse

    2016-01-01

    Parallel Discrete Event Simulation (PDES) with the conservative synchronization method can be used efficiently for the performance analysis of telecommunication systems because of their good lookahead properties. For PDES, a cost-effective execution platform may be built using single board computers (SBCs), which offer relatively high computation capacity compared to their price and power consumption, and especially to the space they take up. A benchmarking method is proposed and its operation is demonstrated by benchmarking ten different SBCs, namely Banana Pi, Beaglebone Black, Cubieboard2, Odroid-C1+, Odroid-U3+, Odroid-XU3 Lite, Orange Pi Plus, Radxa Rock Lite, Raspberry Pi Model B+, and Raspberry Pi 2 Model B+. Their benchmarking results are compared to find out which one should be used for building a mini supercomputer for parallel discrete-event simulation of telecommunication systems. The SBCs are also used to build a heterogeneous cluster, and the performance of the cluster is tested as well.
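
    The benchmarking idea, timing an identical CPU-bound kernel on every candidate board and normalising the score by price and power draw, can be sketched as follows. The board metadata below are placeholders, not the paper's measurements, and the kernel is an invented stand-in for the simulation workload:

        import sys, time

        # Sketch of a per-board benchmark: run this script on each single board
        # computer, then normalise the measured score by the board's price and
        # power draw. The price/power figures are placeholders, not measurements.
        BOARD_INFO = {
            "raspberrypi2": (35.0, 4.0),   # (price in USD, typical watts), hypothetical
            "odroid-xu3l": (99.0, 10.0),
        }

        def kernel(n=50_000):
            """Deterministic CPU-bound workload (naive prime counting)."""
            return sum(1 for k in range(2, n)
                       if all(k % d for d in range(2, int(k ** 0.5) + 1)))

        board = sys.argv[1] if len(sys.argv) > 1 else "raspberrypi2"
        t0 = time.perf_counter()
        kernel()
        score = 1.0 / (time.perf_counter() - t0)   # runs per second, higher is better
        price, watts = BOARD_INFO[board]
        print(f"{board}: score={score:.3f} per-dollar={score / price:.4f} "
              f"per-watt={score / watts:.4f}")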

  9. Fast and Accurate Simulation of the Cray XMT Multithreaded Supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Villa, Oreste; Tumeo, Antonino; Secchi, Simone; Manzano Franco, Joseph B.

    2012-12-31

    Irregular applications, such as data mining and analysis or graph-based computations, show unpredictable memory/network access patterns and control structures. Highly multithreaded architectures with large processor counts, like the Cray MTA-1, MTA-2 and XMT, appear to address their requirements better than commodity clusters. However, research on highly multithreaded systems is currently limited by the lack of adequate architectural simulation infrastructures, due to issues such as the size of the machines, memory footprint, simulation speed, accuracy and customization. At the same time, Shared-memory MultiProcessors (SMPs) with multi-core processors have become an attractive platform for simulating large scale machines. In this paper, we introduce a cycle-level simulator of the highly multithreaded Cray XMT supercomputer. The simulator runs unmodified XMT applications. We discuss how we tackled the challenges posed by its development, detailing the techniques introduced to make the simulation as fast as possible while maintaining high accuracy. By mapping XMT processors (ThreadStorm with 128 hardware threads) to host computing cores, the simulation speed remains constant as the number of simulated processors increases, up to the number of available host cores. The simulator supports zero-overhead switching among different accuracy levels at run-time and includes a network model that takes contention into account. On a modern 48-core SMP host, our infrastructure simulates a large set of irregular applications 500 to 2000 times slower than real time when compared to a 128-processor XMT, while remaining within 10% accuracy. Emulation is only 25 to 200 times slower than real time.

  10. Effect of power-assisted hand-rim wheelchair propulsion on shoulder load in experienced wheelchair users : A pilot study with an instrumented wheelchair

    NARCIS (Netherlands)

    Kloosterman, Marieke G. M.; Buurke, Jaap H.; de Vries, Wiebe; Van der Woude, Lucas H. V.; Rietman, Johan S.

    2015-01-01

    This study aims to compare hand-rim and power-assisted hand-rim propulsion on potential risk factors for shoulder overuse injuries: intensity and repetition of shoulder loading and force generation in the extremes of shoulder motion. Eleven experienced hand-rim wheelchair users propelled an

  12. C.A.S.H. - a transient integrated plant model for a HTR-module power plant. User manual

    International Nuclear Information System (INIS)

    Biesenbach, R.; Lauer, A.; Struth, S.

    1997-07-01

    The computer code C.A.S.H. has been developed as an integrated plant model for the HTR-Module reactor, in order to treat safety-related questions about this type of power plant which require a detailed numerical simulation of the transient behaviour of the integrated plant. The present report contains the user manual for this plant model. It consists of three parts: in the first part, the code structure and functions, the course of the simulation calculations, and important code parts are described. The second part is devoted to practical application and explains extensively the handling of the complex code system with several sample calculations. These computing cases comprise load-follow transients and the shutdown procedure of the HTR-Module, and are presented and discussed with the full input data, job patterns, and numerous computer graphics. The third part contains the input manual of C.A.S.H. and is rather extensive, as it includes the complete inputs of several reactor component computer codes along with the control program of the integrated plant model. (orig./DG)

  13. Lawrence Livermore National Laboratory selects Intel Itanium 2 processors for world's most powerful Linux cluster

    CERN Multimedia

    2003-01-01

    "Intel Corporation, system manufacturer California Digital and the University of California at Lawrence Livermore National Laboratory (LLNL) today announced they are building one of the world's most powerful supercomputers. The supercomputer project, codenamed "Thunder," uses nearly 4,000 Intel® Itanium® 2 processors... is expected to be complete in January 2004" (1 page).

  14. EDF's experience with supercomputing and challenges ahead - towards multi-physics and multi-scale approaches

    Energy Technology Data Exchange (ETDEWEB)

    Delbecq, J.M.; Banner, D. [Electricite de France (EDF)- R and D Division, 92 - Clamart (France)

    2003-07-01

    Nuclear power plants are a major asset of the EDF company. For them to remain so, particularly in a context of deregulation, three conditions must be met: competitiveness, safety and public acceptance. These stakes apply both to existing plants and to future reactors. The purpose of the presentation is to explain how supercomputing can help EDF to satisfy these requirements. Three examples are described in detail: ensuring optimal use of nuclear fuel under wholly safe conditions, understanding and simulating material deterioration mechanisms, and advancing numerical simulation for the performance of EDF's activities. In conclusion, a broader vision of EDF's long-term R and D in the field of numerical simulation is given, focusing on five challenges taken up by EDF together with its industrial and scientific partners. (author)

  15. Performance Evaluation of an Intel Haswell- and Ivy Bridge-Based Supercomputer Using Scientific and Engineering Applications

    Science.gov (United States)

    Saini, Subhash; Hood, Robert T.; Chang, Johnny; Baron, John

    2016-01-01

    We present a performance evaluation conducted on a production supercomputer of the Intel Xeon Processor E5-2680v3, a twelve-core implementation of the fourth-generation Haswell architecture, and compare it with the Intel Xeon Processor E5-2680v2, an Ivy Bridge implementation of the third-generation Sandy Bridge architecture. Several new architectural features have been incorporated in Haswell, including improvements in all levels of the memory hierarchy as well as improvements to vector instructions and power management. We critically evaluate these new features of Haswell and compare with Ivy Bridge using several low-level benchmarks, including a subset of HPCC and HPCG, and four full-scale scientific and engineering applications. We also present a model that predicts the performance of HPCG and Cart3D to within 5% accuracy, and Overflow to within 10%.

  16. Integration Of PanDA Workload Management System With Supercomputers for ATLAS and Data Intensive Science

    Energy Technology Data Exchange (ETDEWEB)

    De, K [University of Texas at Arlington]; Jha, S [Rutgers University]; Klimentov, A [Brookhaven National Laboratory (BNL)]; Maeno, T [Brookhaven National Laboratory (BNL)]; Nilsson, P [Brookhaven National Laboratory (BNL)]; Oleynik, D [University of Texas at Arlington]; Panitkin, S [Brookhaven National Laboratory (BNL)]; Wells, Jack C [ORNL]; Wenaus, T [Brookhaven National Laboratory (BNL)]

    2016-01-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3 petaFLOPS, LHC data taking runs require more resources than Grid computing can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with supercomputers in the United States, Europe and Russia (in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF), the MIRA supercomputer at the Argonne Leadership Computing Facility (ALCF), the supercomputer at the National Research Center Kurchatov Institute, and IT4 in Ostrava, among others). The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on the LCFs' multi-core worker nodes. This implementation

  17. Guide to dataflow supercomputing basic concepts, case studies, and a detailed example

    CERN Document Server

    Milutinovic, Veljko; Trifunovic, Nemanja; Giorgi, Roberto

    2015-01-01

    This unique text/reference describes an exciting and novel approach to supercomputing in the DataFlow paradigm. The major advantages and applications of this approach are clearly described, and a detailed explanation of the programming model is provided using simple yet effective examples. The work is developed from a series of lecture courses taught by the authors in more than 40 universities across more than 20 countries, and from research carried out by Maxeler Technologies, Inc. Topics and features: presents a thorough introduction to DataFlow supercomputing for big data problems; revie

  18. Users’ Encounter With Normative Discourses on Facebook: A Three-Pronged Analysis of User Agency as Power Structure, Nexus, and Reception

    Directory of Open Access Journals (Sweden)

    David Mathieu

    2016-12-01

    This study asks whether users’ encounter with normative discourses of lifestyle, consumption, and health on social media such as Facebook gives rise to agency. The theoretical framework draws on reception analysis, for its implied but central interest in agency that lies at the intersection of texts and audiences. Based on a critique of the “participatory paradigm,” a paradigm that situates the locus of agency in the structural opposition between senders and users, in the norms of rational deliberation, or in the figure of the activist, gaps are identified which can be filled by adopting an explicit focus on the socio-cultural practices of ordinary audiences in their encounters with media discourses. The study investigates user agency on seven Facebook groups and pages with the help of a three-pronged perspective based on the notion of the media–audience relationship as (1) power structure, (2) nexus, and (3) reception. The analysis reveals that the structure at play on these Facebook groups and pages does not encourage user agency. However, user agency manifests itself through user interactions and expressive sense-making processes associated with reception. The benefits of such audience agency are a public, collective, and communicative sense-making process and an expansion of the professionally controlled text.

  19. Development of a graphical user interface for the TRAC plant/safety analysis code

    Energy Technology Data Exchange (ETDEWEB)

    Kelly, A.E.; Harkins, C.K.; Smith, R.J.

    1995-09-01

    A graphical user interface (GUI) for the Transient Reactor Analysis Code (TRAC) has been developed at Knolls Atomic Power Laboratory. This X Window-based GUI supports the design and analysis process, acting as a preprocessor, runtime editor, help system and postprocessor to TRAC-PF1/MOD2. TRAC was developed at the Los Alamos National Laboratory (LANL). The preprocessor is an icon-based interface which allows the user to create a TRAC model. When the model is complete, the runtime editor provides the capability to execute and monitor TRAC runs on the workstation or supercomputer. After runs are made, the output processor allows the user to extract and format data from the TRAC graphics file. The TRAC GUI is currently compatible with TRAC-PF1/MOD2 V5.3 and is available with documentation from George Niederauer, Section Leader of the Software Development Section, Group TSA-8, at LANL. Users may become functional in creating, running, and interpreting results from TRAC without having to know Unix commands or the detailed format of any of the data files. This reduces model development and debug time and increases quality control. Integration with post-processing and visualization tools increases engineering effectiveness.

  1. Parallel simulation of tsunami inundation on a large-scale supercomputer

    Science.gov (United States)

    Oishi, Y.; Imamura, F.; Sugawara, D.

    2013-12-01

    An accurate prediction of tsunami inundation is important for disaster mitigation purposes. One approach is to approximate the tsunami wave source through an instant inversion analysis using real-time observation data (e.g., Tsushima et al., 2009) and then use the resulting wave source data in an instant tsunami inundation simulation. However, a bottleneck of this approach is the large computational cost of the non-linear inundation simulation, and the computational power of recent massively parallel supercomputers can enable faster-than-real-time execution of a tsunami inundation simulation. Parallel computers have become approximately 1000 times faster in 10 years (www.top500.org), so very fast parallel computers are expected to become more and more prevalent in the near future. Therefore, it is important to investigate how to conduct a tsunami simulation efficiently on parallel computers. In this study, we are targeting very fast tsunami inundation simulations on the K computer, currently the fastest Japanese supercomputer, which has a theoretical peak performance of 11.2 PFLOPS. One computing node of the K computer consists of 1 CPU with 8 cores that share memory, and the nodes are connected through a high-performance torus-mesh network. The K computer is designed for distributed-memory parallel computation, so we have developed a parallel tsunami model. Our model is based on the TUNAMI-N2 model of Tohoku University, which uses a leap-frog finite difference method. A grid nesting scheme is employed to apply high-resolution grids only at the coastal regions. To balance the computation load of each CPU in the parallelization, CPUs are first allocated to each nested layer in proportion to the number of grid points of the nested layer. Using the CPUs allocated to each layer, 1-D domain decomposition is performed on each layer. In the parallel computation, three types of communication are necessary: (1) communication to adjacent neighbours for the
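
    The load-balancing rule described above, allocating CPUs to each nested layer in proportion to its grid-point count before the 1-D decomposition, can be sketched as follows (the layer sizes are invented for illustration):

        # Sketch of the described load balancing: allocate CPUs to each nested
        # grid layer in proportion to its grid-point count, then split each
        # layer into 1-D strips. Layer sizes are invented for illustration.
        def allocate_cpus(grid_points, total_cpus):
            total = sum(grid_points)
            alloc = [max(1, round(total_cpus * g / total)) for g in grid_points]
            while sum(alloc) > total_cpus:   # repair rounding so the sum is exact
                alloc[alloc.index(max(alloc))] -= 1
            while sum(alloc) < total_cpus:
                alloc[alloc.index(min(alloc))] += 1
            return alloc

        layers = [2400 * 1800, 1200 * 900, 600 * 450]  # grid points per layer
        rows = [1800, 900, 450]                        # rows available for strips
        cpus = allocate_cpus(layers, 64)
        print(cpus)                                    # -> [49, 12, 3]
        for n_cpu, n_rows in zip(cpus, rows):
            print(f"{n_rows} rows -> {n_cpu} strips of about {n_rows // n_cpu} rows")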

  2. Case-study of a user-driven prosthetic arm design: bionic hand versus customized body-powered technology in a highly demanding work environment.

    Science.gov (United States)

    Schweitzer, Wolf; Thali, Michael J; Egger, David

    2018-01-03

    Prosthetic arm research predominantly focuses on "bionic" rather than body-powered arms. However, any research orientation along user needs requires sufficiently precise workplace specifications and sufficiently hard testing. Forensic medicine is a demanding environment, also physically, even for non-disabled people, on several dimensions (e.g., distances, weights, size, temperature, time). As a unilateral below-elbow amputee user, the first author is in a unique position to provide a direct comparison of a "bionic" myoelectric iLimb Revolution (Touch Bionics) and a customized body-powered arm which contains a number of new developments initiated or developed by the user: (1) quick-lock steel wrist unit; (2) cable mount modification; (3) cast-shape-modeled shoulder anchor; (4) suspension with a soft double-layer liner (Ohio Willowwood) and tube gauze (Molnlycke) combination. The iLimb is mounted on an epoxy socket; a lanyard-fixed liner (Ohio Willowwood) contains magnetic electrodes (Liberating Technologies). On-the-job usage over five years was supplemented with dedicated and focused intensive two-week use tests at work for both systems. The side-by-side comparison showed that the customized body-powered arm provides reliable, comfortable, effective, powerful as well as subtle service with minimal maintenance; most notably, grip reliability, grip force regulation, grip performance, center of balance, component wear-down, sweat/temperature independence and skin state are good, whereas the iLimb system exhibited a number of relevant serious constraints. Research and development of functional prostheses may want to focus on body-powered technology, as it already performs well on manually demanding and heavy jobs, whereas eliminating myoelectric technology's constraints seems out of reach. Relevant testing could be developed to help expedite this. This is relevant as Swiss disability insurance specifically supports prostheses that enable actual work integration. Myoelectric and

  3. Argonne National Lab deploys Force10 networks' massively dense ethernet switch for supercomputing cluster

    CERN Multimedia

    2003-01-01

    "Force10 Networks, Inc. today announced that Argonne National Laboratory (Argonne, IL) has successfully deployed Force10 E-Series switch/routers to connect to the TeraGrid, the world's largest supercomputing grid, sponsored by the National Science Foundation (NSF)" (1/2 page).

  4. Design and performance characterization of electronic structure calculations on massively parallel supercomputers

    DEFF Research Database (Denmark)

    Romero, N. A.; Glinsvad, Christian; Larsen, Ask Hjorth

    2013-01-01

    Density functional theory (DFT) is the most widely employed electronic structure method because of its favorable scaling with system size and accuracy for a broad range of molecular and condensed-phase systems. The advent of massively parallel supercomputers has enhanced the scientific community

  5. An efficient implementation of a backpropagation learning algorithm on quadrics parallel supercomputer

    International Nuclear Information System (INIS)

    Taraglio, S.; Massaioli, F.

    1995-08-01

    A parallel implementation of a library to build and train Multi-Layer Perceptrons via the Back-Propagation algorithm is presented. The target machine is the SIMD massively parallel supercomputer Quadrics. Performance measures are provided on three different machines with different numbers of processors, for two network examples. A sample source code is given.
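
    A minimal serial version of the back-propagation update for a one-hidden-layer perceptron, the computation that such a library distributes across the SIMD processor grid, is sketched below; the network size and the XOR training task are invented for illustration:

        import numpy as np

        # Minimal serial back-propagation for a one-hidden-layer perceptron;
        # a parallel library distributes exactly these matrix operations across
        # the SIMD processor grid. Trained on XOR purely for illustration.
        rng = np.random.default_rng(0)
        X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
        y = np.array([[0.], [1.], [1.], [0.]])
        W1, b1 = rng.normal(0.0, 1.0, (2, 8)), np.zeros(8)
        W2, b2 = rng.normal(0.0, 1.0, (8, 1)), np.zeros(1)
        sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

        for _ in range(10000):
            h = sigmoid(X @ W1 + b1)               # forward pass, hidden layer
            out = sigmoid(h @ W2 + b2)             # forward pass, output layer
            d_out = (out - y) * out * (1.0 - out)  # output delta, squared error
            d_h = (d_out @ W2.T) * h * (1.0 - h)   # back-propagated hidden delta
            W2 -= 0.5 * (h.T @ d_out)              # gradient-descent updates
            b2 -= 0.5 * d_out.sum(axis=0)
            W1 -= 0.5 * (X.T @ d_h)
            b1 -= 0.5 * d_h.sum(axis=0)

        print(out.round(2))                        # approaches [[0], [1], [1], [0]]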

  6. Visualization environment of the large-scale data of JAEA's supercomputer system

    Energy Technology Data Exchange (ETDEWEB)

    Sakamoto, Kensaku [Japan Atomic Energy Agency, Center for Computational Science and e-Systems, Tokai, Ibaraki (Japan)]; Hoshi, Yoshiyuki [Research Organization for Information Science and Technology (RIST), Tokai, Ibaraki (Japan)]

    2013-11-15

    In research and development across the various fields of nuclear energy, visualization of calculated data is especially useful for understanding simulation results in an intuitive way. Many researchers who run simulations on the supercomputer at the Japan Atomic Energy Agency (JAEA) are used to transferring calculated data files from the supercomputer to their local PCs for visualization. In recent years, as calculated data have grown larger with improvements in supercomputer performance, reducing visualization processing time as well as using the JAEA network efficiently has become necessary. As a solution, we introduced a remote visualization system which can utilize parallel processors on the supercomputer and reduce network resource usage by transferring data from an intermediate stage of the visualization process. This paper reports a study on the performance of image processing with the remote visualization system. The visualization processing time is measured, and the influence of network speed is evaluated by varying the drawing mode, the size of the visualization data and the number of processors. Based on this study, a guideline for using the remote visualization system is provided to show how the system can be used effectively. An upgrade policy for the next system is also given. (author)

  7. Accessing Wind Tunnels From NASA's Information Power Grid

    Science.gov (United States)

    Becker, Jeff; Biegel, Bryan (Technical Monitor)

    2002-01-01

    The NASA Ames wind tunnel customers are among the first users of the Information Power Grid (IPG) storage system at the NASA Advanced Supercomputing Division. We wanted to be able to store their data on the IPG so that it could be accessed remotely in a secure but timely fashion. In addition, incorporation into the IPG allows future use of grid computational resources, e.g., for post-processing of data or for side-by-side CFD validation. In this paper, we describe the integration of grid data access mechanisms with the existing DARWIN web-based system that is used to access wind tunnel test data. We also show that the combined system has reasonable performance: wind tunnel data may be retrieved at 50 Mbit/s over a 100BASE-T network connected to the IPG storage server.

  8. CONHOR. Code system for determination of power distribution and burnup for the HOR reactor. Version 1.0. User's manual

    Energy Technology Data Exchange (ETDEWEB)

    Serov, I V; Hoogenboom, J E

    1993-07-01

    The main calculational tool is the CITATION code. CITATION is used for both static and burnup calculations. The pointwise flux density and power distributions obtained from these calculations are used to obtain the values of the desired quantities at the beginning of a burnup cycle. To obtain the most trustworthy values of the desired quantities, CONHOR employs experimental information together with the CITATION-calculated flux distributions. Axially averaged foil activation rates are obtained from both the CITATION pointwise flux density distributions and the measured foil activity counts. These two sets of activation rates are called the distributions of auxiliary quantities and are compared with each other in order to derive corrections to the U-235 number densities in fuel-containing elements. The methodical corrections to the calculational auxiliary quantities are obtained on this basis as well; they are used to obtain the methodical corrections to the desired quantities. The corrected desired quantities are the recommended ones. The correction procedure requires knowledge of the sensitivity coefficients of the average foil activation rates with respect to the U-235 number densities (throughout this manual, U-235 is also denoted, especially in the input-output description sections, as a BUrning-COrrected material, or `BuCo` material). These sensitivity coefficients are calculated by the CONHOR SENS module; CITATION is employed to perform the calculations with perturbed values of the U-235 number densities. Burnup calculations can be performed based on either corrected or uncorrected U-235 number densities. Throughout this manual, XXXX denotes a 4-symbol identification of the burnup cycle to be studied; XX-1 and XX+1 denote the previous and following cycles, respectively. (orig./HP).

  9. The ASCI Network for SC '99: A Step on the Path to a 100 Gigabit Per Second Supercomputing Network

    Energy Technology Data Exchange (ETDEWEB)

    PRATT, THOMAS J.; TARMAN, THOMAS D.; MARTINEZ, LUIS M.; MILLER, MARC M.; ADAMS, ROGER L.; CHEN, HELEN Y.; BRANDT, JAMES M.; WYCKOFF, PETER S.

    2000-07-24

    This document highlights the DISCOM² Distance Computing and Communication team's activities at the 1999 Supercomputing conference in Portland, Oregon. This conference is sponsored by the IEEE and ACM. Sandia, Lawrence Livermore and Los Alamos National Laboratories have participated in this conference for eleven years. For the last four years the three laboratories have come together at the conference under the rubric of the DOE's Accelerated Strategic Computing Initiative (ASCI). Communication support for the ASCI exhibit is provided by the ASCI DISCOM² project. The DISCOM² communication team uses this forum to demonstrate and focus communication and networking developments within the community. At SC 99, DISCOM built a prototype of the next-generation ASCI network, demonstrated remote clustering techniques, demonstrated the capabilities of emerging terabit router products, demonstrated the latest technologies for delivering visualization data to scientific users, and demonstrated the latest in encryption methods, including IP VPN technologies and ATM encryption research. The authors also coordinated the other production networking activities within the booth and between their demonstration partners on the exhibit floor. This paper documents those accomplishments, discusses the details of their implementation, and describes how these demonstrations support Sandia's overall strategies in ASCI networking.

  10. Combining density functional theory calculations, supercomputing, and data-driven methods to design new materials (Conference Presentation)

    Science.gov (United States)

    Jain, Anubhav

    2017-04-01

    Density functional theory (DFT) simulations solve for the electronic structure of materials starting from the Schrödinger equation. Many case studies have now demonstrated that researchers can often use DFT to design new compounds in the computer (e.g., for batteries, catalysts, and hydrogen storage) before synthesis and characterization in the lab. In this talk, I will focus on how DFT calculations can be executed on large supercomputing resources in order to generate very large data sets on new materials for functional applications. First, I will briefly describe the Materials Project, an effort at LBNL that has virtually characterized over 60,000 materials using DFT and has shared the results with over 17,000 registered users. Next, I will talk about how such data can help discover new materials, describing how preliminary computational screening led to the identification and confirmation of a new family of bulk AMX2 thermoelectric compounds with measured zT reaching 0.8. I will outline future plans for how such data-driven methods can be used to better understand the factors that control thermoelectric behavior, e.g., for the rational design of electronic band structures, in ways that are different from conventional approaches.

  11. Documentation of and satisfaction with the service delivery process of electric powered scooters among adult users in different national contexts

    DEFF Research Database (Denmark)

    Sund, Terje; Iwarsson, Susanne; Andersen, Mette C

    2013-01-01

    -up design based on a consecutive inclusion of 50 Danish and 86 Norwegian adults as they were about to be provided a scooter. A study-specific structured questionnaire for documentation of the SDP was administered. The Satisfaction with Assistive Technology Services was used for documenting user satisfaction...

  12. Service user involvement enhanced the research quality in a study using interpretative phenomenological analysis - the power of multiple perspectives.

    Science.gov (United States)

    Mjøsund, Nina Helen; Eriksson, Monica; Espnes, Geir Arild; Haaland-Øverby, Mette; Jensen, Sven Liang; Norheim, Irene; Kjus, Solveig Helene Høymork; Portaasen, Inger-Lill; Vinje, Hege Forbech

    2017-01-01

    The aim of this study was to examine how service user involvement can contribute to the development of interpretative phenomenological analysis methodology and enhance research quality. Interpretative phenomenological analysis is a qualitative methodology used in nursing research internationally to understand human experiences that are essential to the participants. Service user involvement is requested in nursing research. We share experiences from 4 years of collaboration (2012-2015) on a mental health promotion project, which involved an advisory team. Five research advisors either with a diagnosis or related to a person with severe mental illness constituted the team. They collaborated with the research fellow throughout the entire research process and have co-authored this article. We examined the joint process of analysing the empirical data from interviews. Our analytical discussions were audiotaped, transcribed and subsequently interpreted following the guidelines for good qualitative analysis in interpretative phenomenological analysis studies. The advisory team became 'the researcher's helping hand'. Multiple perspectives influenced the qualitative analysis, which gave more insightful interpretations of nuances, complexity, richness or ambiguity in the interviewed participants' accounts. The outcome of the service user involvement was increased breadth and depth in findings. Service user involvement improved the research quality in a nursing research project on mental health promotion. The interpretative element of interpretative phenomenological analysis was enhanced by the emergence of multiple perspectives in the qualitative analysis of the empirical data. We argue that service user involvement and interpretative phenomenological analysis methodology can mutually reinforce each other and strengthen qualitative methodology. © 2016 The Authors. Journal of Advanced Nursing Published by John Wiley & Sons Ltd.

  13. Simulation and experimental studies of operators' decision styles and crew composition while using an ecological and traditional user interface for the control room of a nuclear power plant

    International Nuclear Information System (INIS)

    Meshkati, N.; Buller, B.J.; Azadeh, M.A.

    1994-01-01

    A traditional human factors (i.e., microergonomic) approach to complex human-machine systems is concerned only with improving the workstation (user interface) design. This approach, by ignoring the importance of integrating the user interface with job and organizational design, results in systems which lead, at best, only to sub-optimization and are therefore inherently error- and failure-prone. Such systems, when eventually faced with the concentration of certain fault events, will suffer from this "resident pathogen" and, as such, are doomed to failure. Also, when complex technological systems, such as nuclear power plants, move from routine to non-routine (normal to emergency) operation, the controlling operators need to dynamically match the system's new requirements. This mandates integrated and harmonious changes in information presentation (display), changes in (job) performance requirements, in part because of the operators' inevitable involuntary transition to different levels of cognitive control, and reconfigurations of the operators' team (organizational) structure and communication. It is also demonstrated that the skill, rule, and knowledge (SRK) model, developed by Rasmussen, is a high-potential and powerful framework that could be utilized for the proposed integration. The objective of this research was threefold: (1) using the SRK model, to develop an integrated information processing conceptual framework (for integration of workstation, job, and team design); (2) to evaluate the user interface component of this framework, the ecological display; and (3) to analyze the effect of operators' individual information processing behavior and decision styles on handling plant disturbances, on their performance, and on their preference for traditional versus ecological user interfaces.

  14. Efficient multitasking of Choleski matrix factorization on CRAY supercomputers

    Science.gov (United States)

    Overman, Andrea L.; Poole, Eugene L.

    1991-01-01

    A Choleski method is described and used to solve linear systems of equations that arise in large scale structural analysis. The method uses a novel variable-band storage scheme and is structured to exploit fast local memory caches while minimizing data access delays between main memory and vector registers. Several parallel implementations of this method are described for the CRAY-2 and CRAY Y-MP computers demonstrating the use of microtasking and autotasking directives. A portable parallel language, FORCE, is used for comparison with the microtasked and autotasked implementations. Results are presented comparing the matrix factorization times for three representative structural analysis problems from runs made in both dedicated and multi-user modes on both computers. CPU and wall clock timings are given for the parallel implementations and are compared to single processor timings of the same algorithm.
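
    The underlying factorization computes a lower-triangular L with A = L*L^T, column by column. A dense serial sketch, without the paper's variable-band storage or CRAY multitasking, is:

        import numpy as np

        # Dense serial Choleski factorization, A = L @ L.T. The paper's version
        # adds variable-band storage and CRAY multitasking; this sketch shows
        # only the underlying numerical recurrence.
        def choleski(A):
            n = A.shape[0]
            L = np.zeros_like(A, dtype=float)
            for j in range(n):
                # diagonal entry: subtract contributions of previous columns
                L[j, j] = np.sqrt(A[j, j] - L[j, :j] @ L[j, :j])
                # entries below the diagonal in column j
                L[j + 1:, j] = (A[j + 1:, j] - L[j + 1:, :j] @ L[j, :j]) / L[j, j]
            return L

        A = np.array([[4., 2., 0.],
                      [2., 5., 3.],
                      [0., 3., 6.]])      # symmetric positive definite
        L = choleski(A)
        print(np.allclose(L @ L.T, A))    # True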

  15. Device-Training for Individuals with Thoracic and Lumbar Spinal Cord Injury Using a Powered Exoskeleton for Technically Assisted Mobility: Achievements and User Satisfaction

    Science.gov (United States)

    Gillner, Annett; Borgwaldt, Nicole; Kroll, Sylvia; Roschka, Sybille

    2016-01-01

    Objective. Results of a device-training for nonambulatory individuals with thoracic and lumbar spinal cord injury (SCI) using a powered exoskeleton for technically assisted mobility with regard to the achieved level of control of the system after training, user satisfaction, and effects on quality of life (QoL). Methods. Observational single centre study with a 4-week to 5-week intensive inpatient device-training using a powered exoskeleton (ReWalk™). Results. All 7 individuals with SCI who commenced the device-training completed the course of training and achieved basic competences to use the system, that is, the ability to stand up, sit down, keep balance while standing, and walk indoors, at least with a close contact guard. User satisfaction with the system and device-training was documented for several aspects. The quality of life evaluation (SF-12v2™) indicated that the use of the powered exoskeleton can have positive effects on the perception of individuals with SCI regarding what they can achieve physically. Few adverse events were observed: minor skin lesions and irritations were observed; no falls occurred. Conclusions. The device-training for individuals with thoracic and lumbar SCI was effective and safe. All trained individuals achieved technically assisted mobility with the exoskeleton while still needing a close contact guard. PMID:27610382

  17. Integration Of PanDA Workload Management System With Supercomputers for ATLAS and Data Intensive Science

    Science.gov (United States)

    Klimentov, A.; De, K.; Jha, S.; Maeno, T.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Wells, J.; Wenaus, T.

    2016-10-01

    The LHC, operating at CERN, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3 petaFLOPS, LHC data taking runs require more resources than the grid can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with supercomputers in the United States, in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility. The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on the LCFs' multi-core worker nodes. This implementation was tested with a variety of Monte-Carlo workloads on several supercomputing platforms for the ALICE and ATLAS experiments and has been in full production for ATLAS since September 2015. We will present our current accomplishments with running PanDA at supercomputers and demonstrate our ability to use PanDA as a portal independent of the

  19. Scalable geocomputation: evolving an environmental model building platform from single-core to supercomputers

    Science.gov (United States)

    Schmitz, Oliver; de Jong, Kor; Karssenberg, Derek

    2017-04-01

    There is an increasing demand to run environmental models on a big scale: simulations over large areas at high resolution. The heterogeneity of available computing hardware, such as multi-core CPUs, GPUs or supercomputers, potentially provides significant computing power to fulfil this demand. However, this requires detailed knowledge of the underlying hardware, parallel algorithm design and the implementation thereof in an efficient system programming language. Domain scientists such as hydrologists or ecologists often lack this specific software engineering knowledge; their emphasis is (and should be) on exploratory building and analysis of simulation models. As a result, models constructed by domain specialists mostly do not take full advantage of the available hardware. A promising solution is to separate the model building activity from software engineering by offering domain specialists a model building framework with pre-programmed building blocks that they combine to construct a model. The model building framework, consequently, needs built-in capabilities to make full use of the available hardware. Developing such a framework, providing understandable code for domain scientists while being runtime efficient at the same time, poses several challenges for its developers. For example, optimisations can be performed on individual operations or on the whole model, and tasks need to be generated for a well-balanced execution without explicitly knowing the complexity of the domain problem provided by the modeller. Ideally, a modelling framework supports the optimal use of available hardware whichever combination of model building blocks scientists use. We present our ongoing work on developing parallel algorithms for spatio-temporal modelling and demonstrate 1) PCRaster, an environmental software framework (http://www.pcraster.eu) providing spatio-temporal model building blocks and 2) parallelisation of about 50 of these building blocks using

  20. Systemwide Power Management with Argo

    Energy Technology Data Exchange (ETDEWEB)

    Ellsworth, Daniel; Patki, Tapasya; Perarnau, Swann; Seo, Sangmin; Yoshii, Kazutomo; Hoffmann, Henry; Schulz, Martin; Beckman, Pete

    2016-05-23

    The Argo project is a DOE initiative for designing a modular operating system/runtime for the next generation of supercomputers. A key focus area in this project is power management, which is one of the main challenges on the path to exascale. In this paper, we discuss ideas for systemwide power management in the Argo project. We present a hierarchical and scalable approach to maintain a power bound at scale, and we highlight some early results.
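
    A hierarchical power bound of this kind can be pictured as a tree of budgets: a global bound is divided among racks, and each rack divides its share among its nodes. The sketch below uses an invented max-min fair division policy and invented numbers; it is not the Argo implementation:

        # Toy sketch of a hierarchical power bound (policy and numbers invented,
        # not the Argo implementation): a global budget is divided among racks,
        # and each rack divides its share among its nodes, max-min fairly.
        def divide(budget, demands):
            """Grant each child min(demand, fair share); recycle the leftovers."""
            grants = [0.0] * len(demands)
            remaining, pending = budget, list(range(len(demands)))
            while pending and remaining > 1e-9:
                share = remaining / len(pending)
                satisfied = [i for i in pending if demands[i] <= share]
                if not satisfied:            # every pending child is capped
                    for i in pending:
                        grants[i] = share
                    return grants
                for i in satisfied:          # grant full demand, recycle the rest
                    grants[i] = demands[i]
                    remaining -= demands[i]
                    pending.remove(i)
            return grants

        rack_demands = [[290.0, 310.0, 150.0], [400.0, 120.0]]  # node watts
        rack_budgets = divide(800.0, [sum(r) for r in rack_demands])
        for budget, nodes in zip(rack_budgets, rack_demands):
            print(f"rack budget {budget:.1f} W -> node grants {divide(budget, nodes)}")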

  1. End user reliability assessment of 1.2-1.7 kV commercial SiC MOSFET power modules

    DEFF Research Database (Denmark)

    Ionita, Claudiu; Nawaz, Muhammad

    2017-01-01

    This paper is a first attempt to offer reliability evaluation of full SiC power modules where several dies are connected in parallel to increase power rating capability. Here, five different power modules with voltage rating from 1.2-1.7 kV and current rating from 120-800 A from three vendors hav......, which is connected in parallel with the MOSFET chip. For another module, there has also been recorded a failure of the gate oxide during H3TRB....

  2. Supercomputer and cluster performance modeling and analysis efforts:2004-2006.

    Energy Technology Data Exchange (ETDEWEB)

    Sturtevant, Judith E.; Ganti, Anand; Meyer, Harold (Hal) Edward; Stevenson, Joel O.; Benner, Robert E., Jr.; Goudy, Susan Phelps; Doerfler, Douglas W.; Domino, Stefan Paul; Taylor, Mark A.; Malins, Robert Joseph; Scott, Ryan T.; Barnette, Daniel Wayne; Rajan, Mahesh; Ang, James Alfred; Black, Amalia Rebecca; Laub, Thomas William; Vaughan, Courtenay Thomas; Franke, Brian Claude

    2007-02-01

    This report describes efforts by the Performance Modeling and Analysis Team to investigate performance characteristics of Sandia's engineering and scientific applications on the ASC capability and advanced architecture supercomputers, and Sandia's capacity Linux clusters. Efforts to model various aspects of these computers are also discussed. The goals of these efforts are to quantify and compare Sandia's supercomputer and cluster performance characteristics; to reveal strengths and weaknesses in such systems; and to predict performance characteristics of, and provide guidelines for, future acquisitions and follow-on systems. Described herein are the results obtained from running benchmarks and applications to extract performance characteristics and comparisons, as well as modeling efforts, obtained during the time period 2004-2006. The format of the report, with hypertext links to numerous additional documents, purposefully minimizes the document size needed to disseminate the extensive results from our research.

  3. Direct exploitation of a top 500 Supercomputer for Analysis of CMS Data

    International Nuclear Information System (INIS)

    Cabrillo, I; Cabellos, L; Marco, J; Fernandez, J; Gonzalez, I

    2014-01-01

    The Altamira Supercomputer hosted at the Instituto de Fisica de Cantabria (IFCA) entered into operation in summer 2012. Its last-generation FDR Infiniband network, used for message passing in parallel jobs, also supports the connection to General Parallel File System (GPFS) servers, enabling efficient simultaneous processing of multiple data-demanding jobs. Sharing a common GPFS system and a single LDAP-based identification with the existing Grid clusters at IFCA allows CMS researchers to exploit the large instantaneous capacity of this supercomputer to execute analysis jobs. The detailed experience describing this opportunistic use for skimming and final analysis of CMS 2012 data for a specific physics channel, resulting in an order-of-magnitude reduction of the waiting time, is presented.

  4. ParaBTM: A Parallel Processing Framework for Biomedical Text Mining on Supercomputers.

    Science.gov (United States)

    Xing, Yuting; Wu, Chengkun; Yang, Xi; Wang, Wei; Zhu, En; Yin, Jianping

    2018-04-27

    A prevailing way of extracting valuable information from biomedical literature is to apply text mining methods on unstructured texts. However, the massive amount of literature that needs to be analyzed poses a big data challenge to the processing efficiency of text mining. In this paper, we address this challenge by introducing parallel processing on a supercomputer. We developed paraBTM, a runnable framework that enables parallel text mining on the Tianhe-2 supercomputer. It employs a low-cost yet effective load balancing strategy to maximize the efficiency of parallel processing. We evaluated the performance of paraBTM on several datasets, utilizing three types of named entity recognition tasks as demonstration. Results show that, in most cases, the processing efficiency can be greatly improved with parallel processing, and the proposed load balancing strategy is simple and effective. In addition, our framework can be readily applied to other tasks of biomedical text mining besides NER.
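
    A low-cost load-balancing strategy of the kind described can be sketched as a longest-processing-time heuristic: sort documents by size and always hand the next-largest one to the least-loaded worker. The document sizes below are invented, and this is not necessarily the exact policy paraBTM employs:

        import heapq

        # Sketch of a low-cost static load balancer: sort documents by size and
        # always assign the next-largest one to the least-loaded worker (the
        # classic longest-processing-time heuristic). Sizes are invented, and
        # this is not necessarily the exact strategy paraBTM employs.
        def balance(doc_sizes, n_workers):
            heap = [(0, w, []) for w in range(n_workers)]  # (load, id, docs)
            heapq.heapify(heap)
            for size in sorted(doc_sizes, reverse=True):
                load, w, docs = heapq.heappop(heap)        # least-loaded worker
                docs.append(size)
                heapq.heappush(heap, (load + size, w, docs))
            return sorted(heap, key=lambda item: item[1])

        for load, w, docs in balance([90, 70, 65, 40, 30, 30, 20, 5], 3):
            print(f"worker {w}: load={load:3d}  docs={docs}")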

  5. Enabling Diverse Software Stacks on Supercomputers using High Performance Virtual Clusters.

    Energy Technology Data Exchange (ETDEWEB)

    Younge, Andrew J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)]; Pedretti, Kevin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)]; Grant, Ryan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)]; Brightwell, Ron [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)]

    2017-05-01

    While large-scale simulations have been the hallmark of the High Performance Computing (HPC) community for decades, Large Scale Data Analytics (LSDA) workloads are gaining attention within the scientific community not only as a processing component of large HPC simulations, but also as standalone scientific tools for knowledge discovery. On the path towards Exascale, new HPC runtime systems are also emerging in a way that differs from classical distributed computing models. However, system software for such capabilities on the latest extreme-scale DOE supercomputers needs to be enhanced to more appropriately support these types of emerging software ecosystems. In this paper, we propose the use of Virtual Clusters on advanced supercomputing resources to enable systems to support not only HPC workloads, but also emerging big data stacks. Specifically, we have deployed the KVM hypervisor within Cray's Compute Node Linux on an XC-series supercomputer testbed. We also use libvirt and QEMU to manage and provision VMs directly on compute nodes, leveraging Ethernet-over-Aries network emulation. To our knowledge, this is the first known use of KVM on a true MPP supercomputer. We investigate the overhead of our solution using HPC benchmarks, evaluating both single-node performance and weak scaling of a 32-node virtual cluster. Overall, we find that single-node performance of our solution using KVM on a Cray is very efficient, with near-native performance. However, overhead increases by up to 20% as virtual cluster size increases, due to limitations of the Ethernet-over-Aries bridged network. Furthermore, we deploy Apache Spark with large data analysis workloads in a Virtual Cluster, effectively demonstrating how diverse software ecosystems can be supported by High Performance Virtual Clusters.

  6. Heat dissipation computations of a HVDC ground electrode using a supercomputer

    International Nuclear Information System (INIS)

    Greiss, H.; Mukhedkar, D.; Lagace, P.J.

    1990-01-01

    This paper reports on the temperature of the soil surrounding a High Voltage Direct Current (HVDC) toroidal ground electrode of practical dimensions, in both homogeneous and non-homogeneous soils, which was computed at incremental points in time using finite difference methods on a supercomputer. Curves of the response were computed and plotted at several locations within the soil in the vicinity of the ground electrode for various values of the soil parameters.
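
    The finite difference scheme involved is the standard explicit update for the heat equation, dT/dt = alpha * laplacian(T). A 2-D serial sketch follows; the soil properties and geometry are invented for illustration:

        import numpy as np

        # Explicit finite-difference update for 2-D heat conduction around a
        # buried heat source, loosely analogous to the electrode problem; the
        # soil properties and geometry below are invented for illustration.
        N, DX, ALPHA = 100, 0.5, 5e-7          # grid size, metres, m^2/s
        DT = 0.2 * DX * DX / ALPHA             # stable: dt <= dx^2 / (4 alpha)
        T = np.full((N, N), 10.0)              # initial soil temperature, deg C

        def step(T):
            Tn = T.copy()
            # five-point Laplacian on interior points
            Tn[1:-1, 1:-1] += ALPHA * DT / DX ** 2 * (
                T[2:, 1:-1] + T[:-2, 1:-1] + T[1:-1, 2:] + T[1:-1, :-2]
                - 4.0 * T[1:-1, 1:-1])
            Tn[45:55, 45:55] = 60.0            # electrode region held at 60 deg C
            Tn[0, :] = Tn[-1, :] = Tn[:, 0] = Tn[:, -1] = 10.0  # far field
            return Tn

        for _ in range(500):
            T = step(T)
        print(T[50, 40:60].round(1))           # profile through the hot region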

  7. Design of multiple sequence alignment algorithms on parallel, distributed memory supercomputers.

    Science.gov (United States)

    Church, Philip C; Goscinski, Andrzej; Holt, Kathryn; Inouye, Michael; Ghoting, Amol; Makarychev, Konstantin; Reumann, Matthias

    2011-01-01

    The challenge of comparing two or more genomes that have undergone recombination and substantial amounts of segmental loss and gain has recently been addressed for small numbers of genomes. However, datasets of hundreds of genomes are now common, and their sizes will only increase in the future. Multiple sequence alignment of hundreds of genomes remains an intractable problem due to quadratic increases in compute time and memory footprint. To date, most alignment algorithms are designed for commodity clusters without parallelism. Hence, we propose the design of a multiple sequence alignment algorithm on massively parallel, distributed memory supercomputers to enable research into comparative genomics on large data sets. Following the methodology of the sequential progressiveMauve algorithm, we design data structures including sequences and sorted k-mer lists on the IBM Blue Gene/P supercomputer (BG/P). Preliminary results show that we can reduce the memory footprint so that we can potentially align over 250 bacterial genomes on a single BG/P compute node. We verify our results on a dataset of E. coli, Shigella and S. pneumoniae genomes. Our implementation returns results matching those of the original algorithm but in 1/2 the time and with 1/4 the memory footprint for scaffold building. In this study, we have laid the basis for multiple sequence alignment of large-scale datasets on a massively parallel, distributed memory supercomputer, thus enabling comparison of hundreds instead of a few genome sequences within reasonable time.
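
    The sorted k-mer lists mentioned above can be built directly from a sequence, and shared k-mers between two sequences can then be found by merging the sorted lists. The sketch below uses toy sequences and is not the progressiveMauve data structure itself:

        # Minimal sketch of the sorted k-mer list idea: extract every k-mer with
        # its offset, sort lexicographically, then find k-mers shared between two
        # sequences by merging the sorted lists. Toy data, not the actual
        # progressiveMauve data structures.
        def kmer_list(seq, k):
            return sorted((seq[i:i + k], i) for i in range(len(seq) - k + 1))

        def shared_kmers(a, b, k):
            la, lb, i, j, hits = kmer_list(a, k), kmer_list(b, k), 0, 0, []
            while i < len(la) and j < len(lb):
                if la[i][0] == lb[j][0]:
                    hits.append((la[i][0], la[i][1], lb[j][1]))  # (kmer, pos_a, pos_b)
                    i += 1
                elif la[i][0] < lb[j][0]:
                    i += 1
                else:
                    j += 1
            return hits

        print(shared_kmers("ACGTACGGA", "TTACGTA", 4))
        # -> [('ACGT', 0, 2), ('CGTA', 1, 3), ('TACG', 3, 1)]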

  8. Study of ATLAS TRT performance with GRID and supercomputers.

    CERN Document Server

    Krasnopevtsev, Dimitriy; The ATLAS collaboration; Mashinistov, Ruslan; Belyaev, Nikita; Ryabinkin, Evgeny

    2015-01-01

    After the early success in discovering a new particle consistent with the long-awaited Higgs boson, the Large Hadron Collider experiments are ready for the precision measurements and further discoveries that will be made possible by much higher LHC collision rates from spring 2015. A proper understanding of detector performance at high occupancy conditions is important for many ongoing physics analyses. The ATLAS Transition Radiation Tracker (TRT) is one of these detectors. The TRT is a large straw tube tracking system that is the outermost of the three subsystems of the ATLAS Inner Detector (ID). The TRT contributes significantly to the resolution for high-pT tracks in the ID, providing excellent particle identification capabilities and electron-pion separation. The ATLAS experiment uses the Worldwide LHC Computing Grid (WLCG). WLCG is a global collaboration of computer centers and provides seamless access to computing resources, which include data storage capacity, processing power, sensors, visualisation tools and more. WLCG...

  10. Differences in participation based on self-esteem in power and manual wheelchair users on a university campus: a pilot study.

    Science.gov (United States)

    Rice, Ian M; Wong, Alex W K; Salentine, Benjamin A; Rice, Laura A

    2015-03-01

    To examine the relationship of self-esteem and wheelchair type with participation of young adult manual and power wheelchair users with diverse physical disabilities. Cross-sectional survey study. Large university campus. A convenience sample of college students (N = 39) with self-reported physical disabilities who are full-time wheelchair users (>40 h per week) and are two or more years post illness or injury. Not applicable. The Rosenberg Self-Esteem Scale was used to measure self-esteem, and the Craig Handicap Assessment and Reporting Technique was used to measure participation. Self-esteem correlated highly with cognitive independence (CI) (r = 0.58), mobility (r = 0.67) and social integration (SI) (r = 0.52). Use of a manual wheelchair was significantly related to higher levels of CI and mobility, while longer use of any wheelchair (power or manual) was significantly associated with higher levels of mobility and SI. In addition, higher self-esteem independently predicted a significant proportion of the variance in CI, mobility and SI, while type of wheelchair predicted a significant proportion of the variance in CI. Overall, self-esteem was found to be the strongest predictor of participation in a population of young adults with mobility limitations. Better understanding of the factors influencing participation may help to facilitate new interventions to minimize the disparities between persons with disabilities and their able-bodied peers. Implications for Rehabilitation: A total of 46.8% of wheelchair users report the desire for increased community participation but face significant barriers. The type of wheelchair has been identified as having a large impact on participation. This study found self-esteem to be the strongest predictor of participation, which is notable because self-esteem is a characteristic that is potentially modifiable with treatment.

  11. Effect of power-assisted hand-rim wheelchair propulsion on shoulder load in experienced wheelchair users: A pilot study with an instrumented wheelchair.

    Science.gov (United States)

    Kloosterman, Marieke G M; Buurke, Jaap H; de Vries, Wiebe; Van der Woude, Lucas H V; Rietman, Johan S

    2015-10-01

    This study aims to compare hand-rim and power-assisted hand-rim propulsion with respect to potential risk factors for shoulder overuse injuries: intensity and repetition of shoulder loading, and force generation in the extremes of shoulder motion. Eleven experienced hand-rim wheelchair users propelled an instrumented wheelchair on a treadmill while upper-extremity kinematic, kinetic and surface electromyographic data were collected during propulsion with and without power assist. During power-assisted propulsion, the peak resultant force exerted at the hand-rim decreased, and propulsion was performed with significantly less abduction and internal rotation at the shoulder. At shoulder level, the anterior directed force and the internal rotation and flexion moments decreased significantly. In addition, the posterior and minimal inferior directed forces and the external rotation moment increased significantly. The stroke angle decreased significantly, as did maximum shoulder flexion, extension, abduction and internal rotation. Stroke frequency increased significantly. Muscle activation in the anterior deltoid and pectoralis major also decreased significantly. In conclusion, compared to hand-rim propulsion, power-assisted propulsion seems effective in reducing potential risk factors for overuse injuries, with the greatest gains being decreased range of motion of the shoulder joint, lower peak propulsion force on the rim, and reduced muscle activity. Copyright © 2015 IPEM. Published by Elsevier Ltd. All rights reserved.

  12. Wien Automatic System Planning (WASP) Package. A computer code for power generating system expansion planning. Version WASP-IV. User's manual

    International Nuclear Information System (INIS)

    2001-01-01

    As a continuation of its efforts to provide methodologies and tools to Member States to carry out comparative assessment and analyse priority environmental issues related to the development of the electric power sector, the IAEA has completed a new version of the Wien Automatic System Planning (WASP) Package, WASP-IV, for carrying out power generation expansion planning taking into consideration fuel availability and environmental constraints. This manual constitutes a part of this work and aims to provide users with a guide to using the new version of the model, WASP-IV, effectively. WASP was originally developed in 1972 by the Tennessee Valley Authority and the Oak Ridge National Laboratory in the USA to meet the IAEA's needs to analyse the economic competitiveness of nuclear power in comparison to other generation expansion alternatives for supplying the future electricity requirements of a country or region. Previous versions of the model were used by Member States in many national and regional studies to analyse electric power system expansion planning and the role of nuclear energy in particular. Experience gained from its application allowed development of WASP into a very comprehensive planning tool for electric power system expansion analysis. New, improved versions were developed, which took into consideration the needs expressed by the users of the programme in order to address important emerging issues being faced by electric system planners. In 1979, WASP-III was released and soon after became an indispensable tool in many Member States for generation expansion planning. The WASP-III version was continually upgraded, and the development of version WASP-III Plus commenced in 1992. By 1995, WASP-III Plus was completed, which followed closely the methodology of WASP-III but incorporated new features. In order to meet the needs of electricity planners and following the recommendations of the Helsinki symposium, development of a new version of WASP was

  13. Getting To Exascale: Applying Novel Parallel Programming Models To Lab Applications For The Next Generation Of Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Dube, Evi [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Shereda, Charles [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Nau, Lee [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Harris, Lance [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2010-09-27

    As supercomputing moves toward exascale, node architectures will change significantly. CPU core counts on nodes will increase by an order of magnitude or more. Heterogeneous architectures will become more commonplace, with GPUs or FPGAs providing additional computational power. Novel programming models may make better use of on-node parallelism in these new architectures than do current models. In this paper we examine several of these novel models, UPC, CUDA, and OpenCL, to determine their suitability to LLNL scientific application codes. Our study consisted of several phases: we conducted interviews with code teams and selected two codes to port; we learned how to program in the new models and ported the codes; we debugged and tuned the ported applications; and we measured results and documented our findings. We conclude that UPC is a challenge for porting code, Berkeley UPC is not very robust, and UPC is not suitable as a general alternative to OpenMP for a number of reasons. CUDA is well supported and robust but is a proprietary NVIDIA standard, while OpenCL is an open standard. Both are well suited to a specific set of application problems that can be run on GPUs, but some problems are not suited to GPUs. Further study of the landscape of novel models is recommended.

  14. Visualization on supercomputing platform level II ASC milestone (3537-1B) results from Sandia.

    Energy Technology Data Exchange (ETDEWEB)

    Geveci, Berk (Kitware, Inc., Clifton Park, NY); Fabian, Nathan; Marion, Patrick (Kitware, Inc., Clifton Park, NY); Moreland, Kenneth D.

    2010-09-01

    This report provides documentation for the completion of the Sandia portion of the ASC Level II "Visualization on the platform" milestone. This ASC Level II milestone is a joint milestone between Sandia National Laboratories and Los Alamos National Laboratory. This milestone contains functionality required for performing visualization directly on a supercomputing platform, which is necessary for peta-scale visualization. Sandia's contribution concerns in-situ visualization, running a visualization in tandem with a solver. Visualization and analysis of petascale data is limited by several factors which must be addressed as ACES delivers the Cielo platform. Two primary difficulties are: (1) performance of interactive rendering, which is the most computationally intensive portion of the visualization process. For terascale platforms, commodity clusters with graphics processors (GPUs) have been used for interactive rendering. For petascale platforms, visualization and rendering may be able to run efficiently on the supercomputer platform itself. (2) I/O bandwidth, which limits how much information can be written to disk. If we simply analyze the sparse information that is saved to disk, we miss the opportunity to analyze the rich information produced every timestep by the simulation. For the first issue, we are pursuing in-situ analysis, in which simulations are coupled directly with analysis libraries at runtime. This milestone will evaluate the visualization and rendering performance of current and next generation supercomputers in contrast to GPU-based visualization clusters, and evaluate the performance of common analysis libraries coupled with the simulation that analyze and write data to disk during a running simulation. This milestone will explore, evaluate and advance the maturity level of these technologies and their applicability to problems of interest to the ASC program. Scientific simulation on parallel supercomputers is traditionally performed in four
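
    A schematic of the in-situ coupling idea described above: the solver hands its in-memory state to an analysis hook every few timesteps instead of writing full fields to disk. The class and solver below are toy stand-ins, not the milestone's actual libraries.

      import numpy as np

      class InSituAnalyzer:
          """Toy stand-in for an analysis library coupled to a running solver."""
          def __init__(self, every):
              self.every = every  # analyze every N steps instead of writing all I/O

          def process(self, step, field):
              if step % self.every == 0:
                  # Analysis happens in memory, sidestepping the I/O bottleneck
                  # of dumping full fields every timestep.
                  print(f"step {step}: mean={field.mean():.4f} max={field.max():.4f}")

      def run_solver(steps, analyzer):
          field = np.random.rand(128, 128)   # stand-in simulation state
          for step in range(steps):
              field = 0.25 * (np.roll(field, 1, 0) + np.roll(field, -1, 0) +
                              np.roll(field, 1, 1) + np.roll(field, -1, 1))
              analyzer.process(step, field)  # in-situ hook, no disk round trip

      run_solver(steps=100, analyzer=InSituAnalyzer(every=20))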

  15. Application of supercomputers to 3-D mantle convection

    International Nuclear Information System (INIS)

    Baumgardner, J.R.

    1986-01-01

    Current generation vector machines are providing for the first time the computing power needed to treat planetary mantle convection in a fully three-dimensional fashion. A numerical technique known as multigrid has been implemented in spherical geometry, using a hierarchy of meshes constructed from the regular icosahedron, to yield a highly efficient three-dimensional compressible Eulerian finite element hydrodynamics formulation. The paper describes the numerical method and presents convection solutions for the mantles of both the Earth and the Moon. In the case of the Earth, the convection pattern is characterized by upwelling in narrow circular plumes originating at the core-mantle boundary and by downwelling in sheets or slabs derived from the cold upper boundary layer. The preferred number of plumes appears to be on the order of six or seven. For the Moon, the numerical results indicate that development of a predominantly L = 2 pattern in later lunar history is a plausible explanation for the present large second-degree non-hydrostatic component in the lunar figure.
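
    The multigrid idea referenced above can be sketched in one dimension as follows; this toy V-cycle for -u'' = f is only an analogue of the spherical, icosahedral-mesh implementation described in the paper.

      import numpy as np

      def smooth(u, f, h, iters=3, w=2.0 / 3.0):
          """Weighted-Jacobi smoothing sweeps for -u'' = f, zero boundaries."""
          for _ in range(iters):
              u[1:-1] = (1 - w) * u[1:-1] + w * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
          return u

      def residual(u, f, h):
          r = np.zeros_like(u)
          r[1:-1] = f[1:-1] + (u[:-2] - 2 * u[1:-1] + u[2:]) / (h * h)
          return r

      def v_cycle(u, f, h):
          u = smooth(u, f, h)
          if len(u) <= 3:                      # coarsest grid: smoothing suffices
              return smooth(u, f, h, iters=10)
          r2 = residual(u, f, h)[::2].copy()   # restrict residual to coarse grid
          e2 = v_cycle(np.zeros_like(r2), r2, 2 * h)
          e = np.zeros_like(u)                 # prolong the coarse correction
          e[::2] = e2
          e[1::2] = 0.5 * (e2[:-1] + e2[1:])
          return smooth(u + e, f, h)

      n = 129                                  # 2^k + 1 points so the grids nest
      h = 1.0 / (n - 1)
      x = np.linspace(0.0, 1.0, n)
      f = np.pi ** 2 * np.sin(np.pi * x)       # exact solution: sin(pi x)
      u = np.zeros(n)
      for _ in range(10):
          u = v_cycle(u, f, h)
      print("max error:", float(np.abs(u - np.sin(np.pi * x)).max()))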

  16. Supercomputing with toys: harnessing the power of NVIDIA 8800GTX and playstation 3 for bioinformatics problem.

    Science.gov (United States)

    Wilson, Justin; Dai, Manhong; Jakupovic, Elvis; Watson, Stanley; Meng, Fan

    2007-01-01

    Modern video cards and game consoles typically have much better performance to price ratios than that of general purpose CPUs. The parallel processing capabilities of game hardware are well-suited for high throughput biomedical data analysis. Our initial results suggest that game hardware is a cost-effective platform for some computationally demanding bioinformatics problems.

  17. Computer code and users' guide for the preliminary analysis of dual-mode space nuclear fission solid core power and propulsion systems, NUROC3A. AMS report No. 1239b

    Energy Technology Data Exchange (ETDEWEB)

    Nichols, R.A.; Smith, W.W.

    1976-06-30

    The three-volume report describes a dual-mode nuclear space power and propulsion system concept that employs an advanced solid-core nuclear fission reactor coupled via heat pipes to one of several electric power conversion systems. The second volume describes the computer code and users' guide for the preliminary analysis of the system.

  18. User's manual for the BNW-II optimization code for dry/wet-cooled power plants

    Energy Technology Data Exchange (ETDEWEB)

    Braun, D.J.; Bamberger, J.A.; Braun, D.J.; Faletti, D.W.; Wiles, L.E.

    1978-05-01

    The User's Manual describes how to operate BNW-II, a computer code developed by the Pacific Northwest Laboratory (PNL) as a part of its activities under the Department of Energy (DOE) Dry Cooling Enhancement Program. The computer program offers a comprehensive method of evaluating the cost savings potential of dry/wet-cooled heat rejection systems. Going beyond simple "figure-of-merit" cooling tower optimization, this method includes such items as the cost of annual replacement capacity and the optimum split between plant scale-up and replacement capacity, as well as the purchase and operating costs of all major heat rejection components. Hence the BNW-II code is a useful tool for determining potential cost savings of new dry/wet surfaces, new piping, or other components as part of an optimized system for a dry/wet-cooled plant.
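
    The BNW-II cost model itself is not reproduced in this record; the sketch below merely illustrates the kind of trade-off such a code optimizes, with an entirely made-up cost structure for the dry/wet split.

      import numpy as np

      # Purely illustrative trade-off: choose the fraction of heat rejected by
      # the dry tower, balancing capital cost against water use and the cost of
      # replacement capacity lost on hot days. All coefficients are invented.
      dry_fraction = np.linspace(0.0, 1.0, 101)
      capital_cost = 40.0 + 120.0 * dry_fraction ** 1.5  # dry surface is costly
      water_cost = 25.0 * (1.0 - dry_fraction)           # wet cooling uses water
      replacement_capacity = 30.0 * dry_fraction ** 2    # dry towers derate in heat
      total = capital_cost + water_cost + replacement_capacity
      best = dry_fraction[np.argmin(total)]
      print(f"illustrative optimum dry fraction: {best:.2f}, cost {total.min():.1f}")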

  19. MaMiCo: Transient multi-instance molecular-continuum flow simulation on supercomputers

    Science.gov (United States)

    Neumann, Philipp; Bian, Xin

    2017-11-01

    We present extensions of the macro-micro-coupling tool MaMiCo, which was designed to couple continuum fluid dynamics solvers with discrete particle dynamics. To enable local extraction of smooth flow field quantities, especially on rather short time scales, sampling over an ensemble of molecular dynamics simulations is introduced. We provide details on these extensions, including the transient coupling algorithm, open boundary forcing, and multi-instance sampling. Furthermore, we validate the coupling in Couette flow using different particle simulation software packages and particle models, i.e. molecular dynamics and dissipative particle dynamics. Finally, we demonstrate the parallel scalability of the molecular-continuum simulations by using up to 65 536 compute cores of the supercomputer Shaheen II located at KAUST.
    Program Files doi: http://dx.doi.org/10.17632/w7rgdrhb85.1
    Licensing provisions: BSD 3-clause
    Programming language: C, C++
    External routines/libraries: For compiling: SCons, MPI (optional)
    Subprograms used: ESPResSo, LAMMPS, ls1 mardyn, waLBerla. For installation procedures of the MaMiCo interfaces, see the README files in the respective code directories located in coupling/interface/impl.
    Journal reference of previous version: P. Neumann, H. Flohr, R. Arora, P. Jarmatz, N. Tchipev, H.-J. Bungartz, MaMiCo: Software design for parallel molecular-continuum flow simulations, Computer Physics Communications 200: 324-335, 2016.
    Does the new version supersede the previous version?: Yes. The functionality of the previous version is completely retained in the new version.
    Nature of problem: Coupled molecular-continuum simulation for multi-resolution fluid dynamics: parts of the domain are resolved by molecular dynamics or another particle-based solver, whereas large parts are covered by a mesh-based CFD solver, e.g. a lattice Boltzmann automaton.
    Solution method: We couple existing MD and CFD solvers via MaMiCo (macro-micro coupling tool). Data exchange and
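
    A minimal sketch of the multi-instance sampling idea named above: averaging an ensemble of noisy MD instances yields the smooth field values the continuum solver needs on short time windows. Profile shape, noise level, and ensemble size are illustrative, not MaMiCo output.

      import numpy as np

      n_cells, n_instances = 16, 64
      true_profile = np.linspace(0.0, 1.0, n_cells)   # Couette-like velocity

      def md_instance(seed):
          """Stand-in for one MD instance: the true profile plus thermal noise."""
          rng = np.random.default_rng(seed)
          return true_profile + 0.3 * rng.standard_normal(n_cells)

      ensemble = np.array([md_instance(s) for s in range(n_instances)])
      averaged = ensemble.mean(axis=0)                # what the CFD solver sees

      err_one = np.abs(ensemble[0] - true_profile).max()
      err_avg = np.abs(averaged - true_profile).max()
      print(f"max sampling error: one instance {err_one:.3f}, ensemble {err_avg:.3f}")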

  20. A Study on User-Centered Approach to Design an Augmented Reality Maintenance Support System in Nuclear Power Plant

    International Nuclear Information System (INIS)

    Koo, Jwa Jin; Seong, Poong Hyun

    2007-01-01

    In nuclear power plants (NPPs), as the plants become more reliable and complex, their inspection, maintenance and repair become increasingly challenging problems, and this requires many well-experienced and well-trained maintenance crews. On the other hand, reduction of the life cycle costs of the plants is strongly required, and many crews are required to take charge of various kinds of devices, including ones unfamiliar to them. Their tasks must be done under the strong time pressure of a rigid maintenance schedule. This may cause human errors, even by well-experienced crews. Maintenance processes are both very important for guaranteeing quality for safety and often quite cumbersome. In the case of nuclear power plants, such processes usually demand access to documentation such as technical manuals, either in traditional paper form or electronic form. This is especially important where and when the procedures are performed infrequently. These considerations lead to considering Augmented Reality (AR) systems as an alternative to paper-based systems.

  1. Preparation of radiological effluent technical specifications for nuclear power plants. a guidance manual for users of standard technical specifications

    International Nuclear Information System (INIS)

    Boegli, J.S.; Bellamy, R.R.; Britz, W.L.; Waterfield, R.L.

    1978-10-01

    The purpose of this manual is to describe methods found acceptable to the staff of the U.S. Nuclear Regulatory Commission (NRC) for the calculation of certain key values required in the preparation of proposed radiological effluent Technical Specifications using the Standard Technical Specifications for light-water-cooled nuclear power plants. This manual also provides guidance to applicants for operating licenses for nuclear power plants in the preparation of proposed radiological effluent Technical Specifications or in preparing requests for changes to existing radiological effluent Technical Specifications for operating licenses. The manual additionally describes current staff positions on the methodology for estimating radiation exposure due to the release of radioactive materials in effluents and on the administrative control of radioactive waste treatment systems

  2. Power Users and Patchworking – an Analytical Approach to Critical Studies of Young People’s Learning with Digital Media

    DEFF Research Database (Denmark)

    Ryberg, Thomas; Dirckinck-Holmfeld, Lone

    2008-01-01

    This paper sets out to problematize generational categories such as ‘Power Users’ or ‘New Millennium Learners’ by discussing these in the light of recent research on youth and ICT. We then suggest analytic and conceptual pathways to engage in more critical and empirically founded studies of young...... people’s learning in technology and media-rich settings. Based on a study of a group of young ‘Power Users’ it is argued, that conceptualising and analysing learning as a process of patchworking can enhance our knowledge of young people’s learning in such settings. We argue that the analytical approach...... gives us ways of critically investigating young people’s learning in technology and media-rich settings, and study if these are processes of critical, reflexive enquiry where resources are creatively re-appropriated. With departure in an analytical example the paper presents the proposed metaphor...

  3. Early Experiences with Node-Level Power Capping on the Cray XC40 Platform

    Energy Technology Data Exchange (ETDEWEB)

    Pedretti, Kevin; Olivier, Stephen Lecler; Ferreira, Kurt Brian; Shipman, Galen; Shu, Wei

    2015-10-01

    Power consumption of extreme-scale supercomputers has become a key performance bottleneck. Yet current practices do not leverage power management opportunities, instead running at "maximum power". This is not sustainable. Future systems will need to manage power as a critical resource, directing where it has greatest benefit. Power capping is one mechanism for managing power budgets, however its behavior is not well understood. This paper presents an empirical evaluation of several key HPC workloads running under a power cap on a Cray XC40 system, and provides a comparison of this technique with p-state control, demonstrating the performance differences of each. These results show: 1. Maximum performance requires ensuring the cap is not reached; 2. Performance slowdown under a cap can be attributed to cascading delays which result in unsynchronized performance variability across nodes; and, 3. Due to lag in reaction time, considerable time is spent operating above the set cap. This work provides a timely and much needed comparison of HPC application performance under a power cap and attempts to enable users and system administrators to understand how to best optimize application performance on power-constrained HPC systems.
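
    For illustration, node-level capping of the general kind studied here can be exercised through the generic Linux powercap (intel-rapl) sysfs interface, sketched below. This is an assumption for demonstration only; the XC40 drives capping through Cray's own system management stack, not necessarily through this path.

      from pathlib import Path

      # Generic Linux powercap interface for the first RAPL package domain.
      RAPL = Path("/sys/class/powercap/intel-rapl:0")

      def read_cap_watts():
          """Read the current package power limit (stored in microwatts)."""
          uw = int((RAPL / "constraint_0_power_limit_uw").read_text())
          return uw / 1e6

      def set_cap_watts(watts):
          """Apply a new package power cap; requires root privileges."""
          (RAPL / "constraint_0_power_limit_uw").write_text(str(int(watts * 1e6)))

      if RAPL.exists():
          print("package power cap:", read_cap_watts(), "W")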

  4. Wien Automatic System Planning (WASP) Package. A computer code for power generating system expansion planning. Version WASP-III Plus. User's manual. Volume 1: Chapters 1-11

    International Nuclear Information System (INIS)

    1995-01-01

    determination of the optimal expansion of combined thermal and hydro power systems, taking into account the optimal operation of the hydro reservoirs throughout the year. Microcomputer (PC) versions of WASP-III and MAED have also been developed as stand-alone programs and as part of an integrated package for energy and electricity planning called ENPEP (Energy and Power Evaluation Program). A PC version of the VALORAGUA model was also completed in 1992. With all these developments, the catalogue of planning methodologies offered by the IAEA to its Member States has been upgraded to facilitate the work of electricity planners; WASP in particular is currently accepted as a powerful tool for electric system expansion planning. Nevertheless, experienced users of the program have indicated the need to introduce more enhancements within the WASP model in order to cope with the problems constantly faced by planners owing to the increasing complexity of this type of analysis. With several Member States, the IAEA has completed a new version of the WASP program, which has been called WASP-III Plus since it follows quite closely the methodology of the WASP-III model. The major enhancements in WASP-III Plus with respect to the WASP-III version are: increase in the number of thermal fuel types (from 5 to 10); verification of which configurations generated by CONGEN have already been simulated in previous iterations with MERSIM; direct calculation of the combined Loading Order of FIXSYS and VARSYS plants; simulation of system operation includes consideration of physical constraints imposed on some fuel types (i.e., fuel availability for electricity generation); extended output of the resimulation of the optimal solution; generation of a file that can be used for graphical representation of the results of the resimulation of the optimal solution and cash flows of the investment costs; calculation of cash flows allows inclusion of the capital costs of plants firmly committed or in construction

  5. Simulation and experimental studies of operators' decision styles and crew composition while using an ecological and traditional user interface for the control room of a nuclear power plant

    International Nuclear Information System (INIS)

    Meshkati, N.; Buller, B.J.; Azadeh, M.A.

    1995-01-01

    The goal of this research is threefold: (1) use of the Skill-, Rule-, and Knowledge-based levels of cognitive control -- the SRK framework -- to develop an integrated information processing conceptual framework (for integration of workstation, job, and team design); (2) to evaluate the user interface component of this framework -- the Ecological display; and (3) to analyze the effect of operators' individual information processing behavior and decision styles on handling plant disturbances, plus their performance on, and preference for, Traditional and Ecological user interfaces. A series of studies were conducted. In Part I, a computer simulation model and a mathematical model were developed. In Part II, an experiment was designed and conducted at the EBR-II plant of the Argonne National Laboratory-West in Idaho Falls, Idaho. It is concluded that: the integrated SRK-based information processing model for control room operations is superior to the conventional rule-based model; operators' individual decision styles and the combination of their styles play a significant role in effective handling of nuclear power plant disturbances; use of the Ecological interface results in significantly more accurate event diagnosis and recall of various plant parameters, faster response to plant transients, and higher ratings of subject preference; and operators' decision styles affect both their performance and their preference for the Ecological interface.

  7. Interference from the Deep Space Network's 70-m High Power Transmitter in Goldstone, CA to 3G Mobile Users Operating in the Surrounding Area

    Science.gov (United States)

    Ho, Christian

    2004-01-01

    The International Telecommunications Union (ITU) has allocated 2110-2200 MHz for third generation (3G) mobile services. Part of the spectrum (2110-2120 MHz) is allocated to the space research service and has been used by the DSN for years for sending command uplinks to deep space missions. Due to the extremely high power transmitted, potential interference to 3G users in areas surrounding DSN Goldstone exists. To address this issue, a preliminary analytical study has been performed and computer models have been developed. The goal is to provide a theoretical foundation and tools to estimate the strength of interference as a function of distance from the transmitter for various interference mechanisms (or propagation modes), and then determine the size of the area in which 3G users are susceptible to interference from the 400-kW transmitter in Goldstone. The focus is non-line-of-sight interference, taking into account terrain shielding, anomalous propagation mechanisms, and the technical and operational characteristics of the DSN and the 3G services.
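
    As a back-of-the-envelope companion, the standard free-space path loss formula bounds the line-of-sight case; the study itself concerns non-line-of-sight mechanisms, and antenna gains are ignored in this illustrative sketch.

      import math

      def fspl_db(distance_km, freq_mhz):
          """Free-space path loss in dB: 20*log10(d_km) + 20*log10(f_MHz) + 32.44."""
          return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

      # Illustrative line-of-sight bound: a 400 kW (about 86 dBm) uplink at 2110 MHz.
      tx_dbm = 10 * math.log10(400e3 * 1000)   # 400 kW expressed in dBm
      for d_km in (1, 10, 100):
          rx_dbm = tx_dbm - fspl_db(d_km, 2110.0)
          print(f"{d_km:>3} km: received ~{rx_dbm:.0f} dBm (ignoring antenna gains)")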

  8. Databases on safety issues for WWER and RBMK reactors. Users' manual. A publication of the extrabudgetary programme on the safety of WWER and RBMK nuclear power plants

    International Nuclear Information System (INIS)

    1996-04-01

    At the beginning of the IAEA Extrabudgetary Programme on the safety of WWER reactors, a great number of findings and recommendations (safety items) were collected as a result of design review and safety review missions to WWER-440/230 type reactors. On the basis of these findings, a technical database containing more than 1300 records was established to support the consolidation of the information obtained and to help in the identification of safety issues. After the scope of the WWER extrabudgetary programme was extended, similar data sets were prepared for the WWER-440/213, WWER-1000 and RBMK nuclear power plants. This publication describes the structure of the databases on safety issues of WWER and RBMK NPPs, the information sources used in the databases, and the interrogation capabilities available for users to obtain the necessary information. 14 refs, 9 figs, 5 tabs

  9. Utilization of power customers in the end user market. Analysis of the competitive relationship between the Norwegian power contracts; Utnytting av kraftkundar i sluttbrukarmarknaden. Analyser av konkurransetilhoevet mellom norske kraftavtaler

    Energy Technology Data Exchange (ETDEWEB)

    Sunde, Bjarne Bjoerkavaag

    2011-07-01

    This study deals with the competitive relationship between Norwegian power contracts in the end-user market. As expected, we find clear evidence of exploitation of locked-in customers through expensive standard variable-rate contracts. We also find evidence that the extent of this exploitation has increased since power suppliers began to use price discrimination between customers more actively. The suppliers' exploitation of locked-in customers has persisted for a long time and is often seen as the biggest problem with the market. Going forward, however, it is not a given that the exploitation of locked-in customers through expensive standard variable-rate contracts will remain the market's biggest problem. Today, 60% of households are on spot-price contracts, and such a share would suggest less exploitation of customers. On the other hand, electricity suppliers use spot-price contracts without a notification requirement to exploit customers' uncertainty about competitive mark-ups. Contracts without notification are not registered in the Competition Authority's price overview and are therefore difficult for customers to compare. Today, over half of the spot-price contracts are without notification, and power suppliers achieve much greater profits on these contracts than on spot-price contracts with notification. (eb)

  10. User-centered design

    International Nuclear Information System (INIS)

    Baik, Joo Hyun; Kim, Hyeong Heon

    2008-01-01

    The simplification philosophy, for example, that both EPRI-URD and EUR emphasize is treated mostly as a matter of cost reduction for nuclear power plants, not as simplification of the structure of the user's tasks, which is one of the principles of user-centered design. User-centered design is a philosophy based on the needs and interests of the user, with an emphasis on making products usable and understandable. However, the nuclear power plants offered these days by the predominant reactor vendors are hardly user-centered, but still designer-centered or technology-centered from the viewpoint of fulfilling user requirements. The main goal of user-centered design is that user requirements are elicited correctly, reflected properly in the system requirements, and verified thoroughly by tests. Starting from the user requirements through to the final test, each requirement should be traceable. That is why requirement traceability is a key to user-centered design and the main theme of a requirements management program, which is suggested to be added to EPRI-URD and EUR in the section on Design Process. (author)

  11. Integration of PanDA workload management system with Titan supercomputer at OLCF

    Science.gov (United States)

    De, K.; Klimentov, A.; Oleynik, D.; Panitkin, S.; Petrosyan, A.; Schovancova, J.; Vaniachine, A.; Wenaus, T.

    2015-12-01

    The PanDA (Production and Distributed Analysis) workload management system (WMS) was developed to meet the scale and complexity of LHC distributed computing for the ATLAS experiment. While PanDA currently distributes jobs to more than 100,000 cores at well over 100 Grid sites, the future LHC data taking runs will require more resources than Grid computing can possibly provide. To alleviate these challenges, ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with Titan supercomputer at Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA pilot framework for job submission to Titan's batch queues and local data management, with light-weight MPI wrappers to run single threaded workloads in parallel on Titan's multicore worker nodes. It also gives PanDA new capability to collect, in real time, information about unused worker nodes on Titan, which allows precise definition of the size and duration of jobs submitted to Titan according to available free resources. This capability significantly reduces PanDA job wait time while improving Titan's utilization efficiency. This implementation was tested with a variety of Monte-Carlo workloads on Titan and is being tested on several other supercomputing platforms. Notice: This manuscript has been authored, by employees of Brookhaven Science Associates, LLC under Contract No. DE-AC02-98CH10886 with the U.S. Department of Energy. The publisher by accepting the manuscript for publication acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes.
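
    The backfill-style capability described above, sizing pilot jobs to fit currently unused worker nodes, can be sketched as follows; query_free_nodes and submit_pilot are hypothetical stand-ins for illustration, not PanDA or OLCF interfaces.

      import random

      def query_free_nodes():
          """Pretend scheduler query: (idle node count, minutes until reserved)."""
          return random.randint(0, 1000), random.randint(10, 120)

      def submit_pilot(nodes, walltime_min):
          print(f"submitting pilot: {nodes} nodes, {walltime_min} min walltime")

      free_nodes, free_minutes = query_free_nodes()
      if free_nodes >= 16:                      # only bother for a usefully big hole
          # Fit inside the backfill window so the job starts without queue wait.
          submit_pilot(nodes=min(free_nodes, 300),
                       walltime_min=max(10, free_minutes - 5))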

  12. Three-dimensional kinetic simulations of whistler turbulence in solar wind on parallel supercomputers

    Science.gov (United States)

    Chang, Ouliang

    The objective of this dissertation is to study the physics of whistler turbulence evolution and its role in energy transport and dissipation in solar wind plasmas through computational and theoretical investigations. This dissertation presents the first fully three-dimensional (3D) particle-in-cell (PIC) simulations of whistler turbulence forward cascade in a homogeneous, collisionless plasma with a uniform background magnetic field B_o, and the first 3D PIC simulation of whistler turbulence with both forward and inverse cascades. Such computationally demanding research is made possible through the use of massively parallel, high performance electromagnetic PIC simulations on state-of-the-art supercomputers. Simulations are carried out to study characteristic properties of whistler turbulence under variable solar wind fluctuation amplitude (epsilon_e) and electron beta (beta_e), relative contributions to energy dissipation and electron heating in whistler turbulence from the quasilinear scenario and the intermittency scenario, and whistler turbulence preferential cascading direction and wavevector anisotropy. The 3D simulations of whistler turbulence exhibit a forward cascade of fluctuations into a broadband, anisotropic, turbulent spectrum at shorter wavelengths, with wavevectors preferentially quasi-perpendicular to B_o. The overall electron heating yields T_parallel > T_perp for all epsilon_e and beta_e values, indicating the primary linear wave-particle interaction is Landau damping. But linear wave-particle interactions play a minor role in shaping the wavevector spectrum, whereas nonlinear wave-wave interactions are overall stronger and faster processes, and ultimately determine the wavevector anisotropy. Simulated magnetic energy spectra as a function of wavenumber show a spectral break to steeper slopes, which scales as k_perp * lambda_e ≃ 1 independent of beta_e values, where lambda_e is the electron inertial length, qualitatively similar to solar wind observations. Specific
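
    The electron inertial length lambda_e that sets the spectral break scale above is c/omega_pe; a quick evaluation for a typical solar wind density (an illustrative value, not the dissertation's simulation parameters):

      import math

      # Electron inertial length lambda_e = c / omega_pe, the scale at which
      # the simulated magnetic spectra steepen (k_perp * lambda_e ~ 1). SI units.
      c = 2.998e8          # speed of light (m/s)
      e = 1.602e-19        # elementary charge (C)
      m_e = 9.109e-31      # electron mass (kg)
      eps0 = 8.854e-12     # vacuum permittivity (F/m)

      def electron_inertial_length(n_e):
          """n_e: electron number density in m^-3."""
          omega_pe = math.sqrt(n_e * e * e / (eps0 * m_e))  # plasma frequency (rad/s)
          return c / omega_pe

      # Typical solar wind density near 1 AU, ~5 electrons per cm^3 (illustrative).
      n_e = 5.0e6
      print(f"lambda_e ~ {electron_inertial_length(n_e) / 1e3:.1f} km")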

  13. Re-inventing electromagnetics - Supercomputing solution of Maxwell's equations via direct time integration on space grids

    International Nuclear Information System (INIS)

    Taflove, A.

    1992-01-01

    This paper summarizes the present state and future directions of applying finite-difference and finite-volume time-domain techniques for Maxwell's equations on supercomputers to model complex electromagnetic wave interactions with structures. Applications so far have been dominated by radar cross section technology, but are by no means limited to this area. In fact, the gains we have made place us on the threshold of being able to make tremendous contributions to non-defense electronics and optical technology. Some of the most interesting research in these commercial areas is summarized. 47 refs
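
    The core of the finite-difference time-domain technique summarized above is the leapfrog update of staggered electric and magnetic fields; a minimal one-dimensional sketch in normalized units (grid size, source, and step count are illustrative):

      import numpy as np

      # Minimal 1-D Yee FDTD update for Maxwell's equations in vacuum
      # (normalized units with the "magic" time step c*dt = dx).
      n = 400
      ez = np.zeros(n)        # electric field at integer grid points
      hy = np.zeros(n - 1)    # magnetic field at half-integer points

      for t in range(600):
          hy += ez[1:] - ez[:-1]        # Faraday's law (staggered space/time)
          ez[1:-1] += hy[1:] - hy[:-1]  # Ampere's law
          ez[n // 4] += np.exp(-((t - 60) / 20.0) ** 2)  # soft Gaussian source

      print("field energy proxy:", float(np.sum(ez**2) + np.sum(hy**2)))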

  14. Watson will see you now: a supercomputer to help clinicians make informed treatment decisions.

    Science.gov (United States)

    Doyle-Lindrud, Susan

    2015-02-01

    IBM has collaborated with several cancer care providers to develop and train the IBM supercomputer Watson to help clinicians make informed treatment decisions. When a patient is seen in clinic, the oncologist can input all of the clinical information into the computer system. Watson will then review all of the data and recommend treatment options based on the latest evidence and guidelines. Once the oncologist makes the treatment decision, this information can be sent directly to the insurance company for approval. Watson has the ability to standardize care and accelerate the approval process, a benefit to the healthcare provider and the patient.

  15. Power

    DEFF Research Database (Denmark)

    Elmholdt, Claus Westergård; Fogsgaard, Morten

    2016-01-01

    and creativity suggests that when managers give people the opportunity to gain power and explicate that there is reason to be more creative, people will show a boost in creative behaviour. Moreover, this process works best in unstable power hierarchies, which implies that power is treated as a negotiable....... It is thus a central point that power is not necessarily something that breaks down and represses. On the contrary, an explicit focus on the dynamics of power in relation to creativity can be productive for the organisation. Our main focus is to elaborate the implications of this for practice and theory...

  16. CONHOR. Code system for determination of power distribution and burnup for the HOR reactor. Version 1.0.. User's manual

    International Nuclear Information System (INIS)

    Serov, I.V.; Hoogenboom, J.E.

    1993-07-01

    The main calculational tool is the CITATION code. CITATION is used for both static and burnup calculations. The pointwise flux density and power distributions obtained from these calculations are used to obtain the values of the desired quantities at the beginning of a burnup cycle. To obtain the most trustworthy values of the desired quantities, CONHOR employs experimental information together with the CITATION-calculated flux distributions. Axially averaged foil activation rates are obtained based on both CITATION pointwise flux density distributions and measured foil activity counts. These two sets of activation rates are called the distributions of auxiliary quantities and are compared with each other in order to derive corrections to the U-235 number densities in fuel-containing elements. The methodical corrections to the calculational auxiliary quantities are obtained on this basis as well. They are used to obtain the methodical corrections to the desired quantities. The corrected desired quantities are the recommended ones. The correction procedure requires knowledge of the sensitivity coefficients of the average foil activation rates with respect to the U-235 number densities (throughout this manual, and especially in the input/output description sections, U-235 is also denoted as a BUrning-COrrected material, or 'BuCo' material). These sensitivity coefficients are calculated by the CONHOR SENS module. CITATION is employed to perform the calculations with perturbed values of the U-235 number densities. Burnup calculations can be performed based on either corrected or uncorrected U-235 number densities. Throughout this manual, XXXX denotes a 4-symbol identification of the burnup cycle to be studied; XX-1 and XX+1 denote the previous and the following cycles, respectively. (orig./HP)
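
    The correction step described above amounts to solving a small linear system built from the sensitivity coefficients; a generic first-order sketch with invented numbers, not CONHOR data:

      import numpy as np

      # Adjust U-235 number densities so computed activation rates match the
      # measured ones, using sensitivities S[i, j] = d(rate_i)/d(density_j).
      S = np.array([[2.0, 0.3],
                    [0.4, 1.8]])            # sensitivities (rate per unit density)
      computed = np.array([10.0, 12.0])     # CITATION-based activation rates
      measured = np.array([10.6, 11.5])     # foil-count-based activation rates

      # Least-squares solve of S @ delta_n = (measured - computed)
      delta_n, *_ = np.linalg.lstsq(S, measured - computed, rcond=None)
      print("U-235 density corrections:", np.round(delta_n, 4))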

  17. Frequently updated noise threat maps created with use of supercomputing grid

    Directory of Open Access Journals (Sweden)

    Szczodrak Maciej

    2014-09-01

    Innovative supercomputing grid services devoted to noise threat evaluation are presented. The services described in this paper concern two issues: the first is related to noise mapping, while the second focuses on assessment of the noise dose and its influence on the human hearing system. The discussed services were developed within the PL-Grid Plus Infrastructure, which brings together Polish academic supercomputer centers. Selected experimental results achieved by usage of the proposed services are presented. The assessment of environmental noise threats includes creation of noise maps using either offline or online data acquired through a grid of monitoring stations. A concept of estimation of the source model parameters based on the measured sound level, for the purpose of creating frequently updated noise maps, is presented. Connecting the noise mapping grid service with a distributed sensor network enables noise maps to be updated automatically for a specified time period. Moreover, a unique attribute of the developed software is the estimation of the auditory effects evoked by exposure to noise. The estimation method uses a modified psychoacoustic model of hearing and is based on the calculated noise level values and on the given exposure period. Potential use scenarios of the grid services for research or educational purposes are introduced. Presentation of the results of predicted hearing threshold shift caused by exposure to excessive noise can raise public awareness of noise threats.
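
    The noise-dose side of such a service rests on energy-equivalent averaging of sound levels; a minimal sketch of the standard Leq computation over invented exposure segments:

      import math

      # Equivalent continuous sound level over an exposure:
      #   Leq = 10 * log10( (1/T) * sum(t_i * 10^(L_i / 10)) )
      # Segment durations and levels are purely illustrative.
      segments = [(480, 72.0), (45, 95.0), (15, 88.0)]   # (minutes, dB SPL)
      total_min = sum(t for t, _ in segments)
      leq = 10.0 * math.log10(
          sum(t * 10.0 ** (level / 10.0) for t, level in segments) / total_min
      )
      print(f"Leq over {total_min} min: {leq:.1f} dB")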

  18. Computational fluid dynamics research at the United Technologies Research Center requiring supercomputers

    Science.gov (United States)

    Landgrebe, Anton J.

    1987-01-01

    An overview of research activities at the United Technologies Research Center (UTRC) in the area of Computational Fluid Dynamics (CFD) is presented. The requirement and use of various levels of computers, including supercomputers, for the CFD activities is described. Examples of CFD directed toward applications to helicopters, turbomachinery, heat exchangers, and the National Aerospace Plane are included. Helicopter rotor codes for the prediction of rotor and fuselage flow fields and airloads were developed with emphasis on rotor wake modeling. Airflow and airload predictions and comparisons with experimental data are presented. Examples are presented of recent parabolized Navier-Stokes and full Navier-Stokes solutions for hypersonic shock-wave/boundary layer interaction, and hydrogen/air supersonic combustion. In addition, other examples of CFD efforts in turbomachinery Navier-Stokes methodology and separated flow modeling are presented. A brief discussion of the 3-tier scientific computing environment is also presented, in which the researcher has access to workstations, mid-size computers, and supercomputers.

  1. Federal Market Information Technology in the Post Flash Crash Era: Roles for Supercomputing

    Energy Technology Data Exchange (ETDEWEB)

    Bethel, E. Wes; Leinweber, David; Ruebel, Oliver; Wu, Kesheng

    2011-09-16

    This paper describes collaborative work between active traders, regulators, economists, and supercomputing researchers to replicate and extend investigations of the Flash Crash and other market anomalies in a National Laboratory HPC environment. Our work suggests that supercomputing tools and methods will be valuable to market regulators in achieving the goal of market safety, stability, and security. Research results using high frequency data and analytics are described, and directions for future development are discussed. Currently the key mechanism for preventing catastrophic market action is the “circuit breaker.” We believe a more graduated approach, similar to the “yellow light” used in motorsports to slow down traffic, might be a better way to achieve the same goal. To enable this objective, we study a number of indicators that could foresee hazards in market conditions and explore options to confirm such predictions. Our tests confirm that Volume Synchronized Probability of Informed Trading (VPIN) and a version of the volume Herfindahl-Hirschman Index (HHI) for measuring market fragmentation can indeed give strong signals ahead of the Flash Crash event on May 6, 2010. This is a preliminary step toward a full-fledged early-warning system for unusual market conditions.
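
    Of the two indicators, the volume HHI is the simpler to state: the sum of squared venue market shares. A minimal sketch with invented venue volumes:

      # Volume-based Herfindahl-Hirschman Index (HHI), one of the fragmentation
      # indicators named above. Example venue volumes are invented.
      venue_volume = {"A": 4_200_000, "B": 3_100_000, "C": 1_500_000, "D": 700_000}
      total = sum(venue_volume.values())
      hhi = sum((v / total) ** 2 for v in venue_volume.values())
      print(f"HHI = {hhi:.3f}  (1/HHI ~ {1 / hhi:.1f} effective venues)")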

  2. Unique Methodologies for Nano/Micro Manufacturing Job Training Via Desktop Supercomputer Modeling and Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Kimball, Clyde [Northern Illinois Univ., DeKalb, IL (United States); Karonis, Nicholas [Northern Illinois Univ., DeKalb, IL (United States); Lurio, Laurence [Northern Illinois Univ., DeKalb, IL (United States); Piot, Philippe [Northern Illinois Univ., DeKalb, IL (United States); Xiao, Zhili [Northern Illinois Univ., DeKalb, IL (United States); Glatz, Andreas [Northern Illinois Univ., DeKalb, IL (United States); Pohlman, Nicholas [Northern Illinois Univ., DeKalb, IL (United States); Hou, Minmei [Northern Illinois Univ., DeKalb, IL (United States); Demir, Veysel [Northern Illinois Univ., DeKalb, IL (United States); Song, Jie [Northern Illinois Univ., DeKalb, IL (United States); Duffin, Kirk [Northern Illinois Univ., DeKalb, IL (United States); Johns, Mitrick [Northern Illinois Univ., DeKalb, IL (United States); Sims, Thomas [Northern Illinois Univ., DeKalb, IL (United States); Yin, Yanbin [Northern Illinois Univ., DeKalb, IL (United States)

    2012-11-21

    This project establishes an initiative in high speed (Teraflop)/large-memory desktop supercomputing for modeling and simulation of dynamic processes important for energy and industrial applications. It provides a training ground for employment of current students in an emerging field with skills necessary to access the large supercomputing systems now present at DOE laboratories. It also provides a foundation for NIU faculty to quantum leap beyond their current small cluster facilities. The funding extends faculty and student capability to a new level of analytic skills with concomitant publication avenues. The components of the Hewlett Packard computer obtained by the DOE funds create a hybrid combination of a Graphics Processing System (12 GPU/Teraflops) and a Beowulf CPU system (144 CPU), the first expandable via the NIU GAEA system to ~60 Teraflops integrated with a 720 CPU Beowulf system. The software is based on access to the NVIDIA/CUDA library and the ability through MATLAB multiple licenses to create additional local programs. A number of existing programs are being transferred to the CPU Beowulf Cluster. Since the expertise necessary to create the parallel processing applications has recently been obtained at NIU, this effort for software development is in an early stage. The educational program has been initiated via formal tutorials and classroom curricula designed for the coming year. Specifically, the cost focus was on hardware acquisitions and appointment of graduate students for a wide range of applications in engineering, physics and computer science.

  3. Justine user's manual

    Energy Technology Data Exchange (ETDEWEB)

    Lee, S.R.

    1995-10-01

    Justine is the graphical user interface to the Los Alamos Radiation Modeling Interactive Environment (LARAMIE). It provides LARAMIE customers with a powerful, robust, easy-to-use, WYSIWYG interface that facilitates geometry construction and problem specification. It is assumed that the reader is familiar with LARAMIE and the transport codes available, i.e., MCNP(TM) and DANTSYS(TM). No attempt is made in this manual to describe these codes in detail. Information about LARAMIE, DANTSYS, and MCNP is available elsewhere. It is also assumed that the reader is familiar with the Unix operating system and with Motif widgets and their look and feel. However, a brief description of Motif and how one interacts with it can be found in Appendix A.

  5. Wavelet transform-vector quantization compression of supercomputer ocean model simulation output

    Energy Technology Data Exchange (ETDEWEB)

    Bradley, J N; Brislawn, C M

    1992-11-12

    We describe a new procedure for efficient compression of digital information for storage and transmission purposes. The algorithm involves a discrete wavelet transform subband decomposition of the data set, followed by vector quantization of the wavelet transform coefficients using application-specific vector quantizers. The new vector quantizer design procedure optimizes the assignment of both memory resources and vector dimensions to the transform subbands by minimizing an exponential rate-distortion functional subject to constraints on both overall bit rate and encoder complexity. The wavelet-vector quantization method, which originates in digital image compression, is applicable to the compression of other multidimensional data sets possessing some degree of smoothness. In this paper we discuss the use of this technique for compressing the output of supercomputer simulations of global climate models. The data presented here come from Semtner-Chervin global ocean models run at the National Center for Atmospheric Research and at the Los Alamos Advanced Computing Laboratory.
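
    As a rough illustration of the two stages named above, the following is a minimal sketch, not the paper's optimized design: a discrete wavelet subband decomposition of a 2-D field, followed by per-subband vector quantization with a plain k-means codebook. The wavelet family, vector dimension, and codebook size are assumptions; the rate-distortion-optimal allocation across subbands is not reproduced.

```python
# Minimal sketch, NOT the paper's optimized quantizer design: wavelet
# subband decomposition (PyWavelets) followed by per-subband vector
# quantization with a plain k-means codebook (SciPy).
import numpy as np
import pywt
from scipy.cluster.vq import kmeans, vq

def compress_subband(coeffs, vec_dim=4, codebook_size=32):
    """Vector-quantize one subband: group coefficients into vectors,
    train a codebook, and return (codebook, indices, original shape)."""
    flat = coeffs.ravel().astype(float)
    pad = (-flat.size) % vec_dim
    vectors = np.pad(flat, (0, pad)).reshape(-1, vec_dim)
    codebook, _ = kmeans(vectors, codebook_size)
    indices, _ = vq(vectors, codebook)
    return codebook, indices, coeffs.shape

data = np.random.rand(128, 128)               # stand-in for model output
coeffs = pywt.wavedec2(data, 'db2', level=3)  # subband decomposition
encoded = [compress_subband(d)                # quantize each detail subband
           for level in coeffs[1:] for d in level]
```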

  6. Palacios and Kitten : high performance operating systems for scalable virtualized and native supercomputing.

    Energy Technology Data Exchange (ETDEWEB)

    Widener, Patrick (University of New Mexico); Jaconette, Steven (Northwestern University); Bridges, Patrick G. (University of New Mexico); Xia, Lei (Northwestern University); Dinda, Peter (Northwestern University); Cui, Zheng.; Lange, John (Northwestern University); Hudson, Trammell B.; Levenhagen, Michael J.; Pedretti, Kevin Thomas Tauke; Brightwell, Ronald Brian

    2009-09-01

    Palacios and Kitten are new open source tools that enable applications, whether ported or not, to achieve scalable high performance on large machines. They provide a thin layer over the hardware to support both full-featured virtualized environments and native code bases. Kitten is an OS under development at Sandia that implements a lightweight kernel architecture to provide predictable behavior and increased flexibility on large machines, while also providing Linux binary compatibility. Palacios is a VMM that is under development at Northwestern University and the University of New Mexico. Palacios, which can be embedded into Kitten and other OSes, supports existing, unmodified applications and operating systems by using virtualization that leverages hardware technologies. We describe the design and implementation of both Kitten and Palacios. Our benchmarks show that they provide near native, scalable performance. Palacios and Kitten provide an incremental path to using supercomputer resources that is not performance-compromised.

  7. Large scale simulations of lattice QCD thermodynamics on Columbia Parallel Supercomputers

    International Nuclear Information System (INIS)

    Ohta, Shigemi

    1989-01-01

    The Columbia Parallel Supercomputer project aims at the construction of a parallel-processing, multi-gigaflop computer optimized for numerical simulations of lattice QCD. The project has three stages: a 16-node, 1/4 GF machine completed in April 1985; a 64-node, 1 GF machine completed in August 1987; and a 256-node, 16 GF machine now under construction. The machines all share a common architecture: a two-dimensional torus formed from a rectangular array of N1 x N2 independent and identical processors. A processor is capable of operating in a multi-instruction multi-data mode, except for periods of synchronous interprocessor communication with its four nearest neighbors. Here the thermodynamics simulations on the two working machines are reported. (orig./HSI)

  8. Use of QUADRICS supercomputer as embedded simulator in emergency management systems

    International Nuclear Information System (INIS)

    Bove, R.; Di Costanzo, G.; Ziparo, A.

    1996-07-01

    The experience gained in implementing MRBT, an atmospheric dispersion model for short-duration releases, on a QUADRICS-Q1 supercomputer is reported. The MRBT model is first described: it is an analytical model for studying the spreading of light gases released into the atmosphere by accidents. The solution of the diffusion equation is Gaussian-like and yields the concentration of the released pollutant as a function of space and time. The QUADRICS architecture is then introduced and the implementation of the model is described. Finally, the integration of the QUADRICS-based model as an embedded simulator in an emergency management system is considered
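
    For context, the sketch below is a minimal Gaussian puff solution of the kind the abstract describes; all parameter values are assumptions, not MRBT's.

```python
# Minimal sketch of a Gaussian puff solution of the diffusion equation for
# an instantaneous release, the model class the abstract describes. Wind
# speed and dispersion parameters are illustrative assumptions (held fixed
# here, whereas real puff sigmas grow with travel time), not MRBT values.
import numpy as np

def puff_concentration(x, y, z, t, Q=1.0, u=2.0, sx=10.0, sy=10.0, sz=5.0):
    """Concentration from a puff of mass Q released at the origin at t=0,
    advected downwind along x at speed u (lengths in m, time in s)."""
    norm = Q / ((2.0 * np.pi) ** 1.5 * sx * sy * sz)
    return norm * np.exp(-(x - u * t) ** 2 / (2 * sx ** 2)
                         - y ** 2 / (2 * sy ** 2)
                         - z ** 2 / (2 * sz ** 2))

# Ground-level concentration field 60 s after the release:
xs, ys = np.meshgrid(np.linspace(0, 300, 61), np.linspace(-60, 60, 25))
c = puff_concentration(xs, ys, 0.0, 60.0)
```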

  9. Reactive flow simulations in complex geometries with high-performance supercomputing

    International Nuclear Information System (INIS)

    Rehm, W.; Gerndt, M.; Jahn, W.; Vogelsang, R.; Binninger, B.; Herrmann, M.; Olivier, H.; Weber, M.

    2000-01-01

    In this paper, we report on a modern field code cluster consisting of state-of-the-art reactive Navier-Stokes and reactive Euler solvers that has been developed on vector and parallel supercomputers at the research center Juelich. This field code cluster is used for hydrogen safety analyses of technical systems, for example, in the field of nuclear reactor safety and conventional hydrogen demonstration plants with fuel cells. Emphasis is put on the assessment of combustion loads, which could result from slow, fast or rapid flames, including the transition from deflagration to detonation. As proof tests, the individual tools have been validated on specific tasks by comparing experimental and numerical results, which are in reasonable agreement. (author)

  10. Affordable and accurate large-scale hybrid-functional calculations on GPU-accelerated supercomputers

    Science.gov (United States)

    Ratcliff, Laura E.; Degomme, A.; Flores-Livas, José A.; Goedecker, Stefan; Genovese, Luigi

    2018-03-01

    Performing high accuracy hybrid functional calculations for condensed matter systems containing a large number of atoms is at present computationally very demanding, or even out of reach if high quality basis sets are used. We present a highly optimized multiple graphics processing unit implementation of the exact exchange operator which allows one to perform fast hybrid functional density-functional theory (DFT) calculations with systematic basis sets, without additional approximations, for up to a thousand atoms. With this method, hybrid DFT calculations of high quality become accessible on state-of-the-art supercomputers within a time-to-solution of the same order of magnitude as that of traditional semilocal GGA functionals. The method is implemented in a portable open-source library.

  11. Research to application: Supercomputing trends for the 90's - Opportunities for interdisciplinary computations

    International Nuclear Information System (INIS)

    Shankar, V.

    1991-01-01

    The progression of supercomputing is reviewed from the point of view of computational fluid dynamics (CFD), and multidisciplinary problems impacting the design of advanced aerospace configurations are addressed. The application of full potential and Euler equations to transonic and supersonic problems in the 70s and early 80s is outlined, along with the Navier-Stokes computations that became widespread during the late 80s and early 90s. Multidisciplinary computations currently in progress are discussed, including CFD and aeroelastic coupling for both static and dynamic flexible computations; CFD, aeroelastic, and controls coupling for flutter suppression and active control; and the development of a computational electromagnetics technology based on CFD methods. Attention is given to the computational challenges standing in the way of establishing a computational environment encompassing many technologies. 40 refs

  12. MILC Code Performance on High End CPU and GPU Supercomputer Clusters

    Science.gov (United States)

    DeTar, Carleton; Gottlieb, Steven; Li, Ruizi; Toussaint, Doug

    2018-03-01

    With recent developments in parallel supercomputing architecture, many-core, multi-core, and GPU processors are now commonplace, resulting in more levels of parallelism, memory hierarchy, and programming complexity. It has been necessary to adapt the MILC code to these new processors, starting with NVIDIA GPUs and, more recently, the Intel Xeon Phi processors. We report on our efforts to port and optimize our code for the Intel Knights Landing architecture. We consider performance of the MILC code with MPI and OpenMP, and optimizations with QOPQDP and QPhiX. For the latter approach, we concentrate on the staggered conjugate gradient and the gauge force. We also consider performance on recent NVIDIA GPUs using the QUDA library.

  13. MILC Code Performance on High End CPU and GPU Supercomputer Clusters

    Directory of Open Access Journals (Sweden)

    DeTar Carleton

    2018-01-01

    Full Text Available With recent developments in parallel supercomputing architecture, many-core, multi-core, and GPU processors are now commonplace, resulting in more levels of parallelism, memory hierarchy, and programming complexity. It has been necessary to adapt the MILC code to these new processors, starting with NVIDIA GPUs and, more recently, the Intel Xeon Phi processors. We report on our efforts to port and optimize our code for the Intel Knights Landing architecture. We consider performance of the MILC code with MPI and OpenMP, and optimizations with QOPQDP and QPhiX. For the latter approach, we concentrate on the staggered conjugate gradient and the gauge force. We also consider performance on recent NVIDIA GPUs using the QUDA library.

  14. Solving sparse linear least squares problems on some supercomputers by using large dense blocks

    DEFF Research Database (Denmark)

    Hansen, Per Christian; Ostromsky, T; Sameh, A

    1997-01-01

    Efficient subroutines for dense matrix computations have recently been developed and are available on many high-speed computers. On some computers the speed of many dense matrix operations is near the peak performance. For sparse matrices, storage and operations can be saved by operating on, and storing, only the nonzero elements. However, the price is a great degradation of the speed of computations on supercomputers (due to the use of indirect addresses, the need to insert new nonzeros in the sparse storage scheme, the lack of data locality, etc.). On many high-speed computers a dense matrix technique is therefore preferable to a sparse matrix technique when the matrices are not large, because the high computational speed fully compensates for the disadvantages of using more arithmetic operations and more storage. For very large matrices the computations must be organized as a sequence of tasks in each...

  15. An Optimized Parallel FDTD Topology for Challenging Electromagnetic Simulations on Supercomputers

    Directory of Open Access Journals (Sweden)

    Shugang Jiang

    2015-01-01

    Full Text Available It may not be a challenge to run a Finite-Difference Time-Domain (FDTD) code for electromagnetic simulations on a supercomputer with more than 10 thousand CPU cores; to make the FDTD code work with the highest efficiency, however, is a challenge. In this paper, the performance of parallel FDTD is optimized through the MPI (message passing interface) virtual topology, based on which a communication model is established. General rules for the optimal topology are presented according to the model. The performance of the method is tested and analyzed on three high-performance computing platforms with different architectures in China. Simulations including an airplane with a 700-wavelength wingspan and a complex microstrip antenna array with nearly 2000 elements are performed very efficiently using a maximum of 10240 CPU cores.
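
    The sketch below illustrates the underlying mechanism, an MPI Cartesian virtual topology; the dimensionality and factorization are generic assumptions, not the optimal-topology rules of the paper.

```python
# Minimal sketch of an MPI Cartesian virtual topology for a 3-D domain
# decomposition, the mechanism the paper builds its communication model on.
# MPI.Compute_dims here is a generic factorization, not the paper's
# optimal-topology rules.
from mpi4py import MPI

comm = MPI.COMM_WORLD
dims = MPI.Compute_dims(comm.Get_size(), 3)      # e.g. [4, 4, 2] on 32 ranks
cart = comm.Create_cart(dims, periods=[False] * 3, reorder=True)

coords = cart.Get_coords(cart.Get_rank())
# Halo-exchange partners along each axis (MPI.PROC_NULL at boundaries):
neighbours = [cart.Shift(axis, 1) for axis in range(3)]  # [(src, dest), ...]
```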

  16. Reliability Lessons Learned From GPU Experience With The Titan Supercomputer at Oak Ridge Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Gallarno, George [Christian Brothers University; Rogers, James H [ORNL; Maxwell, Don E [ORNL

    2015-01-01

    The high computational capability of graphics processing units (GPUs) is enabling and driving the scientific discovery process at large scale. The world's second fastest supercomputer for open science, Titan, has more than 18,000 GPUs that computational scientists use to perform scientific simulations and data analysis. Understanding of GPU reliability characteristics, however, is still in its nascent stage, since GPUs have only recently been deployed at large scale. This paper presents a detailed study of GPU errors and their impact on system operations and applications, describing experiences with the 18,688 GPUs on the Titan supercomputer as well as lessons learned in the process of efficient operation of GPUs at scale. These experiences are helpful to HPC sites which already have large-scale GPU clusters or plan to deploy GPUs in the future.

  17. Development of a high performance eigensolver on the peta-scale next generation supercomputer system

    International Nuclear Information System (INIS)

    Imamura, Toshiyuki; Yamada, Susumu; Machida, Masahiko

    2010-01-01

    For present supercomputer systems, multicore and multisocket processors are necessary to build a system, and the choice of interconnection is essential. In addition, for effective development of a new code, high-performance, scalable, and reliable numerical software is one of the key items. ScaLAPACK and PETSc are well-known software packages on distributed memory parallel computer systems. It is needless to say that highly tuned software oriented towards new architectures, such as many-core processors, must be chosen for real computation. In this study, we present a high-performance and highly scalable eigenvalue solver aimed at the next-generation supercomputer system, the so-called 'K computer'. We have developed two versions, the standard version (eigen_s) and the enhanced performance version (eigen_sx), on the T2K cluster system housed at the University of Tokyo. Eigen_s employs the conventional algorithms: Householder tridiagonalization, the divide and conquer (DC) algorithm, and Householder back-transformation. They are carefully implemented with a blocking technique and a flexible two-dimensional data distribution to reduce the overhead of memory traffic and data transfer, respectively. Eigen_s performs excellently on the T2K system with 4096 cores (theoretical peak 37.6 TFLOPS), showing a fine performance of 3.0 TFLOPS on a matrix of dimension two hundred thousand. The enhanced version, eigen_sx, uses more advanced algorithms: the narrow-band reduction algorithm, DC for band matrices, and the block Householder back-transformation with WY-representation. Even though this version is still at a test stage, it shows 4.7 TFLOPS on a matrix of the same dimension, surpassing eigen_s. (author)
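
    For reference, here is an unblocked textbook version of the first stage named above, Householder tridiagonalization; the blocked, two-dimensionally distributed kernels of eigen_s and eigen_sx are considerably more involved.

```python
# Unblocked textbook Householder tridiagonalization, the first stage of the
# conventional dense symmetric eigensolver pipeline named in the abstract.
# A reference sketch only, not the blocked/distributed production kernels.
import numpy as np

def householder_tridiagonalize(A):
    """Reduce a symmetric matrix to tridiagonal form by similarity
    transforms with Householder reflectors (returns a copy)."""
    T = A.astype(float).copy()
    n = T.shape[0]
    for k in range(n - 2):
        x = T[k + 1:, k]
        v = x.copy()
        v[0] += np.copysign(np.linalg.norm(x), x[0])
        nv = np.linalg.norm(v)
        if nv == 0.0:
            continue                      # column already reduced
        v /= nv
        H = np.eye(n - k - 1) - 2.0 * np.outer(v, v)
        T[k + 1:, k:] = H @ T[k + 1:, k:]  # apply reflector from the left
        T[k:, k + 1:] = T[k:, k + 1:] @ H  # and from the right (similarity)
    return T

A = np.random.rand(6, 6); A = (A + A.T) / 2
T = householder_tridiagonalize(A)
# Eigenvalues are preserved: np.linalg.eigvalsh(T) matches np.linalg.eigvalsh(A)
```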

  18. Harnessing Petaflop-Scale Multi-Core Supercomputing for Problems in Space Science

    Science.gov (United States)

    Albright, B. J.; Yin, L.; Bowers, K. J.; Daughton, W.; Bergen, B.; Kwan, T. J.

    2008-12-01

    The particle-in-cell kinetic plasma code VPIC has been migrated successfully to the world's fastest supercomputer, Roadrunner, a hybrid multi-core platform built by IBM for the Los Alamos National Laboratory. How this was achieved will be described, and examples of state-of-the-art calculations in space science, in particular the study of magnetic reconnection, will be presented. With VPIC on Roadrunner, we have performed, for the first time, plasma PIC calculations with over one trillion particles, >100× larger than calculations considered "heroic" by community standards. This allows examination of physics at unprecedented scale and fidelity. Roadrunner is an example of an emerging paradigm in supercomputing: the trend toward multi-core systems with deep hierarchies, where memory bandwidth optimization is vital to achieving high performance. Getting VPIC to perform well on such systems is a formidable challenge: the core algorithm is memory-bandwidth limited with a low compute-to-data ratio and requires random access to memory in its inner loop. That we were able to get VPIC to perform and scale well, achieving >0.374 Pflop/s and linear weak scaling on real physics problems on up to the full 12240-core Roadrunner machine, bodes well for harnessing these machines for our community's needs in the future. Many of the design considerations encountered carry over to other multi-core and accelerated (e.g., via GPU) platforms, and we modified VPIC with flexibility in mind. These will be summarized, and strategies for how one might adapt a code for such platforms will be shared. Work performed under the auspices of the U.S. DOE by the LANS LLC Los Alamos National Laboratory. Dr. Bowers is a LANL Guest Scientist; he is presently at D. E. Shaw Research LLC, 120 W 45th Street, 39th Floor, New York, NY 10036.

  19. Efficient development of memory bounded geo-applications to scale on modern supercomputers

    Science.gov (United States)

    Räss, Ludovic; Omlin, Samuel; Licul, Aleksandar; Podladchikov, Yuri; Herman, Frédéric

    2016-04-01

    Numerical modeling is a key tool in the geosciences. The current challenge is to solve problems that are multi-physics and for which the length scale and the place of occurrence might not be known in advance. Also, the spatial extent of the investigated domain may vary strongly in size, ranging from millimeters for reactive transport to kilometers for glacier erosion dynamics. An efficient way to proceed is to develop simple but robust algorithms that perform well and scale on modern supercomputers, and therefore permit very high-resolution simulations. We propose an efficient approach to solve memory-bounded real-world applications on modern supercomputer architectures. We optimize the software to run on our newly acquired state-of-the-art GPU cluster "octopus". Our approach shows promising preliminary results on important geodynamical and geomechanical problems: we have developed a Stokes solver for glacier flow and a poromechanical solver including complex rheologies for nonlinear waves in stressed porous rocks. We solve the system of partial differential equations on a regular Cartesian grid and use an iterative finite difference scheme with preconditioning of the residuals. The MPI communication happens only locally (point-to-point); this method is known to scale linearly by construction. The "octopus" GPU cluster, which we use for the computations, has been designed to achieve maximal data transfer throughput at minimal hardware cost. It is composed of twenty compute nodes, each hosting four Nvidia Titan X GPU accelerators. These high-density nodes are interconnected with a parallel (dual-rail) FDR InfiniBand network. Our efforts show promising preliminary results for the different physics investigated: the glacier flow solver achieves good accuracy in the relevant benchmarks, and the coupled poromechanical solver permits explaining previously unresolvable focused fluid flow as a natural outcome of the porosity setup. In both cases
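
    A minimal sketch of the algorithmic pattern described above, assuming a toy 2-D Poisson problem in place of the Stokes or poromechanical equations:

```python
# Minimal sketch of the iterative scheme described above, on a toy 2-D
# Poisson problem: finite differences on a regular Cartesian grid with a
# damped (pseudo-transient) residual update standing in for the residual
# preconditioning. All parameters are illustrative.
import numpy as np

nx = ny = 128
h = 1.0 / nx
u = np.zeros((ny, nx))           # unknown field, zero Dirichlet boundaries
f = np.ones((ny, nx))            # source term
damp = 0.9                       # damping ("memory") of the residual update
dtau = h ** 2 / 4.1              # pseudo-time step within the stability bound
dudtau = np.zeros_like(u)

for it in range(100_000):
    r = np.zeros_like(u)
    r[1:-1, 1:-1] = ((u[1:-1, 2:] - 2 * u[1:-1, 1:-1] + u[1:-1, :-2]) / h**2 +
                     (u[2:, 1:-1] - 2 * u[1:-1, 1:-1] + u[:-2, 1:-1]) / h**2 +
                     f[1:-1, 1:-1])
    dudtau = damp * dudtau + r   # damped residual accumulation
    u += dtau * dudtau
    if np.abs(r).max() < 1e-6:
        break
```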

  20. TRAC User's Guide

    International Nuclear Information System (INIS)

    Boyack, B.E.; Stumpf, H.; Lime, J.F.

    1985-11-01

    This guide has been prepared to assist users in applying the Transient Reactor Analysis Code (TRAC). TRAC is an advanced best-estimate systems code for analyzing transients in thermal-hydraulic systems. The code is very general. Because it is general, efforts to model specific nuclear power plants or experimental facilities often present a challenge to the TRAC user. This guide has been written to assist first-time or intermediate users. It is specifically written for the TRAC version designated TRAC-PF1/MOD1. The TRAC User's Guide should be considered a companion document to the TRAC Code Manual; the user will need both documents to use TRAC effectively. 18 refs., 45 figs., 19 tabs

  1. De Novo Ultrascale Atomistic Simulations On High-End Parallel Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Nakano, A; Kalia, R K; Nomura, K; Sharma, A; Vashishta, P; Shimojo, F; van Duin, A; Goddard, III, W A; Biswas, R; Srivastava, D; Yang, L H

    2006-09-04

    We present a de novo hierarchical simulation framework for first-principles based predictive simulations of materials and their validation on high-end parallel supercomputers and geographically distributed clusters. In this framework, high-end chemically reactive and non-reactive molecular dynamics (MD) simulations explore a wide solution space to discover microscopic mechanisms that govern macroscopic material properties, into which highly accurate quantum mechanical (QM) simulations are embedded to validate the discovered mechanisms and quantify the uncertainty of the solution. The framework includes an embedded divide-and-conquer (EDC) algorithmic framework for the design of linear-scaling simulation algorithms with minimal bandwidth complexity and tight error control. The EDC framework also enables adaptive hierarchical simulation with automated model transitioning assisted by graph-based event tracking. A tunable hierarchical cellular decomposition parallelization framework then maps the O(N) EDC algorithms onto Petaflops computers, while achieving performance tunability through a hierarchy of parameterized cell data/computation structures, as well as its implementation using hybrid Grid remote procedure call + message passing + threads programming. High-end computing platforms such as IBM BlueGene/L, SGI Altix 3000 and the NSF TeraGrid provide excellent test grounds for the framework. On these platforms, we have achieved unprecedented scales of quantum-mechanically accurate and well validated, chemically reactive atomistic simulations--1.06 billion-atom fast reactive force-field MD and 11.8 million-atom (1.04 trillion grid points) quantum-mechanical MD in the framework of the EDC density functional theory on adaptive multigrids--in addition to 134 billion-atom non-reactive space-time multiresolution MD, with the parallel efficiency as high as 0.998 on 65,536 dual-processor BlueGene/L nodes. We have also achieved an automated execution of hierarchical QM

  2. Understanding users

    DEFF Research Database (Denmark)

    Johannsen, Carl Gustav Viggo

    2014-01-01

    Segmentation of users can help libraries in the process of understanding user similarities and differences. Segmentation can also form the basis for selecting segments of target users and for developing tailored services for specific target segments. Several approaches and techniques have been tested in library contexts, and the aim of this article is to identify the main approaches and to discuss their perspectives, including their strengths and weaknesses in, especially, public library contexts. The purpose is also to present and discuss the results of a recent (2014) Danish library user segmentation project using computer-generated clusters. Compared to traditional marketing texts, this article also tries to identify user segments or images or metaphors created by the library profession itself.

  3. The BlueGene/L Supercomputer and Quantum ChromoDynamics

    International Nuclear Information System (INIS)

    Vranas, P; Soltz, R

    2006-01-01

    In summary our update contains: (1) Perfect speedup sustaining 19.3% of peak for the Wilson D-slash Dirac operator. (2) Measurements of the full Conjugate Gradient (CG) inverter that inverts the Dirac operator. The CG inverter contains two global sums over the entire machine; nevertheless, our measurements retain perfect speedup scaling, demonstrating the robustness of our methods. (3) We ran on the largest BG/L system, the LLNL 64-rack BG/L supercomputer, and obtained a sustained speed of 59.1 TFlops. Furthermore, the speedup scaling of the Dirac operator and of the CG inverter is perfect all the way up to the full size of the machine, 131,072 cores (please see Figure II). The local lattice is rather small (4 x 4 x 4 x 16), while the total lattice is of a size that has long been a lattice QCD vision for thermodynamic studies (a total of 128 x 128 x 256 x 32 lattice sites). This speed is about five times the speed we quoted in our submission. As we have pointed out in our paper, QCD is notoriously sensitive to network and memory latencies, has a relatively high communication-to-computation ratio which cannot be overlapped in BG/L in virtual node mode, and as an application is in a class of its own. The above results are thrilling to us and fulfill a 30-year-long dream for lattice QCD

  4. Parallel supercomputing: Advanced methods, algorithms, and software for large-scale linear and nonlinear problems

    Energy Technology Data Exchange (ETDEWEB)

    Carey, G.F.; Young, D.M.

    1993-12-31

    The program outlined here is directed to research on methods, algorithms, and software for distributed parallel supercomputers. Of particular interest are finite element methods and finite difference methods together with sparse iterative solution schemes for scientific and engineering computations of very large-scale systems. Both linear and nonlinear problems will be investigated. In the nonlinear case, applications with bifurcation to multiple solutions will be considered using continuation strategies. The parallelizable numerical methods of particular interest are a family of partitioning schemes embracing domain decomposition, element-by-element strategies, and multi-level techniques. The methods will be further developed incorporating parallel iterative solution algorithms with associated preconditioners in parallel computer software. The schemes will be implemented on distributed memory parallel architectures such as the CRAY MPP, Intel Paragon, the NCUBE3, and the Connection Machine. We will also consider other new architectures such as the Kendall Square (KSR) and proposed machines such as the TERA. The applications will focus on large-scale three-dimensional nonlinear flow and reservoir problems with strong convective transport contributions. These are legitimate grand challenge class computational fluid dynamics (CFD) problems of significant practical interest to DOE. The methods developed and algorithms will, however, be of wider interest.
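
    As a small illustration of one ingredient named above, a Krylov solver with a preconditioner, the sketch below applies a simple Jacobi preconditioner; the proposal's domain-decomposition and element-by-element preconditioners are far richer.

```python
# A minimal sketch, assuming a toy 1-D Laplacian: conjugate gradients with
# a Jacobi (diagonal) preconditioner. This stands in for the domain
# decomposition / element-by-element preconditioners named in the proposal.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg, LinearOperator

n = 1000
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format='csr')
b = np.ones(n)

Dinv = 1.0 / A.diagonal()                              # Jacobi: apply D^-1
M = LinearOperator((n, n), matvec=lambda r: Dinv * r)

x, info = cg(A, b, M=M)                                # info == 0 on convergence
```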

  5. A Parallel Supercomputer Implementation of a Biological Inspired Neural Network and its use for Pattern Recognition

    International Nuclear Information System (INIS)

    De Ladurantaye, Vincent; Lavoie, Jean; Bergeron, Jocelyn; Parenteau, Maxime; Lu Huizhong; Pichevar, Ramin; Rouat, Jean

    2012-01-01

    A parallel implementation of a large spiking neural network is proposed and evaluated. The neural network implements the binding-by-synchrony process using the Oscillatory Dynamic Link Matcher (ODLM). Scalability, speed and performance are compared for two implementations: Message Passing Interface (MPI) and Compute Unified Device Architecture (CUDA), running on clusters of multicore supercomputers and on NVIDIA graphical processing units, respectively. A global spiking list that represents the state of the neural network at each instant is described. This list indexes each neuron that fires during the current simulation time so that the influence of their spikes is processed simultaneously on all computing units. Our implementation shows good scalability for very large networks. A complex and large spiking neural network has been implemented in parallel with success, thus paving the road towards real-life applications based on networks of spiking neurons. MPI offers better scalability than CUDA, while the CUDA implementation on a GeForce GTX 285 gives the best cost-to-performance ratio. When running the neural network on the GTX 285, the processing speed is comparable to the MPI implementation on RQCHP's Mammouth cluster with 64 nodes (128 cores).

  6. Modeling radiative transport in ICF plasmas on an IBM SP2 supercomputer

    International Nuclear Information System (INIS)

    Johansen, J.A.; MacFarlane, J.J.; Moses, G.A.

    1995-01-01

    At the University of Wisconsin-Madison the authors have integrated a collisional-radiative-equilibrium model into their CONRAD radiation-hydrodynamics code. This integrated package allows them to accurately simulate the transport processes involved in ICF plasmas, including the important effects of self-absorption of line radiation. However, as they increase the amount of atomic structure utilized in their transport models, the computational demands increase nonlinearly. In an attempt to meet this increased computational demand, they have recently embarked on an effort to parallelize the CONRAD program. The parallel CONRAD development is being performed on an IBM SP2 supercomputer. The parallelism is based on a message-passing paradigm and is being implemented using PVM. At the present time they have determined that approximately 70% of the sequential program can be executed in parallel. Accordingly, they expect that the parallel version will yield a speedup on the order of three times that of the sequential version. This translates into only 10 hours of execution time for the parallel version, whereas the sequential version required 30 hours
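
    The quoted figures are consistent with Amdahl's law; the following back-of-envelope check is an illustration, not part of the paper:

```python
# Back-of-envelope check of the quoted figures via Amdahl's law (an
# illustration, not from the paper): with fraction p of the work
# parallelizable, speedup on n processors is 1 / ((1 - p) + p / n).
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

p = 0.70                          # fraction reported parallelizable
print(amdahl_speedup(p, 16))      # ~2.9x on 16 processors
print(1.0 / (1.0 - p))            # asymptotic limit ~3.3x, consistent with the
                                  # "order of three" (30 h -> 10 h) estimate
```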

  7. Portable implementation model for CFD simulations. Application to hybrid CPU/GPU supercomputers

    Science.gov (United States)

    Oyarzun, Guillermo; Borrell, Ricard; Gorobets, Andrey; Oliva, Assensi

    2017-10-01

    Nowadays, high performance computing (HPC) systems are experiencing a disruptive moment, with a variety of novel architectures and frameworks and no clarity as to which one is going to prevail. In this context, the portability of codes across different architectures is of major importance. This paper presents a portable implementation model based on an algebraic operational approach for direct numerical simulation (DNS) and large eddy simulation (LES) of incompressible turbulent flows using unstructured hybrid meshes. The proposed strategy consists of representing the whole time-integration algorithm using only three basic algebraic operations: the sparse matrix-vector product (SpMV), the linear combination of vectors, and the dot product. The main idea is to decompose the nonlinear operators into a concatenation of two SpMV operations. This provides high modularity and portability. An exhaustive analysis of the proposed implementation for hybrid CPU/GPU supercomputers has been conducted, with tests using up to 128 GPUs. The main objective is to understand the challenges of implementing CFD codes on new architectures.
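
    A minimal sketch of the operational approach described above: one explicit time-integration loop written using only the three named kernels, with a toy operator and step size standing in for the DNS/LES operators.

```python
# Minimal sketch of the algebraic operational approach: an explicit
# time-integration step expressed with only the three kernels the paper
# names (SpMV, linear combination of vectors, dot product). The operator
# and step size are toy stand-ins, not the DNS/LES operators.
import numpy as np
import scipy.sparse as sp

n = 10_000
L = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n), format='csr')
u = np.random.rand(n)
dt = 0.25                        # stable for this diffusion-like stencil

for step in range(100):
    Au = L @ u                   # kernel 1: sparse matrix-vector product
    u = u + dt * Au              # kernel 2: linear combination (axpy)
    nrm2 = u @ u                 # kernel 3: dot product (e.g., monitoring)
```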

  8. Assessment techniques for a learning-centered curriculum: evaluation design for adventures in supercomputing

    Energy Technology Data Exchange (ETDEWEB)

    Helland, B. [Ames Lab., IA (United States); Summers, B.G. [Oak Ridge National Lab., TN (United States)

    1996-09-01

    As the classroom paradigm shifts from being teacher-centered to being learner-centered, student assessments are evolving from typical paper and pencil testing to other methods of evaluation. Students should be probed for understanding, reasoning, and critical thinking abilities rather than their ability to return memorized facts. The assessment of the Department of Energy's pilot program, Adventures in Supercomputing (AiS), offers one example of assessment techniques developed for learner-centered curricula. This assessment has employed a variety of methods to collect student data. Methods of assessment used were traditional testing, performance testing, interviews, short questionnaires via email, and student presentations of projects. The data obtained from these sources have been analyzed by a professional assessment team at the Center for Children and Technology. The results have been used to improve the AiS curriculum and establish the quality of the overall AiS program. This paper will discuss the various methods of assessment used and the results.

  9. Visualization at supercomputing centers: the tale of little big iron and the three skinny guys.

    Science.gov (United States)

    Bethel, E W; van Rosendale, J; Southard, D; Gaither, K; Childs, H; Brugger, E; Ahern, S

    2011-01-01

    Supercomputing centers are unique resources that aim to enable scientific knowledge discovery by employing large computational resources: the "Big Iron." Design, acquisition, installation, and management of the Big Iron are carefully planned and monitored. Because these Big Iron systems produce a tsunami of data, it's natural to colocate the visualization and analysis infrastructure. This infrastructure consists of hardware (Little Iron) and staff (Skinny Guys). Our collective experience suggests that design, acquisition, installation, and management of the Little Iron and Skinny Guys don't receive the same level of treatment as the Big Iron. This article explores the following questions about the Little Iron: How should we size the Little Iron to adequately support visualization and analysis of data coming off the Big Iron? What sort of capabilities must it have? Related questions concern the size of the visualization support staff: How big should a visualization program be; that is, how many Skinny Guys should it have? What should the staff do? How much of the visualization should be provided as a support service, and how much should applications scientists be expected to do on their own?

  10. PFLOTRAN: Reactive Flow & Transport Code for Use on Laptops to Leadership-Class Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Hammond, Glenn E.; Lichtner, Peter C.; Lu, Chuan; Mills, Richard T.

    2012-04-18

    PFLOTRAN, a next-generation reactive flow and transport code for modeling subsurface processes, has been designed from the ground up to run efficiently on machines ranging from leadership-class supercomputers to laptops. Based on an object-oriented design, the code is easily extensible to incorporate additional processes. It can interface seamlessly with Fortran 9X, C and C++ codes. Domain decomposition parallelism is employed, with the PETSc parallel framework used to manage parallel solvers, data structures and communication. Features of the code include a modular input file, implementation of high-performance I/O using parallel HDF5, ability to perform multiple realization simulations with multiple processors per realization in a seamless manner, and multiple modes for multiphase flow and multicomponent geochemical transport. Chemical reactions currently implemented in the code include homogeneous aqueous complexing reactions and heterogeneous mineral precipitation/dissolution, ion exchange, surface complexation and a multirate kinetic sorption model. PFLOTRAN has demonstrated petascale performance using 2^17 processor cores with over 2 billion degrees of freedom. Accomplishments achieved to date include applications to the Hanford 300 Area and modeling CO2 sequestration in deep geologic formations.

  11. Performance of an opportunistic multi-user cognitive network with multiple primary users

    KAUST Repository

    Khan, Fahd Ahmed; Tourki, Kamel; Alouini, Mohamed-Slim; Qaraqe, Khalid A.

    2014-01-01

    for transmission. This opportunistic selection depends on the transmission channel power gain and the interference channel power gain, as well as the power allocation policy adopted at the users. Exact closed-form expressions for the moment-generating function

  12. User Environment Tracking and Problem Detection with XALT

    Energy Technology Data Exchange (ETDEWEB)

    Agrawal, Kapil [ORNL; Fahey, Mark R [ORNL; McLay, Robert [Texas Advanced Computing Center; James, Doug [Texas Advanced Computing Center

    2014-01-01

    This work improves our understanding of individual users' software needs, then leverages that understanding to help stakeholders conduct business in a more efficient, effective, and systematic way. The product, XALT, builds on work that is already improving the user experience and enhancing support programs for thousands of users on twelve supercomputers across the United States and Europe. XALT will instrument individual jobs on high-end computers to generate a picture of the compilers, libraries, and other software that users need to run their jobs successfully. It will highlight the products our researchers need and do not need, and alert users and support staff to the root causes of software configuration issues as soon as the problems occur. A key objective of this work is generating the information needed to improve efficiency and effectiveness for an extensive community of stakeholders including users, sponsoring institutions, support organizations, and development teams. Efficiency, effectiveness, and responsible stewardship each require a clear picture of users' needs. XALT is an important step in the quest to achieve that clarity.

  13. User 2020

    DEFF Research Database (Denmark)

    Porras, Jari; Heikkinen, Kari; Kinnula, Marianne

    2014-01-01

    an effect on their future needs. Human needs have been studied much longer than user generations per se. Psychologist Maslow presented a characterization of human needs as early as 1943. This basic characterization was later studied with an evolving environment in mind. Although the basic needs have...

  14. Wien Automatic System Package (WASP). A computer code for power generating system expansion planning. Version WASP-III Plus. User's manual. Volume 2: Appendices

    International Nuclear Information System (INIS)

    1995-01-01

    With several Member States, the IAEA has completed a new version of the WASP program, called WASP-III Plus since it follows quite closely the methodology of the WASP-III model. The major enhancements in WASP-III Plus with respect to WASP-III are: an increase in the number of thermal fuel types (from 5 to 10); verification of which configurations generated by CONGEN have already been simulated in previous iterations with MERSIM; direct calculation of the combined Loading Order of FIXSYS and VARSYS plants; simulation of system operation that takes into account physical constraints imposed on some fuel types (i.e., fuel availability for electricity generation); extended output of the resimulation of the optimal solution; generation of a file that can be used for graphical representation of the results of the resimulation of the optimal solution and of the cash flows of investment costs; calculation of cash flows that allows inclusion of the capital costs of plants firmly committed or in construction (FIXSYS plants); and user control of the distribution of capital cost expenditures during the construction period (if required to be different from the general 'S' curve distribution used as default). This second volume of the documentation supporting the use of the WASP-III Plus computer code consists of five appendices giving additional information about the program. Appendix A is mainly addressed to the WASP-III Plus system analyst and supplies information which could help in the implementation of the program on the user's computer facilities; this appendix also covers some aspects of WASP-III Plus that could not be treated in detail in Chapters 1 to 11. Appendix B identifies all error and warning messages that may appear in the WASP printouts and advises the user how to overcome the problem. Appendix C presents the flow charts of the programs along with a brief description of the objectives and structure of each module. Appendix D describes the

  15. Simulation of x-rays in refractive structure by the Monte Carlo method using the supercomputer SKIF

    International Nuclear Information System (INIS)

    Yaskevich, Yu.R.; Kravchenko, O.I.; Soroka, I.I.; Chembrovskij, A.G.; Kolesnik, A.S.; Serikova, N.V.; Petrov, P.V.; Kol'chevskij, N.N.

    2013-01-01

    The software 'Xray-SKIF' for simulating X-rays in refractive structures by the Monte Carlo method on the supercomputer SKIF BSU has been developed. The program generates a large number of rays propagating from a source to the refractive structure. Ray trajectories are calculated under the assumption of geometrical optics, and absorption is computed for each ray inside the refractive structure. Dynamic arrays are used to store the calculated ray parameters, which allows the X-ray field distribution to be restored very quickly for different detector positions. It was found that increasing the number of processors leads to a proportional decrease in calculation time: simulating 10^8 X-rays on the supercomputer takes 3 hours on 1 processor and 6 minutes on 30 processors. 10^9 X-rays were calculated with 'Xray-SKIF', which allows the X-ray field behind the refractive structure to be reconstructed with a spatial resolution of 1 micron. (authors)

  16. A visualization environment for supercomputing-based applications in computational mechanics

    Energy Technology Data Exchange (ETDEWEB)

    Pavlakos, C.J.; Schoof, L.A.; Mareda, J.F.

    1993-06-01

    In this paper, we characterize a visualization environment that has been designed and prototyped for a large community of scientists and engineers, with an emphasis on supercomputing-based computational mechanics. The proposed environment makes use of a visualization server concept to provide effective, interactive visualization to the user's desktop. Benefits of using the visualization server approach are discussed. Some thoughts regarding desirable features for visualization server hardware architectures are also addressed. A brief discussion of the software environment is included. The paper concludes by summarizing certain observations which we have made regarding the implementation of such visualization environments.

  17. Performance of an opportunistic multi-user cognitive network with multiple primary users

    KAUST Repository

    Khan, Fahd Ahmed

    2014-04-01

    Consider a multi-user underlay cognitive network where multiple cognitive users, having limited peak transmit power, concurrently share the spectrum with a primary network with multiple users. The channels of the secondary network are assumed to have independent but not identical Nakagami-m fading. The interference channels between the secondary users and the primary users are assumed to have Rayleigh fading. The uplink scenario is considered, where a single secondary user is selected for transmission. This opportunistic selection depends on the transmission channel power gain and the interference channel power gain, as well as the power allocation policy adopted at the users. Exact closed-form expressions for the moment-generating function, the outage performance and the symbol-error-rate performance are derived. The outage performance is also studied in the asymptotic regimes, and the generalized diversity gain of this scheduling scheme is derived. Numerical results corroborate the derived analytical results.
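
    A Monte Carlo sketch of the selection rule analysed above, under assumed parameters rather than the paper's analytical derivation:

```python
# Monte Carlo sketch of the opportunistic scheduling rule, under assumed
# parameters (user count, m values, power limits and threshold are all
# illustrative). Nakagami-m power gains are gamma distributed; Rayleigh
# interference power gains are exponential.
import numpy as np

rng = np.random.default_rng(0)
K, trials = 4, 100_000                   # secondary users, channel realizations
m = np.array([1.0, 1.5, 2.0, 2.5])       # per-user Nakagami-m (not identical)
P_peak, Q_int, snr_th = 1.0, 0.5, 1.0    # peak power, interference cap, threshold

g = rng.gamma(shape=m, scale=1.0 / m, size=(trials, K))   # |h|^2, unit mean
gi = rng.exponential(1.0, size=(trials, K))               # interference gains
p = np.minimum(P_peak, Q_int / gi)       # power policy: obey both limits
best = (p * g).max(axis=1)               # opportunistic selection of one user
print("outage probability:", np.mean(best < snr_th))
```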

  18. Coherent 40 Gb/s SP-16QAM and 80 Gb/s PDM-16QAM in an Optimal Supercomputer Optical Switch Fabric

    DEFF Research Database (Denmark)

    Karinou, Fotini; Borkowski, Robert; Zibar, Darko

    2013-01-01

    We demonstrate, for the first time, the feasibility of using 40 Gb/s SP-16QAM and 80 Gb/s PDM-16QAM in an optimized cell switching supercomputer optical interconnect architecture based on semiconductor optical amplifiers as ON/OFF gates.

  19. Powering physics data transfers with FDT

    International Nuclear Information System (INIS)

    Maxa, Zdenek; Kcira, Dorian; Legrand, Iosif; Mughal, Azher; Thomas, Michael; Voicu, Ramiro; Ahmed, Badar

    2011-01-01

    We present a data transfer system for the grid environment built on top of the open source FDT tool (Fast Data Transfer) developed by Caltech in collaboration with the National University of Science and Technology (Pakistan). The enhancement layer above FDT consists of a client program, fdtcp (FDT copy), and a fdtd service (FDT daemon). This pair of components allows for GSI-authenticated data transfers and offers the user (or a data movement production service) an interface analogous to grid middleware data transfer services such as SRM (i.e., srmcp) or GridFTP (i.e., globus-url-copy). fdtcp/fdtd enables third-party, batched file transfers. An important aspect is monitoring by means of the MonALISA active monitoring lightweight library ApMon, providing real-time monitoring and arrival time estimates as well as a powerful troubleshooting mechanism. The actual transfer is carried out by the FDT application, an efficient application capable of reading and writing at disk speed over wide area networks. FDT's excellent performance was demonstrated, e.g., during the SuperComputing 2009 Bandwidth Challenge. We also discuss the storage technology interface layer, specifically focusing on the open source Hadoop distributed file system (HDFS), presenting the recently developed FDT-HDFS sequential write adapter. The integration with CMS PhEDEx is described as well. The PhEDEx project (Physics Experiment Data Export) is responsible for facilitating large-scale CMS data transfers across the grid. Ongoing and future development involves interfacing with next generation network services developed by the OGF NSI-WG, GLIF and DICE groups, allowing for network resource reservation and scheduling.

  20. Charliecloud: Unprivileged containers for user-defined software stacks in HPC

    Energy Technology Data Exchange (ETDEWEB)

    Priedhorsky, Reid [Los Alamos National Laboratory; Randles, Timothy C. [Los Alamos National Laboratory

    2016-08-09

    Supercomputing centers are seeing increasing demand for user-defined software stacks (UDSS), instead of or in addition to the stack provided by the center. These UDSS support user needs such as complex dependencies or build requirements, externally required configurations, portability, and consistency. The challenge for centers is to provide these services in a usable manner while minimizing the risks: security, support burden, missing functionality, and performance. We present Charliecloud, which uses the Linux user and mount namespaces to run industry-standard Docker containers with no privileged operations or daemons on center resources. Our simple approach avoids most security risks while maintaining access to the performance and functionality already on offer, doing so in less than 500 lines of code. Charliecloud promises to bring an industry-standard UDSS user workflow to existing, minimally altered HPC resources.

  1. Long-term research plan for human factors affecting safeguards at nuclear power plants. Volume 1. Summary and users' guide. Vol. 1

    International Nuclear Information System (INIS)

    O'Brien, J.N.; Fainberg, A.

    1984-04-01

    This report presents a long-term research plan for addressing human factors which can adversely affect safeguards at nuclear power plants. It was developed in order to prioritize and propose research for NRC in regulating power plant safeguards. Research efforts addressing human factors in safeguards were developed and prioritized according to the importance of human factors areas. Research was also grouped to take advantage of common research approaches and data sources where appropriate. Four main program elements emerged from the analysis, namely (1) Training and Performance Evaluation, (2) Organizational Factors, (3) Man-Machine Interface, and (4) Trustworthiness and Reliability. Within each program element, projects are proposed with results and information flowing between program elements where useful. An overall research plan was developed for a 4-year period and it would lead ultimately to regulatory activities including rulemaking, regulatory guides, and technical bases for regulatory action. The entire plan is summarized in Volume 1 of this report

  2. Power Electronics

    DEFF Research Database (Denmark)

    Iov, Florin; Ciobotaru, Mihai; Blaabjerg, Frede

    2008-01-01

    is to change the electrical power production sources from the conventional, fossil (and short-term) based energy sources to renewable energy resources. The other is to use highly efficient power electronics in power generation, power transmission/distribution and end-user applications. This paper discusses the most... emerging renewable energy source, wind energy, which by means of power electronics is changing from being a minor energy source to acting as an important power source in the energy system. Power electronics is the enabling technology, and the presentation will cover the development in wind turbine technology from kW to MW, and discuss which power electronic solutions are most feasible and used today...

  3. Homogeneous grouping of residential users of electric power in accordance with the variables that affect the consumption; Agrupamientos homogeneos de usuarios residenciales de energia electrica en funcion de las variables que impactan el consumo

    Energy Technology Data Exchange (ETDEWEB)

    Campero Littlewood, E.; Romero Cortes, J. [Departamento de Energia, Universidad Autonoma Metropolitana - Unidad Azcapotzalco, Mexico, D. F. (Mexico)

    1997-12-31

    This paper presents the results of the correlation analysis of the monthly consumption of electric power and the capacities in watts of the electric household appliances and domestic lighting, performed on a sample of users of the residential tariff. To carry out this task, the information obtained in the answers to the inquiry applied to a group of dwellings was used (the results of the inquiry are presented in another paper of this Seminar). The correlation variables were obtained from the nominal capacities or through actual measurements of the energy consumption of electric household appliances similar to the ones found in the visited homes. At the end of this paper the result of applying the cluster analysis technique to obtain homogeneous groups of users is presented, so as to be in a position to estimate the shape of the hourly demand curve by means of recording the demand (watts) of a small sample of users.

  4. Homogeneous grouping of residential users of electric power in accordance with the variables that affect the consumption; Agrupamientos homogeneos de usuarios residenciales de energia electrica en funcion de las variables que impactan el consumo

    Energy Technology Data Exchange (ETDEWEB)

    Campero Littlewood, E; Romero Cortes, J [Departamento de Energia, Universidad Autonoma Metropolitana - Unidad Azcapotzalco, Mexico, D. F. (Mexico)

    1998-12-31

    This paper presents the results of the correlation analysis of the monthly consumption of electric power and the capacities in watts of the electric household appliances and domestic lighting, performed on a sample of users of the residential tariff. To carry out this task, the information obtained in the answers to the inquiry applied to a group of dwellings was used (the results of the inquiry are presented in another paper of this Seminar). The correlation variables were obtained from the nominal capacities or through actual measurements of the energy consumption of electric household appliances similar to the ones found in the visited homes. At the end of this paper the result of applying the cluster analysis technique to obtain homogeneous groups of users is presented, so as to be in a position to estimate the shape of the hourly demand curve by means of recording the demand (watts) of a small sample of users.

  5. Car2x with software defined networks, network functions virtualization and supercomputers technical and scientific preparations for the Amsterdam Arena telecoms fieldlab

    NARCIS (Netherlands)

    Meijer R.J.; Cushing R.; De Laat C.; Jackson P.; Klous S.; Koning R.; Makkes M.X.; Meerwijk A.

    2015-01-01

    In the invited talk 'Car2x with SDN, NFV and supercomputers' we report on how our past work with SDN [1, 2] allows the design of a smart mobility fieldlab in the huge parking lot of the Amsterdam Arena. We explain how we can engineer and test software that handles the complex conditions of the Car2X

  6. XTV user's guide

    International Nuclear Information System (INIS)

    Dearing, J.F.; Johns, R.C.

    1996-09-01

    XTV is an X-Windows based Graphical User Interface for viewing results of Transient Reactor Analysis Code (TRAC) calculations. It provides static and animated color-mapped visualizations of both thermal-hydraulic and heat conduction components in a TRAC model of a nuclear power plant, as well as both on-screen and hard copy two-dimensional plot capabilities. XTV is the successor to TRAP, the former TRAC postprocessor using the proprietary DISSPLA graphics library. This manual describes Version 2.0, which requires TRAC version 5.4.20 or later for full visualization capabilities

  7. A criticality safety analysis code using a vectorized Monte Carlo method on the HITAC S-810 supercomputer

    International Nuclear Information System (INIS)

    Morimoto, Y.; Maruyama, H.

    1987-01-01

    A vectorized Monte Carlo criticality safety analysis code has been developed on the vector supercomputer HITAC S-810. In this code, a multi-particle tracking algorithm was adopted for effective utilization of the vector processor. A flight analysis with pseudo-scattering was developed to reduce the computational time needed for the flight analysis, which represents the bulk of the computation. This new algorithm realized a speed-up by a factor of 1.5 over the conventional flight analysis. The code also adopted a multigroup cross-section constants library of the Bondarenko type with 190 groups, 132 for the fast and epithermal regions and 58 for the thermal region. Evaluation work showed that this code reproduces the experimental results to an accuracy of about 1% for the effective neutron multiplication factor. (author)
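
    A minimal sketch of the multi-particle tracking idea, under assumptions stated in the comments (the pseudo-scattering flight analysis of the paper is not reproduced):

```python
# Minimal sketch of multi-particle (event-based) tracking: a whole bank of
# neutrons is advanced together so each step is a vector operation, the
# pattern that keeps a vector pipeline busy. One-group infinite medium with
# absorption and isotropic scattering only; cross-sections are illustrative.
import numpy as np

rng = np.random.default_rng(1)
sigma_t, sigma_a = 1.0, 0.4          # total / absorption macroscopic XS (1/cm)
n = 100_000
x = np.zeros(n)                      # positions along one axis
mu = rng.uniform(-1.0, 1.0, n)       # direction cosines
alive = np.ones(n, dtype=bool)

while alive.any():
    idx = np.flatnonzero(alive)
    d = -np.log(rng.random(idx.size)) / sigma_t   # vectorized flight distances
    x[idx] += d * mu[idx]
    absorbed = rng.random(idx.size) < sigma_a / sigma_t
    alive[idx[absorbed]] = False                  # absorbed histories terminate
    scat = idx[~absorbed]
    mu[scat] = rng.uniform(-1.0, 1.0, scat.size)  # isotropic re-scatter
```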

  8. Using the LANSCE irradiation facility to predict the number of fatal soft errors in one of the world's fastest supercomputers

    International Nuclear Information System (INIS)

    Michalak, S.E.; Harris, K.W.; Hengartner, N.W.; Takala, B.E.; Wender, S.A.

    2005-01-01

    Los Alamos National Laboratory (LANL) is home to the Los Alamos Neutron Science Center (LANSCE). LANSCE is a unique facility because its neutron spectrum closely mimics the neutron spectrum at terrestrial and aircraft altitudes, but is many times more intense. Thus, LANSCE provides an ideal setting for accelerated testing of semiconductor and other devices that are susceptible to cosmic ray induced neutrons. Many industrial companies use LANSCE to estimate device susceptibility to cosmic ray induced neutrons, and it has also been used to test parts from one of LANL's supercomputers, the ASC (Advanced Simulation and Computing Program) Q. This paper discusses our use of the LANSCE facility to study components in Q including a comparison with failure data from Q

  9. Evaluating the networking characteristics of the Cray XC-40 Intel Knights Landing-based Cori supercomputer at NERSC

    Energy Technology Data Exchange (ETDEWEB)

    Doerfler, Douglas [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Austin, Brian [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Cook, Brandon [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Deslippe, Jack [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Kandalla, Krishna [Cray Inc, Bloomington, MN (United States); Mendygral, Peter [Cray Inc, Bloomington, MN (United States)

    2017-09-12

There are many potential issues associated with deploying the Intel Xeon Phi™ (code named Knights Landing [KNL]) manycore processor in a large-scale supercomputer. One in particular is the ability to fully utilize the high-speed communications network, given that the serial performance of a Xeon Phi™ core is a fraction of a Xeon® core. In this paper, we take a look at the trade-offs associated with allocating enough cores to fully utilize the Aries high-speed network versus cores dedicated to computation, e.g., the trade-off between MPI and OpenMP. In addition, we evaluate new features of Cray MPI in support of KNL, such as internode optimizations. We also evaluate one-sided programming models such as Unified Parallel C. We quantify the impact of the above trade-offs and features using a suite of National Energy Research Scientific Computing Center applications.

  10. The Erasmus Computing Grid - Building a Super-Computer for FREE

    NARCIS (Netherlands)

    T.A. Knoch (Tobias); L.V. de Zeeuw (Luc)

    2007-01-01

Today advances in scientific research as well as clinical diagnostics and treatment are inevitably connected with information solutions concerning computation power and information storage. The needs for information technology are enormous and are in many cases the limiting

  11. Novel Supercomputing Approaches for High Performance Linear Algebra Using FPGAs, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — We propose to develop novel FPGA-based algorithmic technology that will enable unprecedented computational power for the solution of large sparse linear equation...

  12. User's guide for MIRVAL: a computer code for comparing designs of heliostat-receiver optics for central receiver solar power plants

    Energy Technology Data Exchange (ETDEWEB)

    Leary, P L; Hankins, J D

    1979-02-01

MIRVAL is a Monte Carlo program which simulates the heliostats and a portion of the receiver for solar energy central receiver power plants. Models for three receiver types and four kinds of heliostats are included in the code. The three receiver types modeled are an external cylinder, a cylindrical cavity with a downward-facing aperture, and a north-facing cavity. Three heliostats which track in elevation and azimuth are modeled, one of which is enclosed in a plastic dome. The fourth type consists of a rack of louvered reflective panels with the rack rotatable about a fixed horizontal axis. Phenomena whose effects are simulated are shadowing, blocking, mirror tracking, random errors in tracking and in the conformation of the reflective surface, optical figure of the reflective surface, insolation, angular distribution of incoming sun rays to account for limb darkening and scattering, attenuation of light between the mirrors and the receiver, reflectivity of the mirror surface, and mirror aiming strategy.
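
    A toy Monte Carlo in the spirit of MIRVAL's error models, though not its actual implementation, is sketched below: rays leave a heliostat with Gaussian tracking and surface errors, and the intercept fraction at a circular receiver aperture is estimated. All dimensions are hypothetical.

```python
# Toy Monte Carlo: estimate the fraction of reflected rays that intercept
# a circular receiver aperture, given Gaussian tracking/surface errors.
import numpy as np

rng = np.random.default_rng(2)
N = 200_000
slant = 150.0          # heliostat-to-receiver distance (m), hypothetical
aperture_r = 4.0       # receiver aperture radius (m), hypothetical
sigma_err = 2.5e-3     # combined tracking + surface slope error (rad, 1-sigma)

# Angular errors in two transverse directions become displacements at the receiver.
dx = slant * rng.normal(0.0, sigma_err, N)
dy = slant * rng.normal(0.0, sigma_err, N)
hit = dx**2 + dy**2 <= aperture_r**2
print(f"intercept fraction ~ {hit.mean():.3f}")
```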

  13. Measuring the User Experience

    Directory of Open Access Journals (Sweden)

    Harry B. Santoso

    2016-01-01

The aim of the current study is to develop an adapted version of the User Experience Questionnaire (UEQ) and evaluate a learning management system. Although there is a growing interest in user experience, there are still limited resources (i.e., measurement tools or questionnaires) available to measure the user experience of products, especially learning management systems. Two hundred and thirteen computer science students participated and completed the adapted version of the UEQ. In the study, the researchers used a learning management system named Student Centered e-Learning Environment (SCELE). Several types of learning materials are posted in SCELE such as audio files, simulations, PowerPoint slides, multimedia contents, and webpage links. Most of the lecturers use discussion forums in their courses to encourage students to participate in an active learning setting. Staff and lecturers sometimes post academic-related announcements on the SCELE homepage. This study will benefit UX practitioners, HCI educators, administrators of learning resource programs and centers, and learning management system developers. Findings of the present study may also be valuable for universities and high schools which are using computer-based learning environments.
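
    UEQ-style scoring can be sketched as follows; the item-to-scale mapping below is illustrative, not the official UEQ key or the authors' adaptation, and the responses are simulated.

```python
# Sketch of UEQ-style scoring: 7-point items are recoded to -3..+3 and
# averaged per scale. The item groups below are illustrative only.
import numpy as np

rng = np.random.default_rng(3)
responses = rng.integers(1, 8, size=(213, 26))   # 213 students x 26 items, 1..7
scores = responses - 4                           # recode to the -3..+3 range

scales = {                                       # hypothetical item-to-scale key
    "Attractiveness": [0, 1, 2, 3, 4, 5],
    "Perspicuity":    [6, 7, 8, 9],
    "Efficiency":     [10, 11, 12, 13],
    "Dependability":  [14, 15, 16, 17],
    "Stimulation":    [18, 19, 20, 21],
    "Novelty":        [22, 23, 24, 25],
}
for name, items in scales.items():
    print(f"{name:15s} mean = {scores[:, items].mean():+.2f}")
```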

  14. Plasma physics on the TI-85 calculator or Down with supercomputers

    International Nuclear Information System (INIS)

    Sedlacek, Z.

    1998-10-01

In the Fourier transformed velocity space the Vlasov plasma oscillations may be interpreted as a wave propagation process corresponding to an imperfectly trapped (leaking) wave. The Landau damped solutions of the Vlasov-Poisson equation then become genuine eigenmodes corresponding to complex eigenvalues. To illustrate this new interpretation we solve numerically the Fourier transformed Vlasov-Poisson equation, essentially a perturbed advective equation, on the TI-85 pocket graphics calculator. A program is described, based on the method of lines: A finite-difference scheme is utilized to discretize the transformed equation and the resulting set of ordinary differential equations is then solved in time. The user can choose from several possible finite-difference differentiation schemes differing in the total number of points and the number of downwind points. The resulting evolution of the electric field showing the Landau damped plasma oscillations is displayed on the screen of the calculator. In addition, calculation of the eigenvalues of the Fourier transformed Vlasov-Poisson operator is possible. The user can also experiment with the numerical solution of the advective equation which describes free streaming. (author)
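
    The method-of-lines approach mentioned above can be sketched for the free-streaming (advective) limit, u_t + c u_x = 0, with a first-order upwind difference in space and an off-the-shelf ODE integrator in time. This is a schematic Python analogue, not the TI-85 program.

```python
# Method-of-lines sketch for the free-streaming equation u_t + c*u_x = 0:
# first-order upwind difference in x (for c > 0), scipy ODE solver in time.
import numpy as np
from scipy.integrate import solve_ivp

nx, c = 200, 1.0
x = np.linspace(0.0, 2.0 * np.pi, nx, endpoint=False)
dx = x[1] - x[0]
u0 = np.exp(-10.0 * (x - np.pi) ** 2)           # initial pulse

def rhs(t, u):
    # upwind for c > 0: du_i/dt = -c * (u_i - u_{i-1}) / dx, periodic in x
    return -c * (u - np.roll(u, 1)) / dx

sol = solve_ivp(rhs, (0.0, 2.0), u0, method="RK45", max_step=0.5 * dx / c)
print("mass before/after:", u0.sum() * dx, sol.y[:, -1].sum() * dx)
```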

  15. Photovoltaics information user study

    Energy Technology Data Exchange (ETDEWEB)

    Belew, W.W.; Wood, B.L.; Marie, T.L.; Reinhardt, C.L.

    1980-10-01

The results of a series of telephone interviews with groups of users of information on photovoltaics (PV) are described. These results, part of a larger study on many different solar technologies, identify the types of information each group needed and the best ways to get information to each group. The report is 1 of 10 discussing study results. The overall study provides baseline data about information needs in the solar community. It covers these technological areas: photovoltaics, passive solar heating and cooling, active solar heating and cooling, biomass energy, solar thermal electric power, solar industrial and agricultural process heat, wind energy, ocean energy, and advanced energy storage. An earlier study identified the information user groups in the solar community and the priority (to accelerate solar energy commercialization) of getting information to each group. In the current study only high-priority groups were examined. Results from respondents in seven PV groups are analyzed in this report: DOE-Funded Researchers, Non-DOE-Funded Researchers, Researchers Working for Manufacturers, Representatives of Other Manufacturers, Representatives of Utilities, Electric Power Engineers, and Educators.

  16. Accelerating Science with the NERSC Burst Buffer Early User Program

    Energy Technology Data Exchange (ETDEWEB)

    Bhimji, Wahid [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Bard, Debbie [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Romanus, Melissa [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Rutgers Univ., New Brunswick, NJ (United States); Paul, David [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Ovsyannikov, Andrey [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Friesen, Brian [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Bryson, Matt [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Correa, Joaquin [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Lockwood, Glenn K. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Tsulaia, Vakho [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Byna, Suren [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Farrell, Steve [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Gursoy, Doga [Argonne National Lab. (ANL), Argonne, IL (United States). Advanced Photon Source (APS); Daley, Chris [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Beckner, Vince [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Van Straalen, Brian [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Trebotich, David [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Tull, Craig [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Weber, Gunther H. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Wright, Nicholas J. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Antypas, Katie [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Prabhat, none [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2016-01-01

    NVRAM-based Burst Buffers are an important part of the emerging HPC storage landscape. The National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory recently installed one of the first Burst Buffer systems as part of its new Cori supercomputer, collaborating with Cray on the development of the DataWarp software. NERSC has a diverse user base comprised of over 6500 users in 700 different projects spanning a wide variety of scientific computing applications. The use-cases of the Burst Buffer at NERSC are therefore also considerable and diverse. We describe here performance measurements and lessons learned from the Burst Buffer Early User Program at NERSC, which selected a number of research projects to gain early access to the Burst Buffer and exercise its capability to enable new scientific advancements. To the best of our knowledge this is the first time a Burst Buffer has been stressed at scale by diverse, real user workloads and therefore these lessons will be of considerable benefit to shaping the developing use of Burst Buffers at HPC centers.

  17. The Erasmus Computing Grid – Building a Super-Computer for Free

    NARCIS (Netherlands)

    T.A. Knoch (Tobias); A. Abuseiris (Anis); R.M. de Graaf (Rob); M. Lesnussa (Michael); F.G. Grosveld (Frank)

    2011-01-01

Today advances in scientific research as well as clinical diagnostics and treatment are inevitably connected with information solutions concerning computation power and information storage. The needs for information technology are enormous and are in many cases the limiting factor for

  18. Lisbon: Supercomputer for Portugal financed from 'CERN Fund'

    International Nuclear Information System (INIS)

    Anon.

    1990-01-01

    A powerful new computer is now in use at the Portuguese National Foundation for Scientific Computation (FCCN Lisbon), set up in 1987 to help fund university computing, to anticipate future requirements and to provide a fast computer at the National Civil Engineering Laboratory (LNEC) as a central node for remote access by major research institutes

  19. Mentoring the Next Generation of Science Gateway Developers and Users

    Science.gov (United States)

    Hayden, L. B.; Jackson-Ward, F.

    2016-12-01

The Science Gateway Institute (SGW-I) for the Democratization and Acceleration of Science was a SI2-SSE Collaborative Research conceptualization award funded by NSF in 2012. From 2012 through 2015, we engaged interested members of the science and engineering community in a planning process for a Science Gateway Community Institute (SGCI). Science Gateways provide Web interfaces to some of the most sophisticated cyberinfrastructure resources. They interact with remotely executing science applications on supercomputers, they connect to remote scientific data collections, instruments and sensor streams, and they support large collaborations. Gateways allow scientists to concentrate on the most challenging science problems while underlying components such as computing architectures and interfaces to data collections change. The goal of our institute was to provide coordinating activities across the National Science Foundation, eventually providing services more broadly to projects funded by other agencies. SGW-I has succeeded in identifying two underrepresented communities of future gateway designers and users. The Association of Computer and Information Science/Engineering Departments at Minority Institutions (ADMI) was identified as a source of future gateway designers. The National Organization for the Professional Advancement of Black Chemists and Chemical Engineers (NOBCChE) was identified as a community of future science gateway users. SGW-I efforts to engage NOBCChE and ADMI faculty and students are now woven into the workforce development component of SGCI. SGCI (ScienceGateways.org) is a collaboration of six universities, led by the San Diego Supercomputer Center. The workforce development component is led by Elizabeth City State University (ECSU). ECSU's efforts focus on producing a model of engagement, integrating research into education, and mentoring students while aggressively addressing diversity. This paper documents the outcome of the SGW

  20. User Requirements for Wireless

    DEFF Research Database (Denmark)

    in the elicitation process. Cases and user requirement elements discussed in the book include: User requirements elicitation processes for children, construction workers, and farmers User requirements for personalized services of a broadcast company Variations in user involvement Practical elements of user...

  1. High Performance Simulation of Large-Scale Red Sea Ocean Bottom Seismic Data on the Supercomputer Shaheen II

    KAUST Repository

    Tonellot, Thierry; Etienne, Vincent; Gashawbeza, Ewenet; Curiel, Emesto Sandoval; Khan, Azizur; Feki, Saber; Kortas, Samuel

    2017-01-01

A combination of both shallow and deepwater, plus islands and coral reefs, are some of the main features contributing to the complexity of subsalt seismic exploration in the Red Sea transition zone. These features often result in degrading effects on seismic images. State-of-the-art ocean bottom acquisition technologies are therefore required to record seismic data with optimal fold and offset, as well as advanced processing and imaging techniques. Numerical simulations of such complex seismic data can help improve acquisition design and also help in customizing, validating and benchmarking the processing and imaging workflows that will be applied on the field data. Subsequently, realistic simulation of wave propagation is a computationally intensive process requiring a realistic model and an efficient 3D wave equation solver. Large-scale computing resources are also required to meet turnaround time compatible with a production time frame. In this work, we present the numerical simulation of an ocean bottom seismic survey to be acquired in the Red Sea transition zone starting in summer 2016. The survey's acquisition geometry comprises nearly 300,000 unique shot locations and 21,000 unique receiver locations, covering about 760 km². Using well log measurements and legacy 2D seismic lines in this area, a 3D P-wave velocity model was built, with a maximum depth of 7 km. The model was sampled at 10 m in each direction, resulting in more than 5 billion cells. Wave propagation in this model was performed using a 3D finite difference solver in the time domain based on a staggered grid velocity-pressure formulation of acoustodynamics. To ensure that the resulting data could be generated sufficiently fast, the King Abdullah University of Science and Technology (KAUST) supercomputer Shaheen II Cray XC40 was used. A total of 21,000 three-component (pressure and vertical and horizontal velocity) common receiver gathers with a 50 Hz maximum frequency were computed in less than
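
    The staggered-grid velocity-pressure scheme named above can be illustrated in one dimension; the production solver is 3-D and massively parallel, so this is only a schematic sketch with made-up grid parameters.

```python
# 1-D staggered-grid velocity-pressure sketch of acoustodynamics:
# p_t = -K * v_x on integer points, v_t = -(1/rho) * p_x on half points.
import numpy as np

nx, nt = 400, 800
dx, dt = 10.0, 1.0e-3
rho, vp = 1000.0, 1500.0                  # density (kg/m3), P velocity (m/s)
K = rho * vp**2                           # bulk modulus

p = np.zeros(nx)                          # pressure at integer grid points
v = np.zeros(nx - 1)                      # velocity at half grid points
p[nx // 2] = 1.0                          # impulsive source

for _ in range(nt):
    v += -(dt / (rho * dx)) * (p[1:] - p[:-1])     # update v from grad p
    p[1:-1] += -(dt * K / dx) * (v[1:] - v[:-1])   # update p from div v
print("CFL number:", vp * dt / dx)        # must stay below 1 for stability
```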

  2. High Performance Simulation of Large-Scale Red Sea Ocean Bottom Seismic Data on the Supercomputer Shaheen II

    KAUST Repository

    Tonellot, Thierry

    2017-02-27

A combination of both shallow and deepwater, plus islands and coral reefs, are some of the main features contributing to the complexity of subsalt seismic exploration in the Red Sea transition zone. These features often result in degrading effects on seismic images. State-of-the-art ocean bottom acquisition technologies are therefore required to record seismic data with optimal fold and offset, as well as advanced processing and imaging techniques. Numerical simulations of such complex seismic data can help improve acquisition design and also help in customizing, validating and benchmarking the processing and imaging workflows that will be applied on the field data. Subsequently, realistic simulation of wave propagation is a computationally intensive process requiring a realistic model and an efficient 3D wave equation solver. Large-scale computing resources are also required to meet turnaround time compatible with a production time frame. In this work, we present the numerical simulation of an ocean bottom seismic survey to be acquired in the Red Sea transition zone starting in summer 2016. The survey's acquisition geometry comprises nearly 300,000 unique shot locations and 21,000 unique receiver locations, covering about 760 km². Using well log measurements and legacy 2D seismic lines in this area, a 3D P-wave velocity model was built, with a maximum depth of 7 km. The model was sampled at 10 m in each direction, resulting in more than 5 billion cells. Wave propagation in this model was performed using a 3D finite difference solver in the time domain based on a staggered grid velocity-pressure formulation of acoustodynamics. To ensure that the resulting data could be generated sufficiently fast, the King Abdullah University of Science and Technology (KAUST) supercomputer Shaheen II Cray XC40 was used. A total of 21,000 three-component (pressure and vertical and horizontal velocity) common receiver gathers with a 50 Hz maximum frequency were computed in less

  3. Wind Power - A Power Source Enabled by Power Electronics

    DEFF Research Database (Denmark)

    Blaabjerg, Frede; Chen, Zhe

    2004-01-01

The global electrical energy consumption is still rising and there is a steady demand to increase the power capacity. The production, distribution and the use of the energy should be as technologically efficient as possible, and incentives to save energy at the end-user should be set up. The deregulation of energy has lowered the investment in bigger power plants, which means the need for new electrical power sources may be very high in the near future. Two major technologies will play important roles to solve the future problems. One is to change the electrical power production sources from the conventional, fossil (and short term) based energy sources to renewable energy sources. The other is to use highly efficient power electronics in power systems, power production and end-user applications. This paper discusses the most emerging renewable energy source, wind energy, which by means of power

  4. Improving the Drupal User Experience

    Directory of Open Access Journals (Sweden)

    Rachel Vacek

    2010-12-01

Drupal is a powerful, but complex, Web Content Management System, being adopted by many libraries. Installing Drupal typically involves adding additional modules for flexibility and increased functionality. Although installing additional modules does increase functionality, it inevitably complicates usability. At the University of Houston Libraries, the Web Services department researched what modules work well together to accomplish a simpler interface while simultaneously providing the flexibility and advanced tools needed to create a successful user experience within Drupal. This article explains why particular modules were chosen or developed, how the design enhanced the user experience, how the CMS architecture was created, and how other library systems were integrated into Drupal.

  5. Performance analysis of an opportunistic multi-user cognitive network with multiple primary users

    KAUST Repository

    Khan, Fahd Ahmed

    2014-03-01

Consider a multi-user underlay cognitive network where multiple cognitive users concurrently share the spectrum with a primary network with multiple users. The channels within the secondary network are assumed to experience independent but not identically distributed Nakagami-m fading. The interference channels between the secondary users (SUs) and the primary users are assumed to experience Rayleigh fading. A power allocation based on the instantaneous channel state information is derived when a peak interference power constraint is imposed on the secondary network in addition to the limited peak transmit power of each SU. The uplink scenario is considered, where a single SU is selected for transmission. This opportunistic selection depends on the transmission channel power gain and the interference channel power gain as well as the power allocation policy adopted at the users. Exact closed-form expressions for the moment-generating function, outage performance, symbol error rate performance, and the ergodic capacity are derived. Numerical results corroborate the derived analytical results. The performance is also studied in the asymptotic regimes, and the generalized diversity gain of this scheduling scheme is derived. It is shown that when the interference channel is deeply faded and the peak transmit power constraint is relaxed, the scheduling scheme achieves full diversity and that increasing the number of primary users does not impact the diversity order. © 2014 John Wiley & Sons, Ltd.
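
    The power allocation and opportunistic selection described above can be sketched directly: each SU transmits at min(P_peak, Q_peak/g), where g is its largest interference gain to any primary user, and the SU with the best resulting SNR is scheduled. The fading distributions follow the abstract; all other numbers are illustrative.

```python
# Sketch of the power allocation and opportunistic SU selection described above.
import numpy as np

rng = np.random.default_rng(4)
n_su, n_pu, P_peak, Q_peak = 8, 3, 1.0, 0.5       # illustrative parameters

m = 2.0                                           # Nakagami-m shape for SU links
h = rng.gamma(m, 1.0 / m, n_su)                   # |h|^2: Nakagami-m power gains
g = rng.exponential(1.0, (n_su, n_pu))            # Rayleigh interference power gains

# Respect the peak interference constraint at the worst-hit primary user.
P = np.minimum(P_peak, Q_peak / g.max(axis=1))
snr = h * P                                       # received SNR (unit noise power)
best = int(np.argmax(snr))
print(f"scheduled SU {best}, power {P[best]:.3f}, SNR {snr[best]:.3f}")
```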

  6. SOFTWARE FOR SUPERCOMPUTER SKIF “ProLit-lC” and “ProNRS-lC” FOR FOUNDRY AND METALLURGICAL PRODUCTIONS

    Directory of Open Access Journals (Sweden)

    A. N. Chichko

    2008-01-01

Data from the modeling, on the SKIF supercomputer system, of the technological process of mold filling by means of the computer system 'ProLIT-lc', as well as data from the modeling of the steel pouring process by means of 'ProNRS-lc', are presented. The influence of the number of processors of the multi-core SKIF computer system on the speed-up and time of modeling of technological processes connected with the production of castings and ingots is shown.

  7. Performance characteristics of hybrid MPI/OpenMP implementations of NAS parallel benchmarks SP and BT on large-scale multicore supercomputers

    KAUST Repository

    Wu, Xingfu; Taylor, Valerie

    2011-01-01

The NAS Parallel Benchmarks (NPB) are well-known applications with fixed algorithms for evaluating parallel systems and tools. Multicore supercomputers provide a natural programming paradigm for hybrid programs, whereby OpenMP can be used for data sharing within the multicore nodes and MPI for communication between nodes. In this paper, we use the SP and BT benchmarks of MPI NPB 3.3 as a basis for a comparative approach to implement hybrid MPI/OpenMP versions of SP and BT. In particular, we compare the performance of the hybrid SP and BT with their MPI counterparts on large-scale multicore supercomputers. Our performance results indicate that the hybrid SP outperforms the MPI SP by up to 20.76%, and the hybrid BT outperforms the MPI BT by up to 8.58%, on up to 10,000 cores on BlueGene/P at Argonne National Laboratory and Jaguar (Cray XT4/5) at Oak Ridge National Laboratory. We also use performance tools and MPI trace libraries available on these supercomputers to further investigate the performance characteristics of the hybrid SP and BT.

  8. Performance characteristics of hybrid MPI/OpenMP implementations of NAS parallel benchmarks SP and BT on large-scale multicore supercomputers

    KAUST Repository

    Wu, Xingfu

    2011-03-29

The NAS Parallel Benchmarks (NPB) are well-known applications with fixed algorithms for evaluating parallel systems and tools. Multicore supercomputers provide a natural programming paradigm for hybrid programs, whereby OpenMP can be used for data sharing within the multicore nodes and MPI for communication between nodes. In this paper, we use the SP and BT benchmarks of MPI NPB 3.3 as a basis for a comparative approach to implement hybrid MPI/OpenMP versions of SP and BT. In particular, we compare the performance of the hybrid SP and BT with their MPI counterparts on large-scale multicore supercomputers. Our performance results indicate that the hybrid SP outperforms the MPI SP by up to 20.76%, and the hybrid BT outperforms the MPI BT by up to 8.58%, on up to 10,000 cores on BlueGene/P at Argonne National Laboratory and Jaguar (Cray XT4/5) at Oak Ridge National Laboratory. We also use performance tools and MPI trace libraries available on these supercomputers to further investigate the performance characteristics of the hybrid SP and BT.

  9. An Evaluation of Molecular Dynamics Performance on the Hybrid Cray XK6 Supercomputer

    International Nuclear Information System (INIS)

    Brown, W. Michael; Nguyen, Trung D.; Fuentes-Cabrera, Miguel A.; Fowlkes, Jason Davidson; Rack, Philip D.; Berger, Mark

    2012-01-01

    For many years, the drive towards computational physics studies that match the size and time-scales of experiment has been fueled by increases in processor and interconnect performance that could be exploited with relatively little modification to existing codes. Engineering and electrical power constraints have disrupted this trend, requiring more drastic changes to both hardware and software solutions. Here, we present details of the Cray XK6 architecture that achieves increased performance with the use of GPU accelerators. We review software development efforts in the LAMMPS molecular dynamics package that have been implemented in order to utilize hybrid high performance computers. We present benchmark results for solid-state, biological, and mesoscopic systems and discuss some challenges for utilizing hybrid systems. We present some early work in improving application performance on the XK6 and performance results for the simulation of liquid copper nanostructures with the embedded atom method.

  10. Connecting the dots, or nuclear data in the age of supercomputing

    International Nuclear Information System (INIS)

    Bauge, E.; Dupuis, M.; Hilaire, S.; Peru, S.; Koning, A.J.; Rochman, D.; Goriely, S.

    2014-01-01

Recent increases in computing power have allowed for much progress to be made in the field of nuclear data. The advances listed below are each significant, but together bring the potential to completely change our perspective on the nuclear data evaluation process. The use of modern nuclear modeling codes like TALYS and the Monte Carlo sampling of its model parameter space, together with a code system developed at NRG Petten, which automates the production of ENDF-6 formatted files, their processing, and their use in nuclear reactor calculations, constitutes the Total Monte Carlo approach, which directly links physical model parameters with calculated integral observables like k_eff. Together with the Backward-Forward Monte Carlo method for weighting samples according to their statistical likelihood, the Total Monte Carlo can be applied to complete isotopic chains in a consistent way, to simultaneously evaluate nuclear data and the associated uncertainties in the continuum region. Another improvement is found in the use of microscopic models for nuclear reaction calculations. For example, QRPA excited states calculated with the Gogny interaction have been used to resolve the long-standing question of the origin of the ad hoc 'pseudo-states' that are introduced in evaluated nuclear data files to account for the Livermore pulsed sphere experiments. A third advance consists of the recent optimization of the Gogny D1M effective nuclear interaction, including constraints from experimental nuclear masses at the 'beyond the mean field' level. All these advances are only made possible by the availability of vast resources of computing power, and even greater resources will allow connecting them, going continuously from the parameters of the nuclear interaction to reactor calculations. However, such a scheme will surely only be usable for applications if a few fine-tuning 'knobs' are introduced in it. The values of these adjusted parameters will have to be calibrated versus
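
    A schematic of the Total Monte Carlo with Backward-Forward Monte Carlo weighting might look as follows; the 'model' here is a stand-in for a full TALYS-plus-transport calculation, and the benchmark values are hypothetical.

```python
# Schematic Total Monte Carlo with BFMC-style weighting: sample model
# parameters, compute an observable per sample, weight each sample by its
# likelihood against experimental data, report the weighted spread.
import numpy as np

rng = np.random.default_rng(5)
n = 1000
theta = rng.normal(1.0, 0.1, n)                   # sampled model parameter

def model_keff(t):
    """Stand-in for a full nuclear-model + transport calculation."""
    return 0.95 + 0.05 * t

keff = model_keff(theta)
k_exp, sigma_exp = 1.000, 0.002                   # hypothetical benchmark
chi2 = ((keff - k_exp) / sigma_exp) ** 2
w = np.exp(-0.5 * chi2)                           # likelihood weights
w /= w.sum()

mean = np.sum(w * keff)
std = np.sqrt(np.sum(w * (keff - mean) ** 2))
print(f"weighted k_eff = {mean:.4f} +/- {std:.4f}")
```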

  11. Programmable lithography engine (ProLE) grid-type supercomputer and its applications

    Science.gov (United States)

    Petersen, John S.; Maslow, Mark J.; Gerold, David J.; Greenway, Robert T.

    2003-06-01

There are many variables that can affect lithographic dependent device yield. Because of this, it is not enough to make optical proximity corrections (OPC) based on the mask type, wavelength, lens, illumination type and coherence. Resist chemistry and physics along with substrate, exposure, and all post-exposure processing must be considered too. Only a holistic approach to finding imaging solutions will accelerate yield and maximize performance. Since experiments are too costly in both time and money, accomplishing this takes massive amounts of accurate simulation capability. Our solution is to create a workbench that has a set of advanced user applications that utilize best-in-class simulator engines for solving litho-related DFM problems using distributive computing. Our product, ProLE (Programmable Lithography Engine), is an integrated system that combines Petersen Advanced Lithography Inc.'s (PAL's) proprietary applications and cluster management software wrapped around commercial software engines, along with optional commercial hardware and software. It uses the most rigorous lithography simulation engines to solve deep sub-wavelength imaging problems accurately and at speeds that are several orders of magnitude faster than current methods. Specifically, ProLE uses full vector thin-mask aerial image models or, when needed, full across-source 3D electromagnetic field simulation to make accurate aerial image predictions along with calibrated resist models. The ProLE workstation from Petersen Advanced Lithography, Inc., is the first commercial product that makes it possible to do these intensive calculations in a fraction of the time previously required, thus significantly reducing time to market for advanced technology devices. In this work, ProLE is introduced through model comparison to show why vector imaging and rigorous resist models work better than less rigorous models; then some applications that use our distributive computing solution are shown

  12. Wilmar Planning Tool, user guide

    Energy Technology Data Exchange (ETDEWEB)

    Larsen, Helge V.

    2006-01-15

    This is a short user guide to the Wilmar Planning Tool developed in the project Wind Power Integration in Liberalised Electricity Markets (WILMAR) supported by EU (Contract No. ENK5-CT-2002-00663). A User Shell implemented in an Excel workbook controls the Wilmar Planning Tool. All data are contained in Access databases that communicate with various sub-models through text files that are exported from or imported to the databases. In the User Shell various scenario variables and control parameters are set, and export of model data from the input database, activation of the models, as well as import of model results to the output database are triggered from the shell. (au)

  13. Wilmar Planning Tool, user guide

    International Nuclear Information System (INIS)

    Larsen, Helge V.

    2006-01-01

    This is a short user guide to the Wilmar Planning Tool developed in the project Wind Power Integration in Liberalised Electricity Markets (WILMAR) supported by EU (Contract No. ENK5-CT-2002-00663). A User Shell implemented in an Excel workbook controls the Wilmar Planning Tool. All data are contained in Access databases that communicate with various sub-models through text files that are exported from or imported to the databases. In the User Shell various scenario variables and control parameters are set, and export of model data from the input database, activation of the models, as well as import of model results to the output database are triggered from the shell. (au)

  14. Wind power - a power source now enabled by power electronics

    DEFF Research Database (Denmark)

    Blaabjerg, Frede; Iov, Florin

    2007-01-01

The global electrical energy consumption is still rising and there is a steady demand to increase the power capacity. It is expected that it has to be doubled within 20 years. The production, distribution and use of the energy should be as technologically efficient as possible, and incentives to save energy at the end-user should be set up. Deregulation of energy has lowered the investment in larger power plants, which means the need for new electrical power sources may be increased in the near future. Two major technologies will play important roles to solve the future problems. One is to change the electrical power production sources from the conventional, fossil (and short term) based energy sources to renewable energy resources. Another is to use highly efficient power electronics in power generation, power transmission/distribution and end-user applications. This paper discusses the most emerging

  15. User Innovation Management

    DEFF Research Database (Denmark)

    Kanstrup, Anne Marie; Bertelsen, Pernille

User Innovation Management (UIM) is a method for co-operation with users in innovation projects. The UIM method emphasizes the practice of a participatory attitude.

  16. User Behavior Analytics

    Energy Technology Data Exchange (ETDEWEB)

    Turcotte, Melissa [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Moore, Juston Shane [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-02-28

User Behaviour Analytics is the tracking, collecting and assessing of user data and activities. The goal is to detect misuse of user credentials by developing models for the normal behaviour of user credentials within a computer network and detecting outliers with respect to their baseline.
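
    A minimal sketch of such a baseline-and-outlier approach, using a simple per-credential event-count model (not LANL's actual method), could look like this:

```python
# Build a per-credential baseline of daily event counts and flag days
# that deviate strongly from it. Counts below are simulated.
import numpy as np

rng = np.random.default_rng(6)
history = rng.poisson(20, size=60)        # 60 days of logon counts for one user
today = 55                                # today's count for the same credential

mu, sd = history.mean(), history.std(ddof=1)
z = (today - mu) / sd                     # standardized deviation from baseline
if z > 3.0:                               # simple z-score threshold
    print(f"alert: z = {z:.1f}, count {today} vs baseline {mu:.1f}")
```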

  17. Lazy User Behaviour

    OpenAIRE

    Collan, Mikael

    2007-01-01

In this position paper we suggest that a user will most often choose the solution (device) that will fulfill her (information) needs with the least effort. We call this “lazy user behavior”. We suggest that the principal components responsible for solution selection are the user need and the user state. User need is the user’s detailed (information) need (urgency, type, depth, etc.) and user state is the situation in which the user is at the moment of the need (location, time, etc.); the use...

  18. Installation of the CDC 7600 supercomputer system in the computer centre in 1972

    CERN Multimedia

    Nettz, William

    1972-01-01

    The CDC 7600 was installed in 1972 in the newly built computer centre. It was said to be the largest and most powerful computer system in Europe at that time and remained the fastest machine at CERN for 9 years. It was replaced after 12 years. Dr. Julian Blake (CERN), Dr. Tor Bloch (CERN), Erwin Gasser (Control Data Corporation), Jean-Marie LaPorte (Control Data Corporation), Peter McWilliam (Control Data Corporation), Hans Oeshlein (Control Data Corporation), and Peter Warn (Control Data Corporation) were heavily involved in this project and may appear on the pictures. William Nettz (who took the pictures) was in charge of the installation. Excerpt from CERN annual report 1972: 'Data handling and evaluation is becoming an increasingly important part of physics experiments. In order to meet these requirements a new central computer system, CDC 7600/6400, has been acquired and it was brought into more or less regular service during the year. Some initial hardware problems have disappeared but work has still to...

  19. WAM-E user's manual

    International Nuclear Information System (INIS)

    Rayes, L.G.; Riley, J.E.

    1986-07-01

    The WAM-E series of mainframe computer codes have been developed to efficiently analyze the large binary models (e.g., fault trees) used to represent the logic relationships within and between the systems of a nuclear power plant or other large, multisystem entity. These codes have found wide application in reliability and safety studies of nuclear power plant systems. There are now nine codes in the WAM-E series, with six (WAMBAM/WAMTAP, WAMCUT, WAMCUT-II, WAMFM, WAMMRG, and SPASM) classified as Type A Production codes and the other three (WAMFTP, WAMTOP, and WAMCONV) classified as Research codes. This document serves as a combined User's Guide, Programmer's Manual, and Theory Reference for the codes, with emphasis on the Production codes. To that end, the manual is divided into four parts: Part I, Introduction; Part II, Theory and Numerics; Part III, WAM-E User's Guide; and Part IV, WAMMRG Programmer's Manual

  20. COSIS User's Manual

    International Nuclear Information System (INIS)

    Cho, J. Y.; Lee, K. B.; Koo, B. S.; Lee, W. K.; Lee, C. C.; Zee, S. Q.

    2006-02-01

COSIS (COre State Indication System), which is implemented in the SMART research reactor, serves to supply the core state parameters or graphs for the operator to recognize the core state effectively. The following are the main functions of COSIS: validity check for the process signals and determination of the COSIS inputs (SIGVAL), coolant flow rate calculation (FLOW), core thermal power calculation (COREPOW), in-core 3-dimensional power distribution calculation and peaking parameters generation (POWER3D), and azimuthal tilt calculation (AZITILT). This report describes the structures of the I/O files that are essential for the users to run COSIS. COSIS handles the following 4 input files: DATABASE, the base input file; COSIS.INP, the signal input file; CCS.DAT, a file required for the in-core detector signal processing and the 3-D power distribution calculation; and TPFH2O, a steam table for the water properties. The DATABASE file contains the base information for a nuclear power plant and is read at the first COSIS calculation. The COSIS.INP file contains the process input and detector signals, and is read by COSIS every second. The CCS.DAT file, which is produced by the COSISMAS code, contains the information for the in-core detector signal processing and the 3-D power distribution calculation. The TPFH2O file is a steam table written in binary format. COSIS produces the following 4 output files: DATABASE.OUT, the output file for the DATABASE input file; COSIS.OUT, the normal output file produced after the COSIS calculation; COSIS.SUM, a file for the operator to recognize the core state effectively; and MASSIG, a file to run the COSISMAS code. The DATABASE.OUT file is produced right after finishing DATABASE processing. The COSIS.OUT file is produced after finishing the input signal processing and the main COSIS calculation. The COSIS.SUM file is the summary file of the COSIS results for the operator to recognize the core state effectively. The MASSIG file is the COSISMAS input

  1. HTGR Cost Model Users' Manual

    International Nuclear Information System (INIS)

    Gandrik, A.M.

    2012-01-01

The High Temperature Gas-Cooled Reactor (HTGR) Cost Model was developed at the Idaho National Laboratory for the Next Generation Nuclear Plant Project. The HTGR Cost Model calculates an estimate of the capital costs, annual operating and maintenance costs, and decommissioning costs for a high-temperature gas-cooled reactor. The user can generate these costs for multiple reactor outlet temperatures; with and without power cycles, including either a Brayton or Rankine cycle; for the demonstration plant, first of a kind, or nth of a kind project phases; for a single or four-pack configuration; and for a reactor size of 350 or 600 MWt. This user's manual contains the mathematical models and operating instructions for the HTGR Cost Model. Instructions, screenshots, and examples are provided to guide the user through the HTGR Cost Model. This model was designed for users who are familiar with the HTGR design and Excel. Modification of the HTGR Cost Model should only be performed by users familiar with Excel and Visual Basic.

  2. User Design: A Case Study on Corporate Change

    Science.gov (United States)

    Pastore, Raymond S.; Carr-Chellman, Alison A.; Lohmann, Neal

    2011-01-01

    The purpose of this study was to examine the effects of implementing user design strategies within the corporate culture. Using a case study design approach, this article explores the change process within a "Fortune" 100 company in which users were given significant decision-making powers. The main focus is on the unique nature of user design in…

  3. Evaluation of User Support: Factors That Affect User Satisfaction With Helpdesks and Helplines

    NARCIS (Netherlands)

    van Velsen, Lex Stefan; Steehouder, M.F.; de Jong, Menno D.T.

    2007-01-01

    In addition to technical documentation, face-to-face helpdesks and telephonic helplines are a powerful means for supporting users of technical products and services. This study investigates the factors that determine user satisfaction with helpdesks and helplines. A survey, based on the SERVQUAL

  4. New generation of docking programs: Supercomputer validation of force fields and quantum-chemical methods for docking.

    Science.gov (United States)

    Sulimov, Alexey V; Kutov, Danil C; Katkova, Ekaterina V; Ilin, Ivan S; Sulimov, Vladimir B

    2017-11-01

Discovery of new inhibitors of the protein associated with a given disease is the initial and most important stage of the whole process of the rational development of new pharmaceutical substances. New inhibitors block the active site of the target protein and the disease is cured. Computer-aided molecular modeling can considerably increase the effectiveness of new inhibitor development. Reliable prediction of the inhibition of the target protein by a small molecule, the ligand, is determined by the accuracy of docking programs. Such programs position a ligand in the target protein and estimate the protein-ligand binding energy. The positioning accuracy of modern docking programs is satisfactory. However, the accuracy of binding energy calculations is too low to predict good inhibitors. For effective application of docking programs to the development of new inhibitors, the accuracy of binding energy calculations should be better than 1 kcal/mol. Reasons for the limited accuracy of modern docking programs are discussed. One of the most important aspects limiting this accuracy is the imperfection of protein-ligand energy calculations. Results of supercomputer validation of several force fields and quantum-chemical methods for docking are presented. The validation was performed by quasi-docking as follows. First, the low-energy minima spectra of 16 protein-ligand complexes were found by exhaustive minima search in the MMFF94 force field. Second, the energies of the lowest 8192 minima were recalculated with the CHARMM force field and the PM6-D3H4X and PM7 quantum-chemical methods for each complex. The analysis of the minima energies reveals that the docking positioning accuracies of the PM7 and PM6-D3H4X quantum-chemical methods and the CHARMM force field are close to one another and better than the positioning accuracy of the MMFF94 force field. Copyright © 2017 Elsevier Inc. All rights reserved.
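
    The quasi-docking success criterion described above can be sketched as follows; the energies and RMSDs are synthetic placeholders, and the noise levels merely mimic the reported ranking of the methods.

```python
# Sketch of the quasi-docking accuracy test: for each scoring method, check
# whether its lowest-energy minimum lies near the native pose (RMSD < 2 A).
import numpy as np

rng = np.random.default_rng(7)
n_minima = 8192
rmsd = rng.uniform(0.0, 12.0, n_minima)           # RMSD to native pose (A)
rmsd[0] = 0.5                                     # ensure a near-native minimum

def score(noise):
    """Synthetic energy: correlated with RMSD plus method-dependent noise."""
    return rmsd + rng.normal(0.0, noise, n_minima)

for name, noise in [("MMFF94", 4.0), ("CHARMM", 2.0), ("PM7", 2.0)]:
    best = int(np.argmin(score(noise)))           # lowest-energy minimum
    ok = "success" if rmsd[best] < 2.0 else "failure"
    print(f"{name:7s}: best minimum RMSD = {rmsd[best]:5.2f} A -> {ok}")
```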

  5. Graphical user interfaces and visually disabled users

    NARCIS (Netherlands)

    Poll, L.H.D.; Waterham, R.P.

    1995-01-01

From February 1992 until the end of 1993, the authors (Institute for Perception Research, IPO) participated in a European Technology Initiative for Disabled and Elderly (TIDE) project which addressed the problem arising for visually disabled computer users from the growing use of Graphical User

  6. Dependence and resistance in community mental health care-Negotiations of user participation between staff and users.

    Science.gov (United States)

    Femdal, I; Knutsen, I R

    2017-10-01

    WHAT IS KNOWN ON THE SUBJECT?: Implementation of user participation is described as a change from a paternalistic healthcare system to ideals of democratization where users' voices are heard in relational interplays with health professionals. The ideological shift involves a transition from welfare dependency and professional control towards more active service-user roles with associated rights and responsibilities. A collaborative relationship between users and professionals in mental health services is seen as important by both parties. Nevertheless, the health professionals find it challenging in practice to reorient their roles and to find productive ways to cooperate. WHAT THIS PAPER ADDS TO EXISTING KNOWLEDGE?: This study illuminates how user participation is negotiated and involves multiple and shifting subject positions in the collaboration between users and professionals in community mental health care. By taking different positions, the relationship between users and professionals develops through dynamic interaction. This study challenges understandings of equality and implicit "truths" in user participation by illuminating subtle forms of power and dilemmas that arise in user-professional negotiations. WHAT ARE THE IMPLICATIONS FOR PRACTICE?: Instead of denying the appearance of power, it is important to question the execution of power in the interplay between users and professionals. Focusing on the negotiation processes between users and professionals is important for increasing reflection on and improving understanding of the dynamic in collaboration and speech. By focusing on negotiations, power can be used in productive ways in user-professional relationships. Introduction Implementation of user participation is considered important in today's mental health care. Research shows, however, that user participation lacks clarity and provokes uncertainty regarding shifting roles. Aim To investigate negotiation of user participation in a microstudy of

  7. Developing powers

    Science.gov (United States)

    Showstack, Randy

Three new reports commissioned by the Pew Center on Global Climate Change examine the electric power sectors in Argentina, Brazil, and China, and the potential impact that energy use in each country has on climate change. In 1999, Argentina voluntarily agreed to lower its greenhouse gas emissions to 2-10% below projected emissions for 2012. The report looks at additional steps that could further reduce emissions, including adopting policies that favor renewable energy sources and nuclear power, and increasing energy efficiency by end-users.

  8. Accelerator facilities users' guide

    International Nuclear Information System (INIS)

    Walter, H.C.; Adrion, L.; Frosch, R.; Salzmann, M.

    1994-07-01

    In 1981 the ''Green Book'' of SIN was distributed, a User Handbook serving the needs of people already working at SIN as well as informing new users about our installations. An update of the Green Book is necessary because many beams have disappeared, been modified or added, and the installation has been upgraded in intensity and versatility quite considerably. The spectrum of users has shifted away from nuclear and particle physics; applications in medicine, solid state physics and materials science have gained in importance. This Users' Guide is intended to inform our users about the changes, and to interest potential new users in coming to PSI. (author) figs., tabs

  9. User Driven Image Stacking for ODI Data and Beyond via a Highly Customizable Web Interface

    Science.gov (United States)

    Hayashi, S.; Gopu, A.; Young, M. D.; Kotulla, R.

    2015-09-01

    While some astronomical archives have begun serving standard calibrated data products, the process of producing stacked images remains a challenge left to the end-user. The benefits of astronomical image stacking are well established, and dither patterns are recommended for almost all observing targets. Some archives automatically produce stacks of limited scientific usefulness without any fine-grained user or operator configurability. In this paper, we present PPA Stack, a web based stacking framework within the ODI - Portal, Pipeline, and Archive system. PPA Stack offers a web user interface with built-in heuristics (based on pointing, filter, and other metadata information) to pre-sort images into a set of likely stacks while still allowing the user or operator complete control over the images and parameters for each of the stacks they wish to produce. The user interface, designed using AngularJS, provides multiple views of the input dataset and parameters, all of which are synchronized in real time. A backend consisting of a Python application optimized for ODI data, wrapped around the SWarp software, handles the execution of stacking workflow jobs on Indiana University's Big Red II supercomputer, and the subsequent ingestion of the combined images back into the PPA archive. PPA Stack is designed to enable seamless integration of other stacking applications in the future, so users can select the most appropriate option for their science.
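
    The core image-combination step (performed in production by SWarp on Big Red II) reduces, at its simplest, to a per-pixel median over registered exposures, as this toy sketch shows:

```python
# Toy image stacking: median-combine registered, dithered exposures so that
# per-pixel outliers (e.g., cosmic-ray hits) are rejected. Frames are synthetic.
import numpy as np

rng = np.random.default_rng(8)
frames = rng.normal(100.0, 5.0, size=(5, 512, 512))   # 5 registered frames
frames[2, 100, 200] += 5000.0                         # simulated cosmic-ray hit

stacked = np.median(frames, axis=0)                   # per-pixel median stack
print("hit pixel:", frames[2, 100, 200], "->", stacked[100, 200])
```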

  10. User interface user's guide for HYPGEN

    Science.gov (United States)

    Chiu, Ing-Tsau

    1992-01-01

    The user interface (UI) of HYPGEN is developed using Panel Library to shorten the learning curve for new users and provide easier ways to run HYPGEN for casual users as well as for advanced users. Menus, buttons, sliders, and type-in fields are used extensively in UI to allow users to point and click with a mouse to choose various available options or to change values of parameters. On-line help is provided to give users information on using UI without consulting the manual. Default values are set for most parameters and boundary conditions are determined by UI to further reduce the effort needed to run HYPGEN; however, users are free to make any changes and save it in a file for later use. A hook to PLOT3D is built in to allow graphics manipulation. The viewpoint and min/max box for PLOT3D windows are computed by UI and saved in a PLOT3D journal file. For large grids which take a long time to generate on workstations, the grid generator (HYPGEN) can be run on faster computers such as Crays, while UI stays at the workstation.

  11. Desalination Economic Evaluation Program (DEEP). User's manual

    International Nuclear Information System (INIS)

    2000-01-01

DEEP (formerly named the ''Co-generation and Desalination Economic Evaluation'' Spreadsheet, CDEE) was originally developed by General Atomics under contract and has been used in the IAEA's feasibility studies. For further confidence in the software, it was validated in March 1998. Subsequently, a user-friendly version was issued under the name of DEEP at the end of 1998. DEEP output includes the levelised cost of water and power, a breakdown of cost components, energy consumption and net saleable power for each selected option. Specific power plants can be modelled by adjustment of input data including design power, power cycle parameters and costs.
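
    The levelised costs that DEEP reports are, in essence, discounted-cost over discounted-output ratios; the sketch below computes a levelised water cost under made-up plant figures and is not DEEP's actual algorithm.

```python
# Levelised cost sketch: discounted lifetime cost divided by discounted
# lifetime output. All plant figures below are illustrative.
import numpy as np

years = np.arange(1, 31)                     # 30-year plant life
r = 0.07                                     # discount rate
capital = 2.0e9                              # capital cost ($, at year 0)
om = 5.0e7 * np.ones_like(years, float)      # annual O&M cost ($)
water = 7.0e7 * np.ones_like(years, float)   # annual water output (m3)

disc = (1.0 + r) ** -years                   # discount factors per year
lcow = (capital + np.sum(om * disc)) / np.sum(water * disc)
print(f"levelised water cost = {lcow:.2f} $/m3")
```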

  12. IDRC Connect User Guide

    International Development Research Centre (IDRC) Digital Library (Canada)

    Kristina Kamichaitis

IDRC Extranet home page, which is an umbrella for a number of applications available to IDRC external users. ... IDRC Connect is not formatted for mobile users.

  13. Implicit User Interest Profile

    CERN Document Server

    Chan, K

    2002-01-01

    User interest profile presents items that the users are interested in. Typically those items can be listed or grouped. Listing is good but it does not possess interests at different abstraction levels - the higher-level interests are more general, while the lower-level ones are more specific. Furthermore, more general interests, in some sense, correspond to longer-term interests, while more specific interests correspond to shorter-term interests. This hierarchical user interest profile has obvious advantages: specifying user's specific interests and general interests and representing their relationships. Current user interest profile structures mostly do not use implicit method, nor use an appropriate clustering algorithm especially for conceptually hierarchical structures. This research studies building a hierarchical user interest profile (HUIP) and the hierarchical divisive algorithm (HDC). Several users visit hundreds of web pages and each page is recorded in each users profile. These web pages are used t...

  14. Power transformers quality assurance

    CERN Document Server

    Dasgupta, Indrajit

    2009-01-01

About the Book: With a view to attaining higher reliability in power system operation, quality assurance in the field of distribution and power transformers has claimed growing attention. Besides new developments in the material technology and manufacturing processes of transformers, regular diagnostic testing and maintenance of any engineering product may be ascertained by ensuring: right selection of materials and components and their quality checks; application of correct manufacturing processes and systems engineering; the user's awareness towards preventive maintenance. The

  15. TOUGH2 User's Guide Version 2

    International Nuclear Information System (INIS)

    Pruess, K.; Oldenburg, C.M.; Moridis, G.J.

    1999-01-01

    TOUGH2 is a numerical simulator for nonisothermal flows of multicomponent, multiphase fluids in one, two, and three-dimensional porous and fractured media. The chief applications for which TOUGH2 is designed are in geothermal reservoir engineering, nuclear waste disposal, environmental assessment and remediation, and unsaturated and saturated zone hydrology. TOUGH2 was first released to the public in 1991; the 1991 code was updated in 1994 when a set of preconditioned conjugate gradient solvers was added to allow a more efficient solution of large problems. The current Version 2.0 features several new fluid property modules and offers enhanced process modeling capabilities, such as coupled reservoir-wellbore flow, precipitation and dissolution effects, and multiphase diffusion. Numerous improvements in previously released modules have been made and new user features have been added, such as enhanced linear equation solvers, and writing of graphics files. The T2VOC module for three-phase flows of water, air and a volatile organic chemical (VOC), and the T2DM module for hydrodynamic dispersion in 2-D flow systems have been integrated into the overall structure of the code and are included in the Version 2.0 package. Data inputs are upwardly compatible with the previous version. Coding changes were generally kept to a minimum, and were only made as needed to achieve the additional functionalities desired. TOUGH2 is written in standard FORTRAN77 and can be run on any platform, such as workstations, PCs, Macintosh, mainframe and supercomputers, for which appropriate FORTRAN compilers are available. This report is a self-contained guide to application of TOUGH2 to subsurface flow problems. It gives a technical description of the TOUGH2 code, including a discussion of the physical processes modeled, and the mathematical and numerical methods used. Illustrative sample problems are presented along with detailed instructions for preparing input data

  16. NPAS Users Guide

    International Nuclear Information System (INIS)

    1984-01-01

    This NPAS Users Guide is primarily intended as a source of information about policies, procedures, and facilities appropriate for users in the program of Nuclear Physics at SLAC (NPAS). General policies and practices are described, the preparation of proposals is discussed, and the services for users are outlined. SLAC experimental facilities are described, and contacts are listed

  17. New Mexico High School Supercomputing Challenge, 1990--1995: Five years of making a difference to students, teachers, schools, and communities. Progress report

    Energy Technology Data Exchange (ETDEWEB)

    Foster, M.; Kratzer, D.

    1996-02-01

    The New Mexico High School Supercomputing Challenge is an academic program dedicated to increasing interest in science and math among high school students by introducing them to high performance computing. This report provides a summary and evaluation of the first five years of the program, describes the program and shows the impact that it has had on high school students, their teachers, and their communities. Goals and objectives are reviewed and evaluated, growth and development of the program are analyzed, and future directions are discussed.

  18. Beginning Power BI with Excel 2013 self-service business intelligence using Power Pivot, Power View, Power Query, and Power Map

    CERN Document Server

    Clark, Dan

    2014-01-01

    Understanding your company's data has never been easier than with Microsoft's new Power BI package for Excel 2013. Consisting of four powerful tools-Power Pivot, Power View, Power Query and Power Map-Power BI makes self-service business intelligence a reality for a wide range of users, bridging the traditional gap between Excel users, business analysts and IT experts and making it easier for everyone to work together to build the data models that can give you game-changing insights into your business. Beginning Power BI with Excel 2013 guides you step by step through the process of analyzin

  19. DOSFAC2 user's guide

    Energy Technology Data Exchange (ETDEWEB)

    Young, M.L.; Chanin, D.

    1997-12-01

    This document describes the DOSFAC2 code, which is used for generating dose-to-source conversion factors for the MACCS2 code. DOSFAC2 is a revised and updated version of the DOSFAC code that was distributed with version 1.5.11 of the MACCS code. Included are (1) an overview and background of DOSFAC2, (2) a summary of two new functional capabilities, and (3) a user's guide. 20 refs., 5 tabs.

  20. Community-aware user profile enrichment in folksonomy.

    Science.gov (United States)

    Xie, Haoran; Li, Qing; Mao, Xudong; Li, Xiaodong; Cai, Yi; Rao, Yanghui

    2014-10-01

    In the era of big data, collaborative tagging (a.k.a. folksonomy) systems have proliferated as a consequence of the growth of Web 2.0 communities. Constructing user profiles from folksonomy systems is useful for many applications such as personalized search and recommender systems. The identification of latent user communities is one way to better understand and meet user needs. The behavior of users is highly influenced by the behavior of their neighbors or community members, and this can be utilized in constructing user profiles. However, conventional user profiling techniques often encounter data sparsity problems as data from a single user is insufficient to build a powerful profile. Hence, in this paper we propose a method of enriching user profiles based on latent user communities in folksonomy data. Specifically, the proposed approach contains four sub-processes: (i) tag-based user profiles are extracted from a folksonomy tripartite graph; (ii) a multi-faceted folksonomy graph is constructed by integrating tag and image affinity subgraphs with the folksonomy tripartite graph; (iii) random walk distance is used to unify various relationships and measure user similarities; (iv) a novel prototype-based clustering method based on user similarities is used to identify user communities, which are further used to enrich the extracted user profiles. To evaluate the proposed method, we conducted experiments using a public dataset, the results of which show that our approach outperforms previous ones in user profile enrichment. Copyright © 2014 Elsevier Ltd. All rights reserved.
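
    A minimal sketch in the spirit of sub-processes (iii) and (iv): random-walk-with-restart proximity over a small user-user affinity graph derived from shared tags, then picking each user's nearest neighbor as a community member to borrow tags from. The restart formulation, the toy matrix, and the epsilon smoothing are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def rwr_proximity(A, restart=0.15, iters=100):
    """Random-walk-with-restart proximity: row i holds the stationary visit
    probabilities of walks that restart at node i of the affinity graph A."""
    P = A / A.sum(axis=1, keepdims=True)  # row-stochastic transition matrix
    n = len(A)
    Q = np.eye(n)
    for _ in range(iters):
        Q = (1 - restart) * Q @ P + restart * np.eye(n)
    return Q

# Toy folksonomy: 4 users x 5 tags; user-user affinity via shared tag usage.
user_tag = np.array([[3, 1, 0, 0, 0],
                     [2, 2, 0, 0, 1],
                     [0, 0, 4, 1, 0],
                     [0, 0, 3, 2, 0]], dtype=float)
affinity = user_tag @ user_tag.T + 1e-9  # epsilon keeps every row stochastic
prox = rwr_proximity(affinity)

# Enrich user 0's sparse profile with tags from the most similar user.
sims = prox[0].copy()
sims[0] = -1.0
print("most similar user to user 0:", int(sims.argmax()))
```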

  1. Programming the iPhone User Experience

    CERN Document Server

    Boudreaux, Toby

    2009-01-01

    Apple's iPhone and iPod Touch not only feature the world's most powerful mobile operating system, they also usher in a new standard of human-computer interaction through gestural interfaces and multi-touch navigation. This book provides you with a hands-on, example-driven tour of UIKit, Apple's user interface toolkit, and includes common design patterns to help you create new iPhone and iPod Touch user experiences. Using Apple's Cocoa Touch framework, you'll learn how to build applications that respond in unique ways when users tap, slide, swipe, tilt, shake, or pinch the screen. Programmin

  2. Super-computer architecture

    CERN Document Server

    Hockney, R W

    1977-01-01

    This paper examines the design of the top-of-the-range, scientific, number-crunching computers. The market for such computers is not as large as that for smaller machines, but on the other hand it is by no means negligible. The present work-horse machines in this category are the CDC 7600 and IBM 360/195, and over fifty of the former machines have been sold. The types of installation that form the market for such machines are not only the major scientific research laboratories in the major countries-such as Los Alamos, CERN, Rutherford laboratory-but also major universities or university networks. It is also true that, as with sports cars, innovations made to satisfy the top of the market today often become the standard for the medium-scale computer of tomorrow. Hence there is considerable interest in examining present developments in this area. (0 refs).

  3. The GF11 supercomputer

    International Nuclear Information System (INIS)

    Beetem, J.; Weingarten, D.

    1986-01-01

    GF11 is a parallel computer currently under construction at the IBM Yorktown Research Center. The machine incorporates 576 floating-point processors arranged in a modified SIMD architecture. Each has space for 2 Mbytes of memory and is capable of 20 Mflops, giving the total machine a peak of 1.125 Gbytes of memory and 11.52 Gflops. The floating-point processors are interconnected by a dynamically reconfigurable non-blocking switching network. At each machine cycle any of 1024 pre-selected permutations of data can be realized among the processors. The main intended application of GF11 is a class of calculations arising from quantum chromodynamics

  4. The GF11 supercomputer

    International Nuclear Information System (INIS)

    Beetem, J.; Denneau, M.; Weingarten, D.

    1985-01-01

    GF11 is a parallel computer currently under construction at the IBM Yorktown Research Center. The machine incorporates 576 floating-point processors arranged in a modified SIMD architecture. Each has space for 2 Mbytes of memory and is capable of 20 Mflops, giving the total machine a peak of 1.125 Gbytes of memory and 11.52 Gflops. The floating-point processors are interconnected by a dynamically reconfigurable nonblocking switching network. At each machine cycle any of 1024 pre-selected permutations of data can be realized among the processors. The main intended application of GF11 is a class of calculations arising from quantum chromodynamics

  5. Supercomputer debugging workshop `92

    Energy Technology Data Exchange (ETDEWEB)

    Brown, J.S.

    1993-02-01

    This report contains papers or viewgraphs on the following topics: The ABCs of Debugging in the 1990s; Cray Computer Corporation; Thinking Machines Corporation; Cray Research, Incorporated; Sun Microsystems, Inc; Kendall Square Research; The Effects of Register Allocation and Instruction Scheduling on Symbolic Debugging; Debugging Optimized Code: Currency Determination with Data Flow; A Debugging Tool for Parallel and Distributed Programs; Analyzing Traces of Parallel Programs Containing Semaphore Synchronization; Compile-time Support for Efficient Data Race Detection in Shared-Memory Parallel Programs; Direct Manipulation Techniques for Parallel Debuggers; Transparent Observation of XENOOPS Objects; A Parallel Software Monitor for Debugging and Performance Tools on Distributed Memory Multicomputers; Profiling Performance of Inter-Processor Communications in an iWarp Torus; The Application of Code Instrumentation Technology in the Los Alamos Debugger; and CXdb: The Road to Remote Debugging.

  6. The GF11 supercomputer

    International Nuclear Information System (INIS)

    Beetem, J.; Denneau, M.; Weingarten, D.

    1985-01-01

    GF11 is a parallel computer currently under construction at the Yorktown Research Center. The machine incorporates 576 floating-point processors arranged in a modified SIMD architecture. Each processor has space for 2 Mbytes of memory and is capable of 20 MFLOPS, giving the total machine a peak of 1.125 Gbytes of memory and 11.52 GFLOPS. The floating-point processors are interconnected by a dynamically reconfigurable non-blocking switching network. At each machine cycle any of 1024 pre-selected permutations of data can be realized among the processors. The main intended application of GF11 is a class of calculations arising from quantum chromodynamics, a proposed theory of the elementary particles which participate in nuclear interactions

  7. User interface design considerations

    DEFF Research Database (Denmark)

    Andersen, Simon Engedal; Jakobsen, Arne; Rasmussen, Bjarne D.

    1999-01-01

    When designing a user interface for a simulation model there are several important issues to consider: Who is the target user group, and which a priori information can be expected? What questions do the users want answers to, and what questions are answered using a specific model? When developing the user interface of EESCoolTools, these issues led to a series of simulation tools, each with a specific purpose and a carefully selected set of input and output variables. To allow a wider range of questions to be answered by the same model, the user can change between different sets of input and output variables. This feature requires special attention when designing the user interface, and a special approach for controlling the user selection of input and output variables was developed. To obtain a consistent system description, the different input variables are grouped correspondingly...

  8. DIMAC program user's manual

    International Nuclear Information System (INIS)

    Lee, Byoung Oon; Song, Tae Young

    2003-11-01

    DIMAC (A DIspersion Metallic fuel performance Analysis Code) is a computer program for simulating the behavior of dispersion fuel rods under normal operating conditions of HYPER. It computes the one-dimensional temperature distribution and the thermo-mechanical characteristics of the fuel rod under steady-state operating conditions, including swelling and rod deformation. DIMAC was developed based on experience with research reactor fuel. It consists of a temperature calculation module, a mechanical swelling calculation module, and a fuel deformation calculation module, in order to predict the deformation of a dispersion fuel as a function of power history. Because little data on U-TRU-Zr and TRU-Zr characteristics are available, the material data of U-Pu-Zr and Pu-Zr are used in their place. This report is mainly intended as a user's manual for the DIMAC code. A general description of the code, descriptions of the input parameters and of each subroutine, a sample problem, and sample input and partial output are given in this report

  9. DIMAC program user's manual

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Byoung Oon; Song, Tae Young

    2003-11-01

    DIMAC (A DIspersion Metallic fuel performance Analysis Code) is a computer program for simulating the behavior of dispersion fuel rods under normal operating conditions of HYPER. It computes the one-dimensional temperature distribution and the thermo-mechanical characteristics of the fuel rod under steady-state operating conditions, including swelling and rod deformation. DIMAC was developed based on experience with research reactor fuel. It consists of a temperature calculation module, a mechanical swelling calculation module, and a fuel deformation calculation module, in order to predict the deformation of a dispersion fuel as a function of power history. Because little data on U-TRU-Zr and TRU-Zr characteristics are available, the material data of U-Pu-Zr and Pu-Zr are used in their place. This report is mainly intended as a user's manual for the DIMAC code. A general description of the code, descriptions of the input parameters and of each subroutine, a sample problem, and sample input and partial output are given in this report.

  10. LSC Users Manual

    International Nuclear Information System (INIS)

    Redd, A.J.; Ignat, D.W.

    1998-01-01

    The Lower Hybrid Simulation Code (LSC) is a computational model of lower hybrid current drive in the presence of an electric field. Details of geometry, plasma profiles, and circuit equations are treated. Two-dimensional velocity space effects are approximated in a one-dimensional Fokker-Planck treatment. The LSC was originally written to be a module for lower hybrid current drive called by the Tokamak Simulation Code (TSC), which is a numerical model of an axisymmetric tokamak plasma and the associated control systems. The TSC simulates the time evolution of a free boundary plasma by solving the MHD equations on a rectangular computational grid. The MHD equations are coupled to the external circuits (representing poloidal field coils) through the boundary conditions. The code includes provisions for modeling the control system, external heating, and fusion heating. The LSC module can also be called by the TRANSP code. TRANSP represents the plasma with an axisymmetric, fixed-boundary model and focuses on calculation of plasma transport to determine transport coefficients from data on power inputs and parameters reached. This manual covers the basic material needed to use the LSC. If run in conjunction with TSC, the ''TSC Users Manual'' should be consulted. If run in conjunction with TRANSP, on-line documentation will be helpful. A theoretical background of the governing equations and numerical methods is given. Information on obtaining, compiling, and running the code is also provided

  11. An Object-Oriented Architecture for User Interface Management in Distributed Applications

    OpenAIRE

    Denzer, Ralf

    2017-01-01

    User interfaces for large distributed applications have to handle specific problems: the complexity of the application itself and the integration of online-data into the user interface. A main task of the user interface architecture is to provide powerful tools to design and augment the end-user system easily, hence giving the designer more time to focus on user requirements. Our experiences developing a user interface system for a process control room showed that a lot of time during the dev...

  12. Performance Characteristics of Hybrid MPI/OpenMP Scientific Applications on a Large-Scale Multithreaded BlueGene/Q Supercomputer

    KAUST Repository

    Wu, Xingfu; Taylor, Valerie

    2013-01-01

    In this paper, we investigate the performance characteristics of five hybrid MPI/OpenMP scientific applications (two NAS Parallel Benchmarks Multi-Zone, SP-MZ and BT-MZ; an earthquake simulation, PEQdyna; an aerospace application, PMLB; and a 3D particle-in-cell application, GTC) on a large-scale multithreaded Blue Gene/Q supercomputer at Argonne National Laboratory, and quantify the performance gap resulting from using different numbers of threads per node. We use the performance tools and MPI profile and trace libraries available on the supercomputer to analyze and compare the performance of these hybrid scientific applications as the number of OpenMP threads per node increases, and find that increasing the number of threads beyond a point saturates or worsens the performance of these hybrid applications. For the strong-scaling hybrid scientific applications SP-MZ, BT-MZ, PEQdyna and PMLB, using 32 threads per node results in much better application efficiency than using 64 threads per node; as the number of threads per node increases, the FPU (Floating Point Unit) percentage decreases, while the MPI percentage (except for PMLB) and the IPC (instructions per cycle) per core (except for BT-MZ) increase. For the weak-scaling hybrid scientific application GTC, the performance trend (relative speedup) is very similar as the number of threads per node increases, no matter how many nodes (32, 128, or 512) are used. © 2013 IEEE.
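
    The threads-per-node comparison described above reduces to a simple speedup/efficiency calculation; the sketch below shows that arithmetic on hypothetical wall-clock times (the numbers are placeholders, not measurements from the paper).

```python
# Hypothetical runtimes (seconds) for one application on a fixed node count,
# varying only the OpenMP threads per node; only the comparison logic is real.
times = {16: 410.0, 32: 260.0, 64: 295.0}

base_threads = min(times)
base_time = times[base_threads]
for t in sorted(times):
    speedup = base_time / times[t]
    # Efficiency relative to ideal linear gain over the 16-thread baseline.
    efficiency = speedup / (t / base_threads)
    print(f"{t:3d} threads/node: speedup {speedup:4.2f}, efficiency {efficiency:4.2f}")
```

    With numbers like these, 64 threads per node exhibits exactly the saturation effect the abstract reports: raw speedup stalls while per-thread efficiency drops.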

  13. Performance Characteristics of Hybrid MPI/OpenMP Scientific Applications on a Large-Scale Multithreaded BlueGene/Q Supercomputer

    KAUST Repository

    Wu, Xingfu

    2013-07-01

    In this paper, we investigate the performance characteristics of five hybrid MPI/OpenMP scientific applications (two NAS Parallel Benchmarks Multi-Zone, SP-MZ and BT-MZ; an earthquake simulation, PEQdyna; an aerospace application, PMLB; and a 3D particle-in-cell application, GTC) on a large-scale multithreaded Blue Gene/Q supercomputer at Argonne National Laboratory, and quantify the performance gap resulting from using different numbers of threads per node. We use the performance tools and MPI profile and trace libraries available on the supercomputer to analyze and compare the performance of these hybrid scientific applications as the number of OpenMP threads per node increases, and find that increasing the number of threads beyond a point saturates or worsens the performance of these hybrid applications. For the strong-scaling hybrid scientific applications SP-MZ, BT-MZ, PEQdyna and PMLB, using 32 threads per node results in much better application efficiency than using 64 threads per node; as the number of threads per node increases, the FPU (Floating Point Unit) percentage decreases, while the MPI percentage (except for PMLB) and the IPC (instructions per cycle) per core (except for BT-MZ) increase. For the weak-scaling hybrid scientific application GTC, the performance trend (relative speedup) is very similar as the number of threads per node increases, no matter how many nodes (32, 128, or 512) are used. © 2013 IEEE.

  14. Computational fluid dynamics: complex flows requiring supercomputers. January 1975-July 1988 (Citations from the INSPEC: Information Services for the Physics and Engineering Communities data base). Report for January 1975-July 1988

    International Nuclear Information System (INIS)

    1988-08-01

    This bibliography contains citations concerning computational fluid dynamics (CFD), a new method in computational science to perform complex flow simulations in three dimensions. Applications include aerodynamic design and analysis for aircraft, rockets, and missiles, and automobiles; heat-transfer studies; and combustion processes. Included are references to supercomputers, array processors, and parallel processors where needed for complete, integrated design. Also included are software packages and grid-generation techniques required to apply CFD numerical solutions. Numerical methods for fluid dynamics, not requiring supercomputers, are found in a separate published search. (Contains 83 citations fully indexed and including a title list.)

  15. International user studies

    DEFF Research Database (Denmark)

    Nielsen, Lene; Madsen, Sabine; Jensen, Iben

    In this report, we present the results of a research project about international user studies. The project has been carried out by researchers from the Center for Persona Research and Application, The IT University in Copenhagen, and the Department of Learning and Philosophy, Aalborg University in Sydhavnen, and it is funded by InfinIT. Based on a qualitative interview study with 15 user researchers from 11 different companies, we have investigated how companies collect and present data about users on international markets. Key findings are: Companies do not collect data about end users in all the countries/regions they operate in. Instead, they focus on a few strategic markets. International user studies tend to be large-scale studies that involve the effort of many internal as well as external/local human resources. The studies typically cover 2-4 countries/regions and many end users in each country...

  16. Application of the finite element method in the area of tension between technical progress and user requirements. Pt. 2. Review of development with applications in railway technology and power station technology

    International Nuclear Information System (INIS)

    Rudolph, Juergen; Bergholz, Steffen; Lomoth, Hans-Juergen

    2010-01-01

    A historical review and selected examples illustrate the vast importance of a flexible programming environment as offered, e.g., by the ANSYS Parametric Design Language (APDL) and the implementation of batch jobs in FEA. The batch-job approach is particularly well suited for engineering problems and supports quality assurance. Implementation in FE program systems should enable the user to use all available software options at any given time for an effective analysis and postprocessing workflow. It is shown that the implementation of an automatic linearisation routine and the evaluation of loads referred to effective areas are particularly important functionalities. The user should be able to implement his complete workflow in the programming environment. If a decision must be made, this consideration should be given priority over user-friendliness of the program. (orig.)

  17. MADS Users' Guide

    Science.gov (United States)

    Moerder, Daniel D.

    2014-01-01

    MADS (Minimization Assistant for Dynamical Systems) is a trajectory optimization code in which a user-specified performance measure is directly minimized, subject to constraints placed on a low-order discretization of user-supplied plant ordinary differential equations. This document describes the mathematical formulation of the set of trajectory optimization problems for which MADS is suitable, and describes the user interface. Usage examples are provided.

  18. Measuring user engagement

    CERN Document Server

    Lalmas, Mounia; Yom-Tov, Elad

    2014-01-01

    User engagement refers to the quality of the user experience that emphasizes the positive aspects of interacting with an online application and, in particular, the desire to use that application longer and repeatedly. User engagement is a key concept in the design of online applications (whether for desktop, tablet or mobile), motivated by the observation that successful applications are not just used, but are engaged with. Users invest time, attention, and emotion in their use of technology, and seek to satisfy pragmatic and hedonic needs. Measurement is critical for evaluating whether online

  19. Web-users identification

    OpenAIRE

    Kazakov, Ilja

    2017-01-01

    The main goal of this thesis is to determine what parameters (of a browser fingerprint) are necessary in order to identify a user within a specific interval (number of users). The secondary goal is to try to assign fingerprints to users. The final goal is to find out the weight (usefulness) of each parameter. The thesis consists of two main parts: data collection (up to 10000 users) and analysis of the data. As a result, with the help of an implemented JavaScript plugin, a database, which consists of...
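
    One common proxy for the per-parameter weight (usefulness) the thesis asks about is the Shannon entropy of each parameter across the collected fingerprints: the more bits, the more identifying power. A minimal sketch under that assumption, with made-up sample records standing in for the plugin's real data.

```python
import math
from collections import Counter

def entropy_bits(values):
    """Shannon entropy (bits) of one fingerprint parameter across users:
    higher entropy means more identifying power."""
    counts = Counter(values)
    n = len(values)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# Hypothetical collected fingerprints (parameter -> value, one dict per user).
fingerprints = [
    {"user_agent": "UA-1", "timezone": "UTC+1", "screen": "1920x1080"},
    {"user_agent": "UA-2", "timezone": "UTC+1", "screen": "1366x768"},
    {"user_agent": "UA-3", "timezone": "UTC+2", "screen": "1920x1080"},
    {"user_agent": "UA-1", "timezone": "UTC+1", "screen": "2560x1440"},
]
for param in ("user_agent", "timezone", "screen"):
    h = entropy_bits([fp[param] for fp in fingerprints])
    print(f"{param}: {h:.2f} bits")
```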

  20. User participation in implementation

    DEFF Research Database (Denmark)

    Fleron, Benedicte; Rasmussen, Rasmus; Simonsen, Jesper

    2012-01-01

    Systems development has been claimed to benefit from user participation, yet user participation in implementation activities may be more common and is a growing focus of participatory-design work. We investigate the effect of the extensive user participation in the implementation of a clinical...... experienced more uncertainty and frustration than management and non-participating staff, especially concerning how to run an implementation process and how to understand and utilize the configuration possibilities of the system. This suggests that user participation in implementation introduces a need...

  1. MLF user program

    International Nuclear Information System (INIS)

    Kamiyama, Takashi; Ikeda, Yujiro

    2008-01-01

    The user program of J-PARC/MLF is overviewed. Since MLF will be one of the major neutron facilities in the world, an international-standard system for the user program is expected. It is also expected to establish a system to promote users from industry. The MLF user program is based on the IUPAP recommendation on the selection of proposals. Open and closed access modes (biannual, regular, rapid, etc.) will be provided. All the features of the system are being introduced to maximize both the scientific and engineering outputs from MLF. (author)

  2. HTGR Application Economic Model Users' Manual

    International Nuclear Information System (INIS)

    Gandrik, A.M.

    2012-01-01

    The High Temperature Gas-Cooled Reactor (HTGR) Application Economic Model was developed at the Idaho National Laboratory for the Next Generation Nuclear Plant Project. The HTGR Application Economic Model calculates either the required selling price of power and/or heat for a given internal rate of return (IRR), or the IRR for power and/or heat being sold at the market price. The user can generate these economic results for a range of reactor outlet temperatures; with and without power cycles, including either a Brayton or Rankine cycle; for the demonstration plant, first-of-a-kind, or nth-of-a-kind project phases; for up to 16 reactor modules; and for module ratings of 200, 350, or 600 MWt. This users' manual contains the mathematical models and operating instructions for the HTGR Application Economic Model. Instructions, screenshots, and examples are provided to guide the user through the model. It was designed for users who are familiar with the HTGR design, Excel, and engineering economics. Modification of the HTGR Application Economic Model should only be performed by users familiar with the HTGR and its applications, Excel, and Visual Basic.
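
    At the core of such a model is an IRR computation, i.e., finding the discount rate at which the net present value of the project cash flows is zero. A minimal sketch follows, using bisection on hypothetical cash flows; it is not the INL spreadsheet's actual formulas.

```python
def npv(rate, cashflows):
    """Net present value of yearly cash flows, year 0 first."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=-0.99, hi=1.0, tol=1e-8):
    """Internal rate of return by bisection; assumes one sign change of NPV."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(lo, cashflows) * npv(mid, cashflows) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# Hypothetical plant: capital outlay, then level net revenue for 30 years.
flows = [-2000.0] + [180.0] * 30
print(f"IRR = {irr(flows):.2%}")
```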

  3. Demonstrator 1: User Interface and User Functions

    DEFF Research Database (Denmark)

    Gram, Christian

    1999-01-01

    Describes the user interface and its functionality in a prototype system used for a virtual seminar session. The functionality is restricted to what is needed for a distributed seminar discussion among not too many people. The system is designed to work with the participants distributed at several...

  4. MicroPRIS user's guide

    International Nuclear Information System (INIS)

    1991-01-01

    MicroPRIS is a new service of the IAEA Power Reactor Information System (PRIS) for the Member States of the IAEA. MicroPRIS makes the IAEA database on nuclear power plants and their operating experience available to Member States on computer diskettes, in a form readily accessible by standard, commercially available personal computer packages. The aim of this publication is to provide users of the PC version of PRIS data with a description of the subset of the full PRIS database contained in MicroPRIS (release 1990), descriptions of files and file structures, field descriptions and definitions, an extraction and selection guide, and the method of calculation of a number of important performance indicators used by the IAEA
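
    As an illustration of the kind of performance indicator MicroPRIS documents, the sketch below computes a load factor: energy actually generated relative to what the reference unit power could deliver over the period. The function signature and the figures are illustrative assumptions, not values from the publication.

```python
def load_factor(net_energy_gwh, reference_power_mw, period_hours):
    """Load factor (%) = net energy produced / (reference power x period)."""
    max_possible_gwh = reference_power_mw * period_hours / 1000.0
    return 100.0 * net_energy_gwh / max_possible_gwh

# Hypothetical unit: 900 MW reference power over an 8760-hour year.
print(f"{load_factor(6500.0, 900.0, 8760):.1f} %")  # about 82.4 %
```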

  5. Heavy-tailed distribution of the SSH Brute-force attack duration in a multi-user environment

    Science.gov (United States)

    Lee, Jae-Kook; Kim, Sung-Jun; Park, Chan Yeol; Hong, Taeyoung; Chae, Huiseung

    2016-07-01

    Quite a number of cyber-attacks take place against supercomputers that provide high-performance computing (HPC) services to public researchers. In particular, although the secure shell protocol (SSH) brute-force attack is one of the traditional attack methods, it is still being used. Because stealth attacks that feign regular access may occur, they are even harder to detect. In this paper, we introduce methods to detect SSH brute-force attacks by analyzing the server's unsuccessful access logs and the firewall's drop events in a multi-user environment. Then, we analyze the durations of the SSH brute-force attacks that are detected by applying these methods. The results of an analysis of about 10 thousand attack source IP addresses show that the behaviors of abnormal users mounting SSH brute-force attacks follow the human-dynamics characteristics of a typical heavy-tailed distribution.
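
    A minimal sketch of the log-analysis step: group failed logins by source IP and take each source's attack duration as the time between its first and last failure. The 'Failed password' pattern matches the common OpenSSH auth.log format; the attempt threshold is an assumption, and the paper's firewall drop-event correlation is not reproduced.

```python
import re
from collections import defaultdict
from datetime import datetime

# Matches common OpenSSH auth.log failures, e.g.
# "Jun 12 03:14:15 host sshd[123]: Failed password for root from 10.0.0.7 port 4242 ssh2"
FAIL = re.compile(r"^(\w{3}\s+\d+\s[\d:]{8}).*Failed password .* from ([\d.]+)")

def attack_durations(lines, year=2016, min_attempts=10):
    """Group failed logins by source IP; an IP with >= min_attempts failures
    is treated as a brute-force source whose duration is last - first."""
    seen = defaultdict(list)
    for line in lines:
        m = FAIL.match(line)
        if m:
            ts = datetime.strptime(f"{year} {m.group(1)}", "%Y %b %d %H:%M:%S")
            seen[m.group(2)].append(ts)
    return {ip: (max(t) - min(t)).total_seconds()
            for ip, t in seen.items() if len(t) >= min_attempts}

# Typical use on a server log:
#   durations = attack_durations(open("/var/log/auth.log"))
# A heavy tail shows up as a long-duration minority dominating the histogram.
```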

  6. Sharing solutions - The users' group approach

    International Nuclear Information System (INIS)

    Kania, G.; Winter, K.

    1991-01-01

    Regulatory compliance, operating efficiency, and plant-life extension are common goals shared by all nuclear power plants. To achieve these goals, nuclear utilities must be proactive and responsive to the regulatory agencies, work together with each other in the sharing of operating experiences and solution to problems, and develop long-term working relationships with an even smaller number of quality suppliers. Users' and owners' groups are one of the most effective means of accomplishing these objectives. Users' groups facilitate communication between nuclear power plants and provide an interactive vendor interface. Both the utilities and suppliers benefit through shared information and improved customer feedback. This paper describes the evolution and experiences of the Sorrento Electronics (SE) Radiation Monitoring System (RMS) Users' Group. The authors highlight the group's past successes and plans for the future

  7. EMI New User Communities

    CERN Document Server

    Riedel, M

    2013-01-01

    This document provides information about new user communities that directly or indirectly take advantage of EMI Products. Each user community is described via one specific EMI product use case, in order to understand and communicate the current usage of EMI Products in practice.

  8. Additional user needs

    International Nuclear Information System (INIS)

    Rorschach, H.E.; Hayter, J.B.

    1986-01-01

    This paper summarizes the conclusions of a discussion group on users' needs held at the Workshop on an Advanced Steady-State Neutron Facility. The discussion was devoted to reactor characteristics, special facilities and siting considerations suggested by user needs. (orig.)

  9. Users in Persistent Action

    DEFF Research Database (Denmark)

    Christiansen, John K.; Gasparin, Marta; Varnes, Claus J.

    2012-01-01

    of the hybrid collective to include the press and distribution channels to want it back. All actors in collective actions can become lead users when supported by establishing alliances. This perspective differs from Von Hippel (1986), who claims that the trend needs to be defined before the lead users...

  10. User Interface History

    DEFF Research Database (Denmark)

    Jørgensen, Anker Helms; Myers, Brad A

    2008-01-01

    User Interfaces have been around as long as computers have existed, even well before the field of Human-Computer Interaction was established. Over the years, some papers on the history of Human-Computer Interaction and User Interfaces have appeared, primarily focusing on the graphical interface e...

  11. Users are problem solvers!

    NARCIS (Netherlands)

    Brouwer-Janse, M.D.

    1991-01-01

    Most formal problem-solving studies use verbal protocol and observational data of problem solvers working on a task. In user-centred product-design projects, observational studies of users are frequently used too. In the latter case, however, systematic control of conditions, in-depth analysis and

  12. Understanding and Mastering Dynamics in Computing Grids Processing Moldable Tasks with User-Level Overlay

    CERN Document Server

    Moscicki, Jakub Tomasz

    Scientific communities are using a growing number of distributed systems, from local batch systems, community-specific services and supercomputers to general-purpose, global grid infrastructures. Increasing the research capabilities for science is the raison d'être of such infrastructures, which provide access to diversified computational, storage and data resources at large scales. Grids are rather chaotic, highly heterogeneous, decentralized systems where unpredictable workloads, component failures and variability of execution environments are commonplace. Understanding and mastering the heterogeneity and dynamics of such distributed systems is prohibitive for end users if they are not supported by appropriate methods and tools. The time cost to learn and use the interfaces and idiosyncrasies of different distributed environments is another challenge. Obtaining more reliable application execution times and boosting parallel speedup are important to increase the research capabilities of scientific communities. L...

  13. User Frustrations as Opportunities

    Directory of Open Access Journals (Sweden)

    Michael Weiss

    2012-04-01

    User frustrations are an excellent source of new product ideas. Starting with this observation, this article describes an approach that entrepreneurs can use to discover business opportunities. Opportunity discovery starts with a problem that the user has, but may not be able to articulate. User-centered design techniques can help elicit those latent needs. The entrepreneur should then try to understand how users are solving their problem today, before proposing a solution that draws on the unique skills and technical capabilities available to the entrepreneur. Finally, an in-depth understanding of the user allows the entrepreneur to hone in on the points of difference and resonance that are the foundation of a strong customer value proposition.

  14. User interface support

    Science.gov (United States)

    Lewis, Clayton; Wilde, Nick

    1989-01-01

    Space construction will require heavy investment in the development of a wide variety of user interfaces for the computer-based tools that will be involved at every stage of construction operations. Using today's technology, user interface development is very expensive for two reasons: (1) specialized and scarce programming skills are required to implement the necessary graphical representations and complex control regimes for high-quality interfaces; (2) iteration on prototypes is required to meet user and task requirements, since these are difficult to anticipate with current (and foreseeable) design knowledge. We are attacking this problem by building a user interface development tool based on extensions to the spreadsheet model of computation. The tool provides high-level support for graphical user interfaces and permits dynamic modification of interfaces, without requiring conventional programming concepts and skills.

  15. The PANTHER User Experience

    Energy Technology Data Exchange (ETDEWEB)

    Coram, Jamie L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Morrow, James D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Perkins, David Nikolaus [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-09-01

    This document describes the PANTHER R&D Application, a proof-of-concept user interface application developed under the PANTHER Grand Challenge LDRD. The purpose of the application is to explore interaction models for graph analytics, drive algorithmic improvements from an end-user point of view, and support demonstration of PANTHER technologies to potential customers. The R&D Application implements a graph-centric interaction model that exposes analysts to the algorithms contained within the GeoGraphy graph analytics library. Users define geospatial-temporal semantic graph queries by constructing search templates based on nodes, edges, and the constraints among them. Users then analyze the results of the queries using both geo-spatial and temporal visualizations. Development of this application has made user experience an explicit driver for project and algorithmic level decisions that will affect how analysts one day make use of PANTHER technologies.

  16. RADTRAN 4: User guide

    International Nuclear Information System (INIS)

    Neuhauser, K.S.; Kanipe, F.L.

    1992-01-01

    RADTRAN 4 is used to evaluate the radiological consequences of incident-free transportation, as well as the radiological risks from vehicular accidents occurring during transportation. This User Guide is Volume 3 in the four-volume documentation of the RADTRAN 4 computer code for transportation risk analysis. The other three volumes are Volume 1, the Executive Summary; Volume 2, the Technical Manual; and Volume 4, the Programmer's Manual. The theoretical and calculational bases for the operations performed by RADTRAN 4 are discussed in Volume 2. Throughout this User Guide the reader will be referred to Volume 2 for detailed discussions of certain RADTRAN features. This User Guide supersedes the document ''RADTRAN III'' by Madsen et al. (1983). This RADTRAN 4 User Guide specifies and describes the required data, control inputs, input sequences, user options, program limitations, and other activities necessary for execution of the RADTRAN 4 computer code

  17. Performance Analysis and Scaling Behavior of the Terrestrial Systems Modeling Platform TerrSysMP in Large-Scale Supercomputing Environments

    Science.gov (United States)

    Kollet, S. J.; Goergen, K.; Gasper, F.; Shresta, P.; Sulis, M.; Rihani, J.; Simmer, C.; Vereecken, H.

    2013-12-01

    In studies of the terrestrial hydrologic, energy and biogeochemical cycles, integrated multi-physics simulation platforms take a central role in characterizing non-linear interactions, variances and uncertainties of system states and fluxes in reciprocity with observations. Recently developed integrated simulation platforms attempt to honor the complexity of the terrestrial system across multiple time and space scales from the deeper subsurface including groundwater dynamics into the atmosphere. Technically, this requires the coupling of atmospheric, land surface, and subsurface-surface flow models in supercomputing environments, while ensuring a high-degree of efficiency in the utilization of e.g., standard Linux clusters and massively parallel resources. A systematic performance analysis including profiling and tracing in such an application is crucial in the understanding of the runtime behavior, to identify optimum model settings, and is an efficient way to distinguish potential parallel deficiencies. On sophisticated leadership-class supercomputers, such as the 28-rack 5.9 petaFLOP IBM Blue Gene/Q 'JUQUEEN' of the Jülich Supercomputing Centre (JSC), this is a challenging task, but even more so important, when complex coupled component models are to be analysed. Here we want to present our experience from coupling, application tuning (e.g. 5-times speedup through compiler optimizations), parallel scaling and performance monitoring of the parallel Terrestrial Systems Modeling Platform TerrSysMP. The modeling platform consists of the weather prediction system COSMO of the German Weather Service; the Community Land Model, CLM of NCAR; and the variably saturated surface-subsurface flow code ParFlow. The model system relies on the Multiple Program Multiple Data (MPMD) execution model where the external Ocean-Atmosphere-Sea-Ice-Soil coupler (OASIS3) links the component models. TerrSysMP has been instrumented with the performance analysis tool Scalasca and analyzed

  18. Pulsed power

    International Nuclear Information System (INIS)

    Anon.

    1978-01-01

    Options for EBFA-I were narrowed as data became available from Proto II, MITE and power flow research. The solid dielectric capacitors proposed for intermediate stores have been eliminated for EBFA because of low reliability. Water capacitors based on data from Proto II and Hydra will be used on EBFA. Improved SF-6 switching data from Proto II show that present parameters are adequate for EBFA. A switch jitter of 3 ns with reliability exceeding 0.986 was demonstrated. Proto II has achieved the design output and is now a user-oriented accelerator. Several desirable features of the disc accelerator were proven. Initial magnetic insulation experiments on a 1.5-m-long triplate show small energy and power losses. Theoretical understanding of magnetic insulation was greatly enhanced, and agreement between projections and experiment was obtained

  19. Research on Battery Energy Storage System Based on User Side

    Science.gov (United States)

    Wang, Qian; Zhang, Yichi; Yun, Zejian; Wang, Xuguang; Zhang, Dong; Bian, Di

    2018-01-01

    This paper introduces the effects of user-side energy storage on the user side and the network side, and a battery energy storage system for the user side is designed. The main circuit topology of the battery energy storage system based on the user side is given; the structure is mainly composed of two parts, a bidirectional half-bridge DC-DC converter and a bidirectional DC-AC converter. A control strategy combining the battery's charging and discharging characteristics is proposed to decouple the grid side and the energy storage side, and the block diagram of the charging and discharging control of the energy storage system is given. The simulation results show that the user-side battery energy storage system can not only realize reactive power compensation for the low-voltage distribution network, but also improve the power quality for users.
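
    The record's control strategy operates at converter level; at the planning level the same charge/discharge idea is often illustrated with a threshold-based peak-shaving rule: discharge above a demand limit, recharge below it, within power and energy bounds. Everything below (thresholds, profile, ratings) is a hypothetical sketch, not the paper's controller.

```python
def dispatch(load_kw, peak_limit_kw, capacity_kwh, soc_kwh, dt_h=1.0,
             max_rate_kw=50.0):
    """One step of threshold-based peak shaving: discharge above the limit,
    recharge below it, respecting energy and power bounds."""
    if load_kw > peak_limit_kw:                       # shave the peak
        p = min(load_kw - peak_limit_kw, max_rate_kw, soc_kwh / dt_h)
        return -p, soc_kwh - p * dt_h                 # negative = discharge
    p = min(peak_limit_kw - load_kw, max_rate_kw,
            (capacity_kwh - soc_kwh) / dt_h)          # room left to recharge
    return p, soc_kwh + p * dt_h

# Hypothetical daily profile (kW), 200 kWh battery starting half full.
soc = 100.0
for load in [80, 120, 260, 310, 240, 150]:
    p, soc = dispatch(load, peak_limit_kw=250.0, capacity_kwh=200.0, soc_kwh=soc)
    print(f"load {load:3d} kW -> battery {p:+6.1f} kW, SoC {soc:6.1f} kWh")
```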

  20. Satellite communication from user to user

    Science.gov (United States)

    Gern, Manfred

    Satellite communication systems which allow a multitude of user-to-user, point-to-point, and multipoint connections are presented. The bit rates are 64 kbit/s and multiples thereof, up to 1.92 Mbit/s. If required, the ground stations are installed at the customer's site or at suitable locations in order to serve several customers. However, technical requirements for station location also have to be fulfilled, in order to avoid interference with terrestrial radio services. The increasing number of participants in Satellite Multi Service and INTELSAT Business Services makes it necessary to solve the communication problem with low-cost techniques. The changes at the German Federal Post Office also permit the economic use of satellite radio techniques for short distances.

  1. Facility - Radiation Source Features and User Applications

    International Nuclear Information System (INIS)

    Gover, A.; Abramovich, A.; Eichenbaum, A.L.; Kanter, M.; Sokolowski, J.; Yahalom, A.; Shiloh, J.; Schnitzer, I.; Pinhasi, Y.

    1999-01-01

    Recent measurements of the radiation characteristics of the tandem FEL prove that the device operates as a high-quality, tunable radiation source in the mm-wave regime. A tuning range of 60% around a central frequency of 100 GHz was demonstrated by varying the tandem accelerator energy from 1 to 1.5 MeV with 1-1.5 A beam current. A Fourier-transform-limited linewidth of Δf/f = 10^-5 was measured in single-mode lasing operation. The FEL power in pulsed operation (10 μs) was 10 kW. Operating the FEL at high repetition rate with 0.1 to 1 ms pulses will make it possible to obtain high average power (1 kW) and narrow linewidth (10^-7). Based on these exceptional properties of the FEL as a high-quality spectroscopic tool and as a source of high-average-power radiation, the FEL consortium, supported by a body of 10 radiation user groups from various universities and research institutes, embarks on a new project for the development of an Israeli FEL radiation user laboratory. The laboratory is presently in the design and building stage on the academic campus in Ariel. The FEL will be moved to this laboratory after completion of the X-ray protection structure in the allocated building. In the first phase of development, the radiation user laboratory will consist of three user stations: (a) a spectroscopic station (low average power), with material studies planned in the fields of H.T.S.C., submicron semiconductor devices, and gases; (b) a material processing station (high average power), with experiments planned in thin-film ceramic sintering (including H.T.S.C.), functionally graded materials, surface treatment of metals, and interaction with biological tissues; (c) an atmospheric study station, with experiments planned in aerosol, dust and cloud mapping, remote sensing of gases, and wide-band mm-wave communication. The FEL experimental results and the user laboratory features will be described

  2. The design and implementation of cost-effective algorithms for direct solution of banded linear systems on the vector processor system 32 supercomputer

    Science.gov (United States)

    Samba, A. S.

    1985-01-01

    The problem of solving banded linear systems by direct (non-iterative) techniques on the Vector Processor System (VPS) 32 supercomputer is considered. Two efficient direct methods for solving banded linear systems on the VPS 32 are described. The vector cyclic reduction (VCR) algorithm is discussed in detail. The performance of the VCR on a three-parameter model problem is also illustrated. The VCR is an adaptation of the conventional point cyclic reduction algorithm. The second direct method is 'Customized Reduction of Augmented Triangles' (CRAT). CRAT has the dominant characteristics of an efficient VPS 32 algorithm. CRAT is tailored to the pipeline architecture of the VPS 32, and as a consequence the algorithm is implicitly vectorizable.
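
    For reference, plain (serial) point cyclic reduction for a tridiagonal system, the algorithm that VCR adapts, looks as follows; the sketch assumes n = 2**k - 1 unknowns and does not reproduce the VPS 32 vectorization details.

```python
import numpy as np

def cyclic_reduction(a, b, c, d):
    """Solve a tridiagonal system by serial point cyclic reduction.
    a: sub-diagonal, b: diagonal, c: super-diagonal, d: right-hand side.
    Requires n = 2**k - 1 unknowns; a[0] and c[-1] are boundary dummies."""
    a, b, c, d = (np.asarray(v, dtype=float).copy() for v in (a, b, c, d))
    n = len(b)
    k = int(np.log2(n + 1))
    assert 2 ** k - 1 == n, "n must be 2**k - 1"
    # Forward reduction: each level eliminates the odd-indexed unknowns,
    # halving the number of coupled equations.
    for l in range(1, k):
        s, h = 2 ** l, 2 ** (l - 1)
        for i in range(s - 1, n, s):
            al = -a[i] / b[i - h]
            ga = -c[i] / b[i + h]
            b[i] += al * c[i - h] + ga * a[i + h]
            d[i] += al * d[i - h] + ga * d[i + h]
            a[i] = al * a[i - h]
            c[i] = ga * c[i + h]
    x = np.zeros(n)
    mid = 2 ** (k - 1) - 1
    x[mid] = d[mid] / b[mid]   # only the middle equation remains uncoupled
    # Back substitution: fill in the eliminated unknowns level by level.
    for l in range(k - 2, -1, -1):
        s, h = 2 ** (l + 1), 2 ** l
        for i in range(h - 1, n, s):
            left = a[i] * x[i - h] if i - h >= 0 else 0.0
            right = c[i] * x[i + h] if i + h < n else 0.0
            x[i] = (d[i] - left - right) / b[i]
    return x

# Example: 7-unknown system with all-ones solution; boundary entries zeroed.
a = np.array([0.0, 1, 1, 1, 1, 1, 1])
b = np.full(7, 4.0)
c = np.array([1.0, 1, 1, 1, 1, 1, 0])
d = np.array([5.0, 6, 6, 6, 6, 6, 5])
print(np.allclose(cyclic_reduction(a, b, c, d), np.ones(7)))  # True
```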

  3. Use of QUADRICS supercomputer as embedded simulator in emergency management systems; Utilizzo del calcolatore QUADRICS come simulatore in linea in un sistema di gestione delle emergenze

    Energy Technology Data Exchange (ETDEWEB)

    Bove, R.; Di Costanzo, G.; Ziparo, A. [ENEA, Centro Ricerche Casaccia, Rome (Italy). Dip. Energia

    1996-07-01

    The experience gained in implementing MRBT, an atmospheric dispersion model for short-duration releases, is reported. The model was implemented on a QUADRICS-Q1 supercomputer. A description of the MRBT model is given first. It is an analytical model for studying the spreading of light gases released into the atmosphere by accidents. The solution of the diffusion equation is Gaussian-like and yields the concentration of the released pollutant substance as a function of space and time. The QUADRICS architecture is then introduced, and the implementation of the model is described. Finally, the integration of the QUADRICS-based model as an embedded simulator in an emergency management system is considered.
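
    A minimal sketch of the Gaussian-like solution the record describes, here written as a standard Gaussian puff for an instantaneous point release with a ground-reflection image term; the dispersion sigmas and release parameters are hypothetical, not the MRBT parameterization.

```python
import numpy as np

def gaussian_puff(x, y, z, t, q, u, sx, sy, sz, h=0.0):
    """Concentration of an instantaneous point release of mass q (kg) at
    time t, advected along x with wind speed u (m/s); sx/sy/sz are the
    dispersion sigmas (m) at time t. Ground reflection is added via an
    image source at height -h."""
    norm = q / ((2 * np.pi) ** 1.5 * sx * sy * sz)
    ax = np.exp(-((x - u * t) ** 2) / (2 * sx ** 2))
    ay = np.exp(-(y ** 2) / (2 * sy ** 2))
    az = (np.exp(-((z - h) ** 2) / (2 * sz ** 2))
          + np.exp(-((z + h) ** 2) / (2 * sz ** 2)))  # image term
    return norm * ax * ay * az

# Hypothetical: 1 kg release, 2 m/s wind, ground-level value 100 s later.
print(gaussian_puff(x=200.0, y=0.0, z=0.0, t=100.0, q=1.0, u=2.0,
                    sx=30.0, sy=30.0, sz=15.0))
```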

  4. Sandia`s network for Supercomputing `94: Linking the Los Alamos, Lawrence Livermore, and Sandia National Laboratories using switched multimegabit data service

    Energy Technology Data Exchange (ETDEWEB)

    Vahle, M.O.; Gossage, S.A.; Brenkosh, J.P. [Sandia National Labs., Albuquerque, NM (United States). Advanced Networking Integration Dept.

    1995-01-01

    Supercomputing `94, a high-performance computing and communications conference, was held November 14th through 18th, 1994 in Washington DC. For the past four years, Sandia National Laboratories has used this conference to showcase and focus its communications and networking endeavors. At the 1994 conference, Sandia built a Switched Multimegabit Data Service (SMDS) network running at 44.736 megabits per second linking its private SMDS network between its facilities in Albuquerque, New Mexico and Livermore, California to the convention center in Washington, D.C. For the show, the network was also extended from Sandia, New Mexico to Los Alamos National Laboratory and from Sandia, California to Lawrence Livermore National Laboratory. This paper documents and describes this network and how it was used at the conference.

  5. Game user experience evaluation

    CERN Document Server

    Bernhaupt, Regina

    2015-01-01

    Evaluating interactive systems for their user experience (UX) is a standard approach in industry and research today. This book explores the areas of game design and development and Human Computer Interaction (HCI) as ways to understand the various contributing aspects of the overall gaming experience. Fully updated, extended and revised, this book is based upon the original publication Evaluating User Experience in Games, and provides updated methods and approaches ranging from user-orientated methods to game-specific approaches. New and emerging methods and areas explored include physiologi

  6. Safety for Users

    CERN Multimedia

    HR Department

    2008-01-01

    CERN welcomes more than 8000 Users every year. The PH Department as host to these scientific associates requires the highest safety standards. The PH Safety Office has published a Safety Flyer for Users. Important safety topics and procedures are presented. Although the Flyer is intended primarily to provide safety information for Users, the PH Safety Office invites all those on the CERN sites to keep a copy of the flyer as it gives guidance in matters of safety and explains what to do in the event of an emergency. Link: http://ph-dep.web.cern.ch/ph-dep/Safety/SafetyOffice.html PH-Safety Office PH Department

  7. Safety for Users

    CERN Multimedia

    HR Department

    2008-01-01

    CERN welcomes more than 8000 Users every year. The PH Department as host to these scientific associates requires the highest safety standards. The PH Safety Office has published a safety flyer for Users. Important safety topics and procedures are presented. Although the flyer is intended primarily to provide safety information for Users, the PH Safety Office invites all those on the CERN sites to keep a copy of the flyer as it gives guidance in matters of safety and explains what to do in the event of an emergency. The flyer is available at: http://ph-dep.web.cern.ch/ph-dep/Safety/SafetyOffice.html PH-Safety Office PH Department

  8. Designing for user engagement

    CERN Document Server

    Geisler, Cheryl

    2013-01-01

    Designing for User Engagement on the Web: 10 Basic Principles is concerned with making user experience engaging. The cascade of social web applications we are now familiar with - blogs, consumer reviews, wikis, and social networking - are all engaging experiences. But engagement is an increasingly common goal in business and productivity environments as well. This book provides a foundation for all those seeking to design engaging user experiences rich in communication and interaction. Combining a handbook on basic principles with case studies, it provides readers with a ric

  9. Using Vim as User Interface for Your Applications

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    The Vim editor offers one of the cleverest user interfaces. It's why many developers write programs with vi keyboard bindings. Now, imagine how powerful it gets to build applications literally on top of Vim itself.

  10. Learning power point 2000 easily

    Energy Technology Data Exchange (ETDEWEB)

    Mon, In Su; Je, Jung Suk

    2000-05-15

    This book introduces PowerPoint 2000: what PowerPoint is, what you can do with PowerPoint 2000, and whether PowerPoint 2000 can be installed on your computer. It covers running PowerPoint and the basics of PowerPoint, such as creating a new presentation, writing text, using text boxes, and changing font size, color and shape; becoming a power user; inserting WordArt; and creating a new file. It also deals with figures, charts, graphs, making multimedia files, presentations, and PowerPoint know-how for teachers and company workers.

  11. User interface development

    Science.gov (United States)

    Aggrawal, Bharat

    1994-01-01

    This viewgraph presentation describes the development of user interfaces for OS/2 versions of computer codes for the analysis of seals. Current status, new features, work in progress, and future plans are discussed.

  12. SEVERO code - user's manual

    International Nuclear Information System (INIS)

    Sacramento, A.M. do.

    1989-01-01

    This user's manual contains all the necessary information concerning the use of the SEVERO code. This computer code deals with the statistics of extremes: extreme winds, extreme precipitation, and flooding hazard risk analysis. (A.C.A.S.)

  13. EPA User Personas

    Science.gov (United States)

    Learn how EPA's three web user personas (Information Consumer, Information Intermediary, and Information Interpreter) can help you identify appropriate top audiences and top tasks for a topic or web area.

  14. Bevalac user's handbook

    International Nuclear Information System (INIS)

    1990-04-01

    This report is a users manual on the Bevalac accelerator facility. This paper discusses: general information; the Bevalac and its operation; major facilities and experimental areas; and experimental equipment

  15. VIERS- User Preference Service

    Data.gov (United States)

    Department of Veterans Affairs — The Preferences service provides a means to store, retrieve, and manage user preferences. The service supports definition of enterprise wide preferences, as well as...

  16. AVERT User Manual

    Science.gov (United States)

    AVERT is a flexible modeling framework with a simple user interface designed specifically to meet the needs of state air quality planners and other interested stakeholders. Use this guide to get started.

  17. EMAP Users Manual.

    Science.gov (United States)

    Kotz, Arnold; Redondo, Rory

    Presented is the user's manual for the Educational Manpower Information Sources Project (EMAP), an information file containing approximately 325 document abstracts related to the field of educational planning. (The EMAP file is described in document SP 006 747.) (JB)

  18. MINTEQ user's manual

    International Nuclear Information System (INIS)

    Peterson, S.R.; Hostetler, C.J.; Deutsch, W.J.; Cowan, C.E.

    1987-02-01

    This manual will aid the user in applying the MINTEQ geochemical computer code to model aqueous solutions and the interactions of aqueous solutions with hypothesized assemblages of solid phases. The manual will provide a basic understanding of how the MINTEQ computer code operates and the important principles that are incorporated into the code and instruct a user of the MINTEQ code on how to create input files to simulate a variety of geochemical problems. Chapters 2 through 8 are for the user who has some experience with or wishes to review the principles important to geochemical computer codes. These chapters include information on the methodology MINTEQ uses to incorporate these principles into the code. Chapters 9 through 11 are for the user who wants to know how to create input data files to model various types of problems. 35 refs., 2 figs., 5 tabs

  19. Industrial power distribution

    CERN Document Server

    Fehr, Ralph

    2016-01-01

    In this fully updated version of Industrial Power Distribution, the author addresses key areas of electric power distribution from an end-user perspective for both electrical engineers and students who are training for a career in the electrical power engineering field. Industrial Power Distribution, Second Edition, begins by describing how industrial facilities are supplied from utility sources, which is supported with background information on the components of AC power, voltage drop calculations, and the sizing of conductors and transformers. Important concepts and discussions are featured throughout the book, including those for sequence networks, ladder logic, motor application, fault calculations, and transformer connections. The book concludes with an introduction to power quality, how it affects industrial power systems, and an expansion of the concept of power factor, including a distortion term made necessary by the existence of harmonics.

  20. Metadata: A user's view

    Energy Technology Data Exchange (ETDEWEB)

    Bretherton, F.P. [Univ. of Wisconsin, Madison, WI (United States); Singley, P.T. [Oak Ridge National Lab., TN (United States)

    1994-12-31

    An analysis is presented of the uses of metadata from four aspects of database operations: (1) search, query, retrieval; (2) ingest, quality control, processing; (3) application-to-application transfer; and (4) storage, archive. Typical degrees of database functionality, ranging from simple file retrieval to interdisciplinary global query with metadatabase-user dialog and involving many distributed autonomous databases, are ranked in approximate order of increasing sophistication of the required knowledge representation. An architecture is outlined for implementing such functionality in many different disciplinary domains, utilizing a variety of off-the-shelf database management subsystems and processor software, each specialized to a different abstract data model.
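
    To make the four aspects concrete, here is a minimal sketch of a metadata record with fields serving each of them; the field names are illustrative inventions, not drawn from the paper.

        from dataclasses import dataclass, field

        # Illustrative metadata record touching all four uses discussed above.
        @dataclass
        class MetadataRecord:
            # (1) search, query, retrieval
            title: str
            keywords: list = field(default_factory=list)
            # (2) ingest, quality control, processing
            source_instrument: str = ""
            qc_status: str = "unchecked"
            # (3) application-to-application transfer
            data_format: str = "netCDF"
            schema_version: str = "1.0"
            # (4) storage, archive
            archive_id: str = ""
            retention_years: int = 10

        rec = MetadataRecord(title="Surface temperature, Madison 1994",
                             keywords=["temperature", "Wisconsin"])
        print(rec.qc_status)  # "unchecked" until a QC pipeline updates it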

  1. Green Power Partnership Top 30 Retail

    Science.gov (United States)

    EPA's Green Power Partnership is a voluntary program designed to reduce the environmental impact of electricity generation by promoting renewable energy. This list represents the largest green power users among retail partners within the GPP.

  2. Frame conditions for a well-functioning end user market

    International Nuclear Information System (INIS)

    Livik, Klaus

    1997-10-01

    The aim of this report is to describe and define the frame conditions necessary for the development of a well-functioning end user market. The report describes the sharing of roles between end users, grid owners, suppliers, system operators and market operators in the power market, and it points out how the interplay between these roles should be arranged. Particular attention is paid to how end user relations can be brought into the five market roles as a more active end user market develops. Products and possible market potentials are described and discussed, based on estimates as well as load measurements. 17 figs., 4 tabs.

  3. AutoCAD platform customization user interface and beyond

    CERN Document Server

    Ambrosius, Lee

    2014-01-01

    Make AutoCAD your own with powerful personalization options. Although AutoCAD customization is typically the domain of administrators, savvy users can perform their own customizations to personalize AutoCAD; until recently, most users never thought to tailor the platform to their specific needs, instead leaving it to administrators. If you are an AutoCAD user who wants to ramp up the personalization options in your favorite software, AutoCAD Platform Customization: User Interface and Beyond is the perfect resource for you. Author Lee Ambrosius is recognized as a leader in Au

  4. The Fermilab Advanced Computer Program multi-array processor system (ACPMAPS): A site oriented supercomputer for theoretical physics

    International Nuclear Information System (INIS)

    Nash, T.; Areti, H.; Atac, R.

    1988-08-01

    The ACP Multi-Array Processor System (ACPMAPS) is a highly cost-effective, local-memory parallel computer designed for floating-point-intensive, grid-based problems. The processing nodes of the system are single-board array processors based on the FORTRAN- and C-programmable Weitek XL chip set. The nodes are connected by a network of very high bandwidth 16-port crossbar switches. The architecture is designed to achieve the highest possible cost effectiveness while maintaining a high level of programmability. The primary application of the machine at Fermilab will be lattice gauge theory. The hardware is supported by a transparent, site-oriented software system called CANOPY, which shields theorist users from the underlying node structure. 4 refs., 2 figs.
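
    The key software idea here, letting a theorist write a per-site grid update without seeing the node structure, can be sketched generically. The toy below partitions a periodic 1-D lattice into per-node segments with ghost cells and applies a user-supplied site rule; it illustrates the site-oriented programming style only, not CANOPY's actual interface.

        # Toy site-oriented grid framework: the "user" supplies only a per-site
        # update rule; decomposition into node-local segments and the neighbor
        # (ghost-cell) exchange are handled by the framework.
        def run_lattice(field, update_site, n_nodes, sweeps):
            size = len(field) // n_nodes                    # sites per "node"
            segments = [field[i*size:(i+1)*size] for i in range(n_nodes)]
            for _ in range(sweeps):
                new_segments = []
                for n, seg in enumerate(segments):
                    left = segments[(n - 1) % n_nodes][-1]  # ghost cells fetched
                    right = segments[(n + 1) % n_nodes][0]  # from neighbor nodes
                    padded = [left] + seg + [right]
                    new_segments.append([update_site(padded[i-1], padded[i], padded[i+1])
                                         for i in range(1, len(padded) - 1)])
                segments = new_segments                     # Jacobi-style sweep
            return [x for seg in segments for x in seg]

        # User code sees only the site rule -- here a nearest-neighbor average.
        smooth = lambda l, c, r: (l + c + r) / 3.0
        print(run_lattice([0.0]*7 + [1.0] + [0.0]*8, smooth, n_nodes=4, sweeps=2))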

  5. End User Development Toolkit for Developing Physical User Interface Applications

    OpenAIRE

    Abrahamsen, Daniel T; Palfi, Anders; Svendsen, Haakon Sønsteby

    2014-01-01

    BACKGROUND: Tangible user interfaces and end user development are two growing research areas in software technology. Physical representation opens opportunities to ease the use of technology and to reinforce personality traits such as creativity, collaboration and intuitive action. However, designing tangible user interfaces is both cumbersome and requires several layers of architecture. End user development allows users with no programming experience to create or customize their own applications. ...

  6. Evaluating User Participation and User Influence in an Enterprise System

    Science.gov (United States)

    Gibbs, Martin D.

    2010-01-01

    Does user influence have an impact on the data quality of an information systems development project? What decision making should users have? How can users effectively be engaged in the process? What is success? User participation is considered to be a critical success factor for Enterprise Resource Planning (ERP) projects, yet there is little…

  7. Akuna: An Open Source User Environment for Managing Subsurface Simulation Workflows

    Science.gov (United States)

    Freedman, V. L.; Agarwal, D.; Bensema, K.; Finsterle, S.; Gable, C. W.; Keating, E. H.; Krishnan, H.; Lansing, C.; Moeglein, W.; Pau, G. S. H.; Porter, E.; Scheibe, T. D.

    2014-12-01

    The U.S. Department of Energy (DOE) is investing in the development of a numerical modeling toolset called ASCEM (Advanced Simulation Capability for Environmental Management) to support modeling analyses at legacy waste sites. ASCEM is an open source and modular computing framework that incorporates new advances and tools for predicting contaminant fate and transport in natural and engineered systems. The ASCEM toolset includes both a Platform with Integrated Toolsets (called Akuna) and a High-Performance Computing multi-process simulator (called Amanzi). The focus of this presentation is on Akuna, an open-source user environment that manages subsurface simulation workflows and associated data and metadata. In this presentation, key elements of Akuna are demonstrated, including toolsets for model setup, database management, sensitivity analysis, parameter estimation, uncertainty quantification, and visualization of both model setup and simulation results. A key component of the workflow is the automated job launching and monitoring capability, which allows a user to submit and monitor simulation runs on high-performance, parallel computers. Visualization of large outputs can also be performed without moving data back to local resources. These capabilities make high-performance computing accessible to users who may not be familiar with batch queue systems and usage protocols on different supercomputers and clusters.
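
    As an aside on what such automated job launching hides from the user: on a SLURM-scheduled cluster the manual steps are roughly the ones below, which a workflow tool wraps behind its interface. Only sbatch and squeue are standard SLURM commands; the job script name is a hypothetical placeholder, and this is not Akuna's implementation.

        import re
        import subprocess
        import time

        # Rough sketch of the batch-queue steps a workflow tool automates on a
        # SLURM cluster. "run_amanzi.sh" is a hypothetical job script.
        def submit(job_script):
            out = subprocess.run(["sbatch", job_script],
                                 capture_output=True, text=True, check=True).stdout
            return re.search(r"Submitted batch job (\d+)", out).group(1)

        def wait_until_done(job_id, poll_seconds=60):
            while True:
                out = subprocess.run(["squeue", "-h", "-j", job_id],
                                     capture_output=True, text=True).stdout
                if not out.strip():      # job no longer in the queue: finished
                    return
                time.sleep(poll_seconds)

        job_id = submit("run_amanzi.sh")
        wait_until_done(job_id)
        print(f"Job {job_id} finished; outputs remain on the cluster filesystem.")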

  8. Autonomously managed electrical power systems

    Science.gov (United States)

    Callis, Charles P.

    1986-01-01

    The electric power systems for future spacecraft such as the Space Station will necessarily be more sophisticated and will exhibit more nearly autonomous operation than those of earlier spacecraft. These new power systems will be more reliable and flexible than their predecessors, offering greater utility to their users. Automation approaches implemented on various power system breadboards are investigated. These breadboards include the Hubble Space Telescope power system test bed, the Common Module Power Management and Distribution system breadboard, the Autonomously Managed Power System (AMPS) breadboard, and the 20 kilohertz power system breadboard. Particular attention is given to the AMPS breadboard. Future plans for these breadboards, including the employment of artificial intelligence techniques, are addressed.

  9. STAIRS User's Manual

    Energy Technology Data Exchange (ETDEWEB)

    Gadjokov, V.; Dragulev, V.; Gove, N.; Schmid, H.

    1976-10-15

    The STorage And Information Retrieval System (STAIRS) of IBM is described from the user's point of view. The description is based on the experimental use of STAIRS at the IAEA computer, with INIS and AGRIS data bases, from June 1975 to May 1976. Special attention is paid to what may be termed the hierarchical approach to retrieval in STAIRS. Such an approach allows for better use of the intrinsic data-base structure and, hence, contributes to higher recall and/or relevance ratios in retrieval. The functions carried out by STAIRS are explained and the communication language between the user and the system outlined. Details are given of the specific structure of the INIS and AGRIS data bases for STAIRS. The manual should enable an inexperienced user to start his first on-line dialogues by means of a CRT or teletype terminal. (author)
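
    The recall and relevance ratios mentioned here are the standard retrieval measures (relevance ratio is usually called precision today); for reference, the example counts below are invented:

        # Standard definitions of the retrieval measures named above:
        # recall    = relevant documents retrieved / all relevant documents in the base
        # relevance = relevant documents retrieved / all documents retrieved (precision)
        def recall(relevant_retrieved, relevant_total):
            return relevant_retrieved / relevant_total

        def relevance_ratio(relevant_retrieved, retrieved_total):
            return relevant_retrieved / retrieved_total

        # A query returning 40 documents, 30 of them relevant, against a data
        # base holding 50 relevant documents in total:
        print(recall(30, 50), relevance_ratio(30, 40))  # 0.6 0.75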

  10. STAIRS User's Manual

    International Nuclear Information System (INIS)

    Gadjokov, V.; Dragulev, V.; Gove, N.; Schmid, H.

    1976-10-01

    The STorage And Information Retrieval System (STAIRS) of IBM is described from the user's point of view. The description is based on the experimental use of STAIRS at the IAEA computer, with INIS and AGRIS data bases, from June 1975 to May 1976. Special attention is paid to what may be termed the hierarchical approach to retrieval in STAIRS. Such an approach allows for better use of the intrinsic data-base structure and, hence, contributes to higher recall and/or relevance ratios in retrieval. The functions carried out by STAIRS are explained and the communication language between the user and the system outlined. Details are given of the specific structure of the INIS and AGRIS data bases for STAIRS. The manual should enable an inexperienced user to start his first on-line dialogues by means of a CRT or teletype terminal. (author)

  11. End User Evaluations

    Science.gov (United States)

    Jay, Caroline; Lunn, Darren; Michailidou, Eleni

    As new technologies emerge, and Web sites become increasingly sophisticated, ensuring they remain accessible to disabled and small-screen users is a major challenge. While guidelines and automated evaluation tools are useful for informing some aspects of Web site design, numerous studies have demonstrated that they provide no guarantee that the site is genuinely accessible. The only reliable way to evaluate the accessibility of a site is to study the intended users interacting with it. This chapter outlines the processes that can be used throughout the design life cycle to ensure Web accessibility, describing their strengths and weaknesses, and discussing the practical and ethical considerations that they entail. The chapter also considers an important emerging trend in user evaluations: combining data from studies of “standard” Web use with data describing existing accessibility issues, to drive accessibility solutions forward.

  12. GRSAC Users Manual

    International Nuclear Information System (INIS)

    Ball, S.J.; Nypaver, D.J.

    1999-01-01

    An interactive, workstation-based simulation code (GRSAC) for studying postulated severe accidents in gas-cooled reactors has been developed to accommodate user-generated input with "smart front-end" checking. Code features include on- and off-line plotting, on-line help and documentation, and an automated sensitivity study option. The code and its predecessors have been validated through comparisons with a variety of experimental data and similar codes. GRSAC model features include a three-dimensional representation of the core thermal hydraulics and optional ATWS (anticipated transients without scram) capabilities. The user manual includes a detailed description of the code features and contains four case studies that guide the user through the major uses of GRSAC: an accident case; an initial-conditions setup and run; a sensitivity study; and the setup of a new reactor model.
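
    The automated sensitivity study option can be pictured as a one-at-a-time parameter sweep around a base case. The generic sketch below illustrates that pattern only; the model function and parameter names are hypothetical stand-ins, not GRSAC's.

        # Generic one-at-a-time sensitivity sweep of the kind an automated
        # sensitivity-study option performs. Model and parameters are invented.
        def peak_temperature(params):
            # Placeholder for any scalar response of a transient simulation.
            return 900.0 + 40.0 * params["decay_heat_factor"] - 25.0 * params["conductivity"]

        base = {"decay_heat_factor": 1.0, "conductivity": 2.0}

        def sensitivity_sweep(model, base_params, rel_step=0.10):
            base_value = model(base_params)
            deltas = {}
            for name in base_params:
                perturbed = dict(base_params)
                perturbed[name] *= (1.0 + rel_step)
                deltas[name] = model(perturbed) - base_value  # response to +10%
            return base_value, deltas

        base_value, deltas = sensitivity_sweep(peak_temperature, base)
        for name, delta in sorted(deltas.items(), key=lambda kv: -abs(kv[1])):
            print(f"{name:>18}: {delta:+.1f} K per +10% change")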

  13. GRSAC Users Manual

    Energy Technology Data Exchange (ETDEWEB)

    Ball, S.J.; Nypaver, D.J.

    1999-02-01

    An interactive, workstation-based simulation code (GRSAC) for studying postulated severe accidents in gas-cooled reactors has been developed to accommodate user-generated input with "smart front-end" checking. Code features include on- and off-line plotting, on-line help and documentation, and an automated sensitivity study option. The code and its predecessors have been validated through comparisons with a variety of experimental data and similar codes. GRSAC model features include a three-dimensional representation of the core thermal hydraulics and optional ATWS (anticipated transients without scram) capabilities. The user manual includes a detailed description of the code features and contains four case studies that guide the user through the major uses of GRSAC: an accident case; an initial-conditions setup and run; a sensitivity study; and the setup of a new reactor model.

  14. HANARO user support

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jeong Soo; Kim, Y. J.; Seong, B.S. [and others]

    2003-06-01

    The purpose of this project is to effectively support external users in order to promote the common utilization of HANARO. To this end, external manpower was recruited and trained, and practice-oriented education was provided to find and cultivate HANARO users. A total of 31 projects were selected this year for the promotion of HANARO common utilization, spanning four fields: neutron beam utilization (17 projects), materials/nuclear materials irradiation testing (7), neutron activation analysis (4) and radioisotope production (3). Counted by project, the support ratio provided by external manpower reached 58%; that is, 18 of the 31 projects were supported, including 82% of the neutron beam utilization projects and 100% of the neutron activation analysis projects. Counted by utilization time, the support ratio of external manpower reached 30% for neutron beam utilization and 59% for neutron activation analysis. Meanwhile, the support ratio provided by KAERI manpower reached 97% by project (30 of the 31 projects) and 15% by utilization time overall: 20% for neutron beam utilization, 18% for materials/nuclear materials irradiation testing, 20% for neutron activation analysis and 6% for radioisotope production. To help find and cultivate potential HANARO users and to increase the utilization of the HANARO experimental facilities, practice-oriented user education was carried out: 32 participants from industry, universities and institutes were trained on the HRPD/SANS instruments in the field of neutron beam utilization. In addition, external manpower was trained to support external users effectively, and support was further improved by identifying the difficulties and problems encountered in carrying out the projects.

  15. HANARO user support

    International Nuclear Information System (INIS)

    Lee, Jeong Soo; Kim, Y. J.; Seong, B.S.

    2003-06-01

    The purpose of this project is to effectively support external users in order to promote the common utilization of HANARO. To this end, external manpower was recruited and trained, and practice-oriented education was provided to find and cultivate HANARO users. A total of 31 projects were selected this year for the promotion of HANARO common utilization, spanning four fields: neutron beam utilization (17 projects), materials/nuclear materials irradiation testing (7), neutron activation analysis (4) and radioisotope production (3). Counted by project, the support ratio provided by external manpower reached 58%; that is, 18 of the 31 projects were supported, including 82% of the neutron beam utilization projects and 100% of the neutron activation analysis projects. Counted by utilization time, the support ratio of external manpower reached 30% for neutron beam utilization and 59% for neutron activation analysis. Meanwhile, the support ratio provided by KAERI manpower reached 97% by project (30 of the 31 projects) and 15% by utilization time overall: 20% for neutron beam utilization, 18% for materials/nuclear materials irradiation testing, 20% for neutron activation analysis and 6% for radioisotope production. To help find and cultivate potential HANARO users and to increase the utilization of the HANARO experimental facilities, practice-oriented user education was carried out: 32 participants from industry, universities and institutes were trained on the HRPD/SANS instruments in the field of neutron beam utilization. In addition, external manpower was trained to support external users effectively, and support was further improved by identifying the difficulties and problems encountered in carrying out the projects.

  16. Natural User Interfaces

    OpenAIRE

    Câmara , António

    2011-01-01

    Master's dissertation in Informatics Engineering presented to the Faculdade de Ciências e Tecnologia da Universidade de Coimbra. This project's main subject is Natural User Interfaces. The main purpose of these interfaces is to allow the user to interact with computer systems in a more direct and natural way. The popularization of touch and gesture devices in the last few years has allowed them to become increasingly common, and today we are experiencing a transition of interface p...

  17. User friendly packaging

    DEFF Research Database (Denmark)

    Geert Jensen, Birgitte

    2010-01-01

    Most consumers have experienced occasional problems with opening packaging: tomato sauce from the tinned mackerel splattered all over the kitchen counter, the unrelenting pickle jar lid, and the package of sliced ham that cannot be opened without a knife or a pair of scissors. The research project "User-friendly Packaging" aims to create a platform for developing more user-friendly packaging. One intended outcome of the project is a guideline that industry can use in development efforts. The project also points the way for more extended collaboration between companies and design researchers. How can design research help industry in packaging innovation?

  18. IMAGE User Manual

    Energy Technology Data Exchange (ETDEWEB)

    Stehfest, E; De Waal, L; Oostenrijk, R.

    2010-09-15

    This user manual contains the basic information for running the simulation model IMAGE ('Integrated Model to Assess the Global Environment') of PBL. The motivation for this report was a substantial restructuring of the source code for IMAGE version 2.5. The document gives concise information about the submodels, tells the user how to install the program, describes the directory structure of the run environment, shows how scenarios are prepared and run, and gives insight into the restart functionality.

  19. Who are your users?

    DEFF Research Database (Denmark)

    Nielsen, Lene; Salminen, Joni; Jung, Soon-Gyo

    2017-01-01

    One of the reasons for using personas is to align user understandings across project teams and sites. As part of a larger persona study at Al Jazeera English (AJE), we conducted 16 qualitative interviews with media producers, the end users of persona descriptions. We asked the participants about their understanding of a typical AJE media consumer, and the variety of answers shows that these understandings are not aligned; they are built on a mix of the producers' own experiences, their own selves, assumptions, and data provided by the company. The answers are sometimes aligned with the data-driven personas and sometimes not. The end...

  20. EPRINT ARCHIVE USER SURVEY

    CERN Multimedia

    2001-01-01

    University of Southampton invites the CERN community to participate in a survey Professor Stevan Harnad is conducting on current users and non-users of Eprint Archives. http://www.eprints.org/survey/ The findings will be used to suggest potential enhancements of the services as well as to get a deeper understanding of the very rapid developments in the on-line dissemination and use of scientific and scholarly research. (The survey is anonymous. Revealing your identity is optional and it will be kept confidential.)