WorldWideScience

Sample records for high-performance computational grids

  1. How to build a high-performance compute cluster for the Grid

    CERN Document Server

    Reinefeld, A

    2001-01-01

    The success of large-scale multi-national projects like the forthcoming analysis of the LHC particle collision data at CERN relies to a great extent on the ability to efficiently utilize computing and data-storage resources at geographically distributed sites. Much effort is currently spent on the design of Grid management software (Datagrid, Globus, etc.), while the effective integration of computing nodes has so far been largely neglected. This is the focus of our work. We present a framework for a high-performance cluster that can be used as a reliable computing node in the Grid. We outline the cluster architecture, the management of distributed data and the seamless integration of the cluster into the Grid environment. (11 refs).

  2. Information Power Grid: Distributed High-Performance Computing and Large-Scale Data Management for Science and Engineering

    Science.gov (United States)

    Johnston, William E.; Gannon, Dennis; Nitzberg, Bill

    2000-01-01

    We use the term "Grid" to refer to distributed, high performance computing and data handling infrastructure that incorporates geographically and organizationally dispersed, heterogeneous resources that are persistent and supported. This infrastructure includes: (1) Tools for constructing collaborative, application oriented Problem Solving Environments / Frameworks (the primary user interfaces for Grids); (2) Programming environments, tools, and services providing various approaches for building applications that use aggregated computing and storage resources, and federated data sources; (3) Comprehensive and consistent set of location independent tools and services for accessing and managing dynamic collections of widely distributed resources: heterogeneous computing systems, storage systems, real-time data sources and instruments, human collaborators, and communications systems; (4) Operational infrastructure including management tools for distributed systems and distributed resources, user services, accounting and auditing, strong and location independent user authentication and authorization, and overall system security services. The vision for NASA's Information Power Grid - a computing and data Grid - is that it will provide significant new capabilities to scientists and engineers by facilitating routine construction of information based problem solving environments / frameworks. Such Grids will knit together widely distributed computing, data, instrument, and human resources into just-in-time systems that can address complex and large-scale computing and data analysis problems. Examples of these problems include: (1) Coupled, multidisciplinary simulations too large for single systems (e.g., multi-component NPSS turbomachine simulation); (2) Use of widely distributed, federated data archives (e.g., simultaneous access to meteorological, topological, aircraft performance, and flight path scheduling databases supporting a National Air Space Simulation system); (3

  3. Development of a High Performance Spacer Grid

    Energy Technology Data Exchange (ETDEWEB)

    Song, Kee Nam; Song, K. N.; Yoon, K. H. (and others)

    2007-03-15

    A spacer grid in a LWR fuel assembly is a key structural component to support fuel rods and to enhance the heat transfer from the fuel rod to the coolant. In this research, the main research items are the development of inherent and high performance spacer grid shapes, the establishment of mechanical/structural analysis and test technology, and the set-up of basic test facilities for the spacer grid. The main research areas and results are as follows. 1. 18 different spacer grid candidates have been invented and applied for domestic and US patents. Among the candidates, 16 have been selected for patent registration. 2. Two kinds of spacer grids are finally selected for the advanced LWR fuel after detailed performance tests on the candidates and commercial spacer grids from a mechanical/structural point of view. According to the test results, the features of the selected spacer grids are better than those of the commercial spacer grids. 3. Four kinds of basic test facilities are set up and the relevant test technologies are established. 4. Mechanical/structural analysis models and technology for spacer grid performance are developed, and the analysis results are compared with the test results to enhance the reliability of the models.

  4. High Performance Computing Multicast

    Science.gov (United States)

    2012-02-01

    Record snippet (a reference and part of the report's acronym list): "A History of the Virtual Synchrony Replication Model," in Replication: Theory and Practice, Charron-Bost, B., Pedone, F., and Schiper, A. (Eds.) ... HPC High Performance Computing; IP / IPv4 Internet Protocol (version 4.0); IPMC Internet Protocol MultiCast; LAN Local Area Network; MCMD Dr. Multicast; MPI ...

  5. Grid Computing

    Indian Academy of Sciences (India)

    A computing grid interconnects resources such as high performance computers, scientific databases, and computer-controlled scientific instruments of cooperating organizations, each of which is autonomous. It precedes and is quite different from cloud computing, which provides computing resources by vendors to customers ...

  6. ClustalXeed: a GUI-based grid computation version for high performance and terabyte size multiple sequence alignment

    Directory of Open Access Journals (Sweden)

    Kim Taeho

    2010-09-01

    Full Text Available Abstract Background There is an increasing demand to assemble and align large-scale biological sequence data sets. The commonly used multiple sequence alignment programs are still limited in their ability to handle very large numbers of sequences because they lack a scalable high-performance computing (HPC) environment with a greatly extended data storage capacity. Results We designed ClustalXeed, a software system for multiple sequence alignment with incremental improvements over previous versions of the ClustalX and ClustalW-MPI software. The primary advantage of ClustalXeed over other multiple sequence alignment software is its ability to align a large family of protein or nucleic acid sequences. To solve the conventional memory-dependency problem, ClustalXeed uses both physical random access memory (RAM) and a distributed file-allocation system for distance matrix construction and pair-align computation. The computation efficiency of the disk-storage system was markedly improved by implementing an efficient load-balancing algorithm, called the "idle node-seeking task algorithm" (INSTA). The new editing option and the graphical user interface (GUI) provide ready access to a parallel-computing environment for users who seek fast and easy alignment of large DNA and protein sequence sets. Conclusions ClustalXeed can now process large volumes of biological sequence data that were not tractable in any other parallel or single MSA program. The main developments include: (1) the ability to tackle larger sequence alignment problems than possible with previous systems, through markedly improved storage-handling capabilities; (2) an efficient task load-balancing algorithm, INSTA, which improves overall processing times for multiple sequence alignment with input sequences of non-uniform length; and (3) support for both single-PC and distributed cluster systems.
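
    The abstract does not spell out INSTA's internals, but the general effect of dynamic "idle node-seeking" dispatch on tasks of non-uniform cost can be illustrated with a small, self-contained Python sketch; the sequence lengths, node count, and cost model below are invented for the example.

```python
import heapq
import random

# Toy simulation of dynamic "idle-node-seeking" task dispatch, the general idea behind
# INSTA as described above (the real algorithm's details are not given in the abstract).
# Tasks are pairwise alignments whose cost grows with the product of sequence lengths,
# so task costs are highly non-uniform.
random.seed(1)
lengths = [random.randint(100, 5000) for _ in range(60)]          # hypothetical sequence lengths
tasks = [lengths[i] * lengths[j] for i in range(len(lengths)) for j in range(i)]

def static_schedule(costs, nodes):
    """Assign tasks to nodes round-robin in advance; makespan is the busiest node."""
    load = [0] * nodes
    for i, c in enumerate(costs):
        load[i % nodes] += c
    return max(load)

def idle_node_schedule(costs, nodes):
    """Give the next task to whichever node becomes idle first (dynamic dispatch)."""
    heap = [0.0] * nodes            # completion time of each node
    heapq.heapify(heap)
    for c in costs:
        t = heapq.heappop(heap)     # the node that frees up earliest
        heapq.heappush(heap, t + c)
    return max(heap)

nodes = 16
print("static round-robin makespan:", static_schedule(tasks, nodes))
print("idle-node dispatch makespan:", idle_node_schedule(tasks, nodes))
```

    With highly non-uniform pairwise costs, the dynamic variant typically yields a noticeably smaller makespan than a static round-robin split, which is the intuition behind dispatching alignment tasks for sequences of very different lengths to whichever node is idle.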

  7. Grid Computing

    Indian Academy of Sciences (India)

    IAS Admin

    emergence of supercomputers led to the use of computer simulation as an .... Scientific and engineering applications (e.g., TeraGrid secure gateway). Collaborative ... Encryption, privacy, protection from malicious software. Physical Layer.

  8. High performance GPU processing for inversion using uniform grid searches

    Science.gov (United States)

    Venetis, Ioannis E.; Saltogianni, Vasso; Stiros, Stathis; Gallopoulos, Efstratios

    2017-04-01

    Many geophysical problems are described by redundant, highly non-linear systems of equations with constant terms deriving from measurements and hence representing stochastic variables. Solution (inversion) of such problems is based on numerical optimization methods, on Monte Carlo sampling, or on exhaustive searches in cases of two or even three "free" unknown variables. Recently the TOPological INVersion (TOPINV) algorithm, a grid search-based technique in the R^n space, has been proposed. TOPINV is not based on the minimization of a certain cost function and involves only forward computations, hence avoiding computational errors. The basic concept is to transform observation equations into inequalities on the basis of an optimization parameter k and of their standard errors, and through repeated "scans" of n-dimensional search grids for decreasing values of k to identify the optimal clusters of gridpoints which satisfy the observation inequalities and by definition contain the "true" solution. Stochastic optimal solutions and their variance-covariance matrices are then computed as first and second statistical moments. Such exhaustive uniform searches produce an excessive computational load and are extremely time consuming on common CPU-based computers. An alternative is to use a computing platform based on a GPU, which is nowadays affordable to the research community and provides much higher computing performance. Using the CUDA programming language to implement TOPINV allows the investigation of the attained speedup in execution time on such a high performance platform. Based on synthetic data we compared the execution time required for two typical geophysical problems, modeling magma sources and seismic faults, described with up to 18 unknown variables, on both CPU/FORTRAN and GPU/CUDA platforms. The same problems for several different sizes of search grids (up to 10^12 gridpoints) and numbers of unknown variables were solved on
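
    The grid-search idea described above can be sketched in a few lines of NumPy. This is only an illustration of the inequality-based scan, not the TOPINV code itself; the forward model, grid bounds, and noise levels are invented for the example.

```python
import numpy as np

# Hypothetical two-parameter forward model: predictions f_i = a * x_i + b * x_i**2.
# In TOPINV the model would be the non-linear geophysical forward problem; this is a sketch.
x = np.linspace(0.0, 1.0, 8)
true_m = np.array([2.0, -1.0])
sigma = 0.05 * np.ones_like(x)                                   # standard errors of the observations
rng = np.random.default_rng(0)
d = true_m[0] * x + true_m[1] * x**2 + rng.normal(0.0, sigma)    # synthetic observations

def forward(a, b):
    return a * x + b * x**2

# Uniform search grid in the (a, b) plane.
a_grid = np.linspace(0.0, 4.0, 201)
b_grid = np.linspace(-3.0, 1.0, 201)
A, B = np.meshgrid(a_grid, b_grid, indexing="ij")

def surviving_cluster(k):
    """Gridpoints satisfying every observation inequality |f_i(m) - d_i| <= k * sigma_i."""
    preds = forward(A[..., None], B[..., None])                  # shape (na, nb, nobs)
    ok = np.all(np.abs(preds - d) <= k * sigma, axis=-1)
    return np.column_stack([A[ok], B[ok]])

# Repeated scans for decreasing k, as in the algorithm description; stop at the
# smallest k that still leaves a cluster of gridpoints.
for k in np.arange(6.0, 0.0, -0.5):
    pts = surviving_cluster(k)
    if len(pts) < 2:
        break
    mean = pts.mean(axis=0)                                      # first moment: optimal solution
    cov = np.cov(pts.T)                                          # second moment: variance-covariance
    print(f"k={k:.1f}  points={len(pts)}  mean={mean}")
```

    The test over all gridpoints is embarrassingly parallel, which is why this kind of exhaustive scan maps well onto a GPU kernel in the CUDA implementation the abstract describes.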

  9. Computational Biology and High Performance Computing 2000

    Energy Technology Data Exchange (ETDEWEB)

    Simon, Horst D.; Zorn, Manfred D.; Spengler, Sylvia J.; Shoichet, Brian K.; Stewart, Craig; Dubchak, Inna L.; Arkin, Adam P.

    2000-10-19

    The pace of extraordinary advances in molecular biology has accelerated in the past decade due in large part to discoveries coming from genome projects on human and model organisms. The advances in the genome project so far, happening well ahead of schedule and under budget, have exceeded any dreams of its protagonists, let alone formal expectations. Biologists expect the next phase of the genome project to be even more startling in terms of dramatic breakthroughs in our understanding of human biology, the biology of health and of disease. Only today can biologists begin to envision the experimental, computational and theoretical steps necessary to exploit genome sequence information for its medical impact, its contribution to biotechnology and economic competitiveness, and its ultimate contribution to environmental quality. High performance computing has become one of the critical enabling technologies, which will help to translate this vision of future advances in biology into reality. Biologists are increasingly becoming aware of the potential of high performance computing. The goal of this tutorial is to introduce the exciting new developments in computational biology and genomics to the high performance computing community.

  10. High-Performance Secure Database Access Technologies for HEP Grids

    Energy Technology Data Exchange (ETDEWEB)

    Matthew Vranicar; John Weicher

    2006-04-17

    The Large Hadron Collider (LHC) at the CERN Laboratory will become the largest scientific instrument in the world when it starts operations in 2007. Large Scale Analysis Computer Systems (computational grids) are required to extract rare signals of new physics from petabytes of LHC detector data. In addition to file-based event data, LHC data processing applications require access to large amounts of data in relational databases: detector conditions, calibrations, etc. U.S. high energy physicists demand efficient performance of grid computing applications in LHC physics research, where world-wide remote participation is vital to their success. To empower physicists with data-intensive analysis capabilities, a whole hyperinfrastructure of distributed databases cross-cuts a multi-tier hierarchy of computational grids. The crosscutting allows separation of concerns across both the global environment of a federation of computational grids and the local environment of a physicist's computer used for analysis. Very few efforts are on-going in the area of database and grid integration research. Most of these are outside of the U.S. and rely on traditional approaches to secure database access via an extraneous security layer separate from the database system core, preventing efficient data transfers. Our findings are shared by the Database Access and Integration Services Working Group of the Global Grid Forum, which states that "Research and development activities relating to the Grid have generally focused on applications where data is stored in files. However, in many scientific and commercial domains, database management systems have a central role in data storage, access, organization, authorization, etc, for numerous applications." There is a clear opportunity for a technological breakthrough, requiring innovative steps to provide high-performance secure database access technologies for grid computing. We believe that an innovative database architecture where the

  11. High-Performance Secure Database Access Technologies for HEP Grids

    International Nuclear Information System (INIS)

    Vranicar, Matthew; Weicher, John

    2006-01-01

    The Large Hadron Collider (LHC) at the CERN Laboratory will become the largest scientific instrument in the world when it starts operations in 2007. Large Scale Analysis Computer Systems (computational grids) are required to extract rare signals of new physics from petabytes of LHC detector data. In addition to file-based event data, LHC data processing applications require access to large amounts of data in relational databases: detector conditions, calibrations, etc. U.S. high energy physicists demand efficient performance of grid computing applications in LHC physics research, where world-wide remote participation is vital to their success. To empower physicists with data-intensive analysis capabilities, a whole hyperinfrastructure of distributed databases cross-cuts a multi-tier hierarchy of computational grids. The crosscutting allows separation of concerns across both the global environment of a federation of computational grids and the local environment of a physicist's computer used for analysis. Very few efforts are on-going in the area of database and grid integration research. Most of these are outside of the U.S. and rely on traditional approaches to secure database access via an extraneous security layer separate from the database system core, preventing efficient data transfers. Our findings are shared by the Database Access and Integration Services Working Group of the Global Grid Forum, which states that 'Research and development activities relating to the Grid have generally focused on applications where data is stored in files. However, in many scientific and commercial domains, database management systems have a central role in data storage, access, organization, authorization, etc, for numerous applications'. There is a clear opportunity for a technological breakthrough, requiring innovative steps to provide high-performance secure database access technologies for grid computing. We believe that an innovative database architecture where the secure

  12. High Performance Spaceflight Computing (HPSC)

    Data.gov (United States)

    National Aeronautics and Space Administration — Space-based computing has not kept up with the needs of current and future NASA missions. We are developing a next-generation flight computing system that addresses...

  13. High-Performance Computing Paradigm and Infrastructure

    CERN Document Server

    Yang, Laurence T

    2006-01-01

    With hyperthreading in Intel processors, hypertransport links in next-generation AMD processors, multi-core silicon in today's high-end microprocessors from IBM, and emerging grid computing, parallel and distributed computers have moved into the mainstream.

  14. [Grid computing

    CERN Multimedia

    Wolinsky, H

    2003-01-01

    "Turn on a water spigot, and it's like tapping a bottomless barrel of water. Ditto for electricity: Flip the switch, and the supply is endless. But computing is another matter. Even with the Internet revolution enabling us to connect in new ways, we are still limited to self-contained systems running locally stored software, limited by corporate, institutional and geographic boundaries" (1 page).

  15. High performance computing in Windows Azure cloud

    OpenAIRE

    Ambruš, Dejan

    2013-01-01

    High performance, security, availability, scalability, flexibility and lower costs of maintenance have essentially contributed to the growing popularity of cloud computing in all spheres of life, especially in business. In fact cloud computing offers even more than this. By using virtual computing clusters, a runtime environment for high performance computing can be implemented efficiently in a cloud as well. There are many advantages but also some disadvantages of cloud computing, some ...

  16. High Performance Parallel Multigrid Algorithms for Unstructured Grids

    Science.gov (United States)

    Frederickson, Paul O.

    1996-01-01

    We describe a high performance parallel multigrid algorithm for a rather general class of unstructured grid problems in two and three dimensions. The algorithm PUMG, for parallel unstructured multigrid, is related in structure to the parallel multigrid algorithm PSMG introduced by McBryan and Frederickson, for they both obtain a higher convergence rate through the use of multiple coarse grids. Another reason for the high convergence rate of PUMG is its smoother, an approximate inverse developed by Baumgardner and Frederickson.
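
    PUMG's multiple-coarse-grid construction and approximate-inverse smoother are beyond a short example, but the two-level correction cycle that any multigrid method builds on can be sketched for a 1-D Poisson model problem. The code below is a generic textbook illustration under that simplification, not the PUMG algorithm itself.

```python
import numpy as np

# Generic two-grid correction cycle for -u'' = f on (0,1) with zero Dirichlet BCs.
n = 64                                   # fine-grid intervals (coarse grid has n//2)
h, H = 1.0 / n, 2.0 / n
xf = np.linspace(0, 1, n + 1)[1:-1]      # fine interior points
f = np.sin(np.pi * xf)                   # sample right-hand side

def poisson_matrix(m, spacing):
    return (np.diag(2.0 * np.ones(m - 1))
            - np.diag(np.ones(m - 2), 1)
            - np.diag(np.ones(m - 2), -1)) / spacing**2

A, Ac = poisson_matrix(n, h), poisson_matrix(n // 2, H)

def smooth(u, rhs, sweeps=3, omega=2.0 / 3.0):
    # weighted-Jacobi smoother (a simple stand-in for PUMG's approximate-inverse smoother)
    D = np.diag(A)
    for _ in range(sweeps):
        u = u + omega * (rhs - A @ u) / D
    return u

def restrict(r):
    # full-weighting restriction of the fine-grid residual to the coarse grid
    return 0.25 * (r[:-2:2] + 2.0 * r[1:-1:2] + r[2::2])

def prolong(e):
    # linear interpolation of the coarse-grid correction back to the fine grid
    out = np.zeros(n - 1)
    out[1::2] = e
    out[0] = 0.5 * e[0]
    out[-1] = 0.5 * e[-1]
    out[2:-2:2] = 0.5 * (e[:-1] + e[1:])
    return out

u = np.zeros(n - 1)
for cycle in range(10):
    u = smooth(u, f)                                  # pre-smoothing
    r = f - A @ u                                     # fine-grid residual
    e = np.linalg.solve(Ac, restrict(r))              # exact coarse-grid correction
    u = smooth(u + prolong(e), f)                     # prolong and post-smooth
    print(f"cycle {cycle}: residual norm = {np.linalg.norm(f - A @ u):.2e}")
```

    The residual drops by a roughly constant factor per cycle, independent of the grid size; PSMG and PUMG improve on this basic scheme by using multiple coarse grids to raise the convergence rate further.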

  17. Development of structural technology for a high performance spacer grid

    International Nuclear Information System (INIS)

    Song, Kee Nam; Kim, H. K.; Kang, H. S.

    2003-03-01

    A spacer grid in a LWR fuel assembly is a key structural component to support fuel rods and to enhance the heat transfer from the fuel rod to the coolant. In this research, the main research items are the development of inherent and high performance spacer grid shapes, the establishment of mechanical/structural analysis and test technology, and the set-up of basic test facilities for the spacer grid. The main research areas and results are as follows. 1. 14 different spacer grid candidates have been invented and applied for domestic and US patents. Among the candidates, six have been selected for patent registration. 2. Two kinds of spacer grids are finally selected for the advanced LWR fuel after detailed performance tests on the candidates and commercial spacer grids from a mechanical/structural point of view. According to the test results, the features of the selected spacer grids are better than those of the commercial spacer grids. 3. Four kinds of basic test facilities are set up and the relevant test technologies are established. 4. Mechanical/structural analysis models and technology for spacer grid performance are developed, and the analysis results are compared with the test results to enhance the reliability of the models.

  18. High-performance computing using FPGAs

    CERN Document Server

    Benkrid, Khaled

    2013-01-01

    This book is concerned with the emerging field of High Performance Reconfigurable Computing (HPRC), which aims to harness the high performance and relatively low power of reconfigurable hardware, in the form of Field Programmable Gate Arrays (FPGAs), in High Performance Computing (HPC) applications. It presents the latest developments in this field from the applications, architecture, and tools and methodologies points of view. We hope that this work will form a reference for existing researchers in the field, and entice new researchers and developers to join the HPRC community. The book includes: Thirteen application chapters which present the most important application areas tackled by high performance reconfigurable computers, namely: financial computing, bioinformatics and computational biology, data search and processing, stencil computation (e.g. computational fluid dynamics and seismic modeling), cryptanalysis, astronomical N-body simulation, and circuit simulation. Seven architecture chapters which...

  19. Trends in life science grid: from computing grid to knowledge grid

    Directory of Open Access Journals (Sweden)

    Konagaya Akihiko

    2006-12-01

    Full Text Available Abstract Background Grid computing has great potential to become a standard cyberinfrastructure for life sciences, which often require high-performance computing and large data handling that exceed the computing capacity of a single institution. Results This survey reviews the latest grid technologies from the viewpoints of computing grid, data grid and knowledge grid. Computing grid technologies have matured enough to solve high-throughput real-world life science problems. Data grid technologies are strong candidates for realizing a "resourceome" for bioinformatics. Knowledge grids should be designed not only for sharing explicit knowledge on computers but also for community formation to share tacit knowledge among a community. Conclusion By extending the concept of grid from computing grid to knowledge grid, a grid can be used not only as sharable computing resources, but also as a time and place in which people work together, create knowledge, and share knowledge and experiences in a community.

  20. High-performance computing — an overview

    Science.gov (United States)

    Marksteiner, Peter

    1996-08-01

    An overview of high-performance computing (HPC) is given. Different types of computer architectures used in HPC are discussed: vector supercomputers, high-performance RISC processors, various parallel computers like symmetric multiprocessors, workstation clusters, massively parallel processors. Software tools and programming techniques used in HPC are reviewed: vectorizing compilers, optimization and vector tuning, optimization for RISC processors; parallel programming techniques like shared-memory parallelism, message passing and data parallelism; and numerical libraries.

  1. Embedded High Performance Scalable Computing Systems

    National Research Council Canada - National Science Library

    Ngo, David

    2003-01-01

    The Embedded High Performance Scalable Computing Systems (EHPSCS) program is a cooperative agreement between Sanders, a Lockheed Martin Company, and DARPA that ran for three years, from April 1995 to April 1998...

  2. Debugging a high performance computing program

    Science.gov (United States)

    Gooding, Thomas M.

    2013-08-20

    Methods, apparatus, and computer program products are disclosed for debugging a high performance computing program by gathering lists of addresses of calling instructions for a plurality of threads of execution of the program, assigning the threads to groups in dependence upon the addresses, and displaying the groups to identify defective threads.
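
    The grouping idea can be illustrated with a short sketch: each thread reports its list of calling-instruction addresses, identical lists are collapsed into groups, and the smallest groups are displayed first, since they usually point at the defective threads. All addresses and thread names below are made up for the example.

```python
from collections import defaultdict

# Toy illustration of the grouping idea described above: threads with identical call-address
# lists (stack traces) are collapsed into one group, so a handful of "odd" threads stands out
# among thousands of healthy ones.
thread_stacks = {f"rank{i}": [0x400A10, 0x401B24, 0x4102F0] for i in range(1022)}
thread_stacks["rank1022"] = [0x400A10, 0x401B24, 0x47DEAD]      # stuck in an unexpected frame
thread_stacks["rank1023"] = [0x400A10, 0x401B24, 0x47DEAD]

groups = defaultdict(list)
for tid, stack in thread_stacks.items():
    groups[tuple(stack)].append(tid)                 # group key = the list of call addresses

# Display groups smallest-first: the outliers are the likely defective threads.
for stack, members in sorted(groups.items(), key=lambda kv: len(kv[1])):
    addrs = " -> ".join(hex(a) for a in stack)
    sample = ", ".join(members[:4]) + (" ..." if len(members) > 4 else "")
    print(f"{len(members):5d} thread(s): {addrs}   e.g. {sample}")
```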

  3. Parallel high-performance grid computing: Capabilities and opportunities of a novel demanding service and business class allowing highest resource efficiency

    NARCIS (Netherlands)

    F.N. Kepper (Nick); R. Ettig (Ramona); F. Dickmann (Frank); R. Stehr (Rene); F.G. Grosveld (Frank); G. Wedemann (Gero); T.A. Knoch (Tobias)

    2010-01-01

    Especially in the life-science and the health-care sectors the huge IT requirements are imminent due to the large and complex systems to be analysed and simulated. Grid infrastructures play here a rapidly increasing role for research, diagnostics, and treatment, since they provide the

  4. High performance computing on vector systems

    CERN Document Server

    Roller, Sabine

    2008-01-01

    Presents the developments in high-performance computing and simulation on modern supercomputer architectures. This book covers trends in hardware and software development in general and specifically the vector-based systems and heterogeneous architectures. It presents innovative fields like coupled multi-physics or multi-scale simulations.

  5. AHPCRC - Army High Performance Computing Research Center

    Science.gov (United States)

    2010-01-01

    computing. Of particular interest is the ability of a distributed jamming network (DJN) to jam signals in all or part of a sensor or communications net ... and reasoning, assistive technologies. FRIEDRICH (FRITZ) PRINZ, Finmeccanica Professor of Engineering, Robert Bosch Chair, Department of Engineering ... High Performance Computing Research Center, www.ahpcrc.org. BARBARA BRYAN, AHPCRC Research and Outreach Manager, HPTi, (650) 604-3732, bbryan@hpti.com

  6. DURIP: High Performance Computing in Biomathematics Applications

    Science.gov (United States)

    2017-05-10

    The goal of this award (DURIP: High Performance Computing in Biomathematics Applications) was to enhance the capabilities of the Department of Applied Mathematics and Statistics (AMS) at the University of California, Santa Cruz (UCSC) to conduct research and research-related education in areas of ...

  7. LHC computing grid

    International Nuclear Information System (INIS)

    Novaes, Sergio

    2011-01-01

    Full text: We give an overview of the grid computing initiatives in the Americas. High-Energy Physics has played a very important role in the development of grid computing in the world, and Latin America has been no different. Lately, the grid concept has expanded its reach across all branches of e-Science, and we have witnessed the birth of the first nationwide infrastructures and their use in the private sector. (author)

  8. Monitoring SLAC High Performance UNIX Computing Systems

    International Nuclear Information System (INIS)

    Lettsome, Annette K.

    2005-01-01

    Knowledge of the effectiveness and efficiency of computers is important when working with high performance systems. The monitoring of such systems is advantageous in order to foresee possible misfortunes or system failures. Ganglia is a software system designed for high performance computing systems to retrieve specific monitoring information. An alternative storage facility for Ganglia's collected data is needed since its default storage system, the round-robin database (RRD), struggles with data integrity. The creation of a script-driven MySQL database solves this dilemma. This paper describes the process followed in the creation and implementation of the MySQL database for use by Ganglia. Comparisons between data storage by both databases are made using gnuplot and Ganglia's real-time graphical user interface.
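
    A minimal sketch of the kind of script described here is shown below. It assumes gmond's XML telemetry (HOST and METRIC elements with NAME/VAL attributes, served on TCP port 8649) and uses sqlite3 as a stand-in for MySQL so the example stays self-contained.

```python
import socket
import sqlite3
import time
import xml.etree.ElementTree as ET

# Pull the XML telemetry that gmond publishes on its TCP port and store each metric
# sample in a SQL table. sqlite3 stands in for MySQL to keep the sketch runnable as-is.
def fetch_gmond_xml(host="localhost", port=8649):
    chunks = []
    with socket.create_connection((host, port), timeout=5) as sock:
        while True:
            data = sock.recv(65536)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks)

def store_metrics(xml_bytes, db_path="ganglia.db"):
    conn = sqlite3.connect(db_path)
    conn.execute("""CREATE TABLE IF NOT EXISTS metrics
                    (ts REAL, host TEXT, name TEXT, value TEXT, units TEXT)""")
    root = ET.fromstring(xml_bytes)
    now = time.time()
    rows = [(now, h.get("NAME"), m.get("NAME"), m.get("VAL"), m.get("UNITS"))
            for h in root.iter("HOST") for m in h.iter("METRIC")]
    conn.executemany("INSERT INTO metrics VALUES (?, ?, ?, ?, ?)", rows)
    conn.commit()
    conn.close()

if __name__ == "__main__":
    store_metrics(fetch_gmond_xml())
```

    Run periodically (e.g. from cron), a script of this shape accumulates a full time series per host and metric, which is the data-integrity advantage a relational store offers over fixed-size round-robin archives.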

  9. High-performance computing for airborne applications

    International Nuclear Information System (INIS)

    Quinn, Heather M.; Manuzatto, Andrea; Fairbanks, Tom; Dallmann, Nicholas; Desgeorges, Rose

    2010-01-01

    Recently, there have been attempts to move common satellite tasks to unmanned aerial vehicles (UAVs). UAVs are significantly cheaper to buy than satellites and easier to deploy on an as-needed basis. The more benign radiation environment also allows for an aggressive adoption of state-of-the-art commercial computational devices, which increases the amount of data that can be collected. There are a number of commercial computing devices currently available that are well-suited to high-performance computing. These devices range from specialized computational devices, such as field-programmable gate arrays (FPGAs) and digital signal processors (DSPs), to traditional computing platforms, such as microprocessors. Even though the radiation environment is relatively benign, these devices could be susceptible to single-event effects. In this paper, we will present radiation data for high-performance computing devices in an accelerated neutron environment. These devices include a multi-core digital signal processor, two field-programmable gate arrays, and a microprocessor. From these results, we found that all of these devices are suitable for many airplane environments without reliability problems.

  10. High Performance Computing Operations Review Report

    Energy Technology Data Exchange (ETDEWEB)

    Cupps, Kimberly C. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2013-12-19

    The High Performance Computing Operations Review (HPCOR) meeting—requested by the ASC and ASCR program headquarters at DOE—was held November 5 and 6, 2013, at the Marriott Hotel in San Francisco, CA. The purpose of the review was to discuss the processes and practices for HPC integration and its related software and facilities. Experiences and lessons learned from the most recent systems deployed were covered in order to benefit the deployment of new systems.

  11. High-performance computing in seismology

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-09-01

    The scientific, technical, and economic importance of the issues discussed here presents a clear agenda for future research in computational seismology. In this way these problems will drive advances in high-performance computing in the field of seismology. There is a broad community that will benefit from this work, including the petroleum industry, research geophysicists, engineers concerned with seismic hazard mitigation, and governments charged with enforcing a comprehensive test ban treaty. These advances may also lead to new applications for seismological research. The recent application of high-resolution seismic imaging of the shallow subsurface for the environmental remediation industry is an example of this activity. This report makes the following recommendations: (1) focused efforts to develop validated documented software for seismological computations should be supported, with special emphasis on scalable algorithms for parallel processors; (2) the education of seismologists in high-performance computing technologies and methodologies should be improved; (3) collaborations between seismologists and computational scientists and engineers should be increased; (4) the infrastructure for archiving, disseminating, and processing large volumes of seismological data should be improved.

  12. Desktop grid computing

    CERN Document Server

    Cerin, Christophe

    2012-01-01

    Desktop Grid Computing presents common techniques used in numerous models, algorithms, and tools developed during the last decade to implement desktop grid computing. These techniques enable the solution of many important sub-problems for middleware design, including scheduling, data management, security, load balancing, result certification, and fault tolerance. The book's first part covers the initial ideas and basic concepts of desktop grid computing. The second part explores challenging current and future problems. Each chapter presents the sub-problems, discusses theoretical and practical

  13. Evaluation of high-performance computing software

    Energy Technology Data Exchange (ETDEWEB)

    Browne, S.; Dongarra, J. [Univ. of Tennessee, Knoxville, TN (United States); Rowan, T. [Oak Ridge National Lab., TN (United States)

    1996-12-31

    The absence of unbiased and up-to-date comparative evaluations of high-performance computing software complicates a user's search for the appropriate software package. The National HPCC Software Exchange (NHSE) is attacking this problem using an approach that includes independent evaluations of software, incorporation of author and user feedback into the evaluations, and Web access to the evaluations. We are applying this approach to the Parallel Tools Library (PTLIB), a new software repository for parallel systems software and tools, and HPC-Netlib, a high performance branch of the Netlib mathematical software repository. Updating the evaluations with feedback and making them available via the Web helps ensure accuracy and timeliness, and using independent reviewers produces unbiased comparative evaluations difficult to find elsewhere.

  14. High performance parallel computers for science

    International Nuclear Information System (INIS)

    Nash, T.; Areti, H.; Atac, R.; Biel, J.; Cook, A.; Deppe, J.; Edel, M.; Fischler, M.; Gaines, I.; Hance, R.

    1989-01-01

    This paper reports that Fermilab's Advanced Computer Program (ACP) has been developing cost effective, yet practical, parallel computers for high energy physics since 1984. The ACP's latest developments are proceeding in two directions. A Second Generation ACP Multiprocessor System for experiments will include $3500 RISC processors, each with performance over 15 VAX MIPS. To support such high performance, the new system allows parallel I/O, parallel interprocess communication, and parallel host processes. The ACP Multi-Array Processor has been developed for theoretical physics. Each $4000 node is a FORTRAN or C programmable pipelined 20 Mflops (peak), 10 MByte single board computer. These are plugged into a 16-port crossbar switch crate which handles both inter- and intra-crate communication. The crates are connected in a hypercube. Site oriented applications like lattice gauge theory are supported by system software called CANOPY, which makes the hardware virtually transparent to users. A 256 node, 5 GFlop, system is under construction.

  15. High performance computing in linear control

    International Nuclear Information System (INIS)

    Datta, B.N.

    1993-01-01

    Remarkable progress has been made in both theory and applications of all important areas of control. The theory is rich and very sophisticated. Some beautiful applications of control theory are presently being made in aerospace, biomedical engineering, industrial engineering, robotics, economics, power systems, etc. Unfortunately, the same assessment of progress does not hold in general for computations in control theory. Control Theory is lagging behind other areas of science and engineering in this respect. Nowadays there is a revolution going on in the world of high performance scientific computing. Many powerful computers with vector and parallel processing have been built and have been available in recent years. These supercomputers offer very high speed in computations. Highly efficient software, based on powerful algorithms, has been developed to use on these advanced computers, and has also contributed to increased performance. While workers in many areas of science and engineering have taken great advantage of these hardware and software developments, control scientists and engineers, unfortunately, have not been able to take much advantage of these developments

  16. The path toward HEP High Performance Computing

    International Nuclear Information System (INIS)

    Apostolakis, John; Brun, René; Gheata, Andrei; Wenzel, Sandro; Carminati, Federico

    2014-01-01

    High Energy Physics code has been known for making poor use of high performance computing architectures. Efforts in optimising HEP code on vector and RISC architectures have yielded limited results, and recent studies have shown that, on modern architectures, it achieves a performance between 10% and 50% of the peak one. Although several successful attempts have been made to port selected codes on GPUs, no major HEP code suite has a 'High Performance' implementation. With LHC undergoing a major upgrade and a number of challenging experiments on the drawing board, HEP can no longer neglect the less-than-optimal performance of its code and has to try to make the best use of the hardware. This activity is one of the foci of the SFT group at CERN, which hosts, among others, the Root and Geant4 projects. The activity of the experiments is shared and coordinated via a Concurrency Forum, where the experience in optimising HEP code is presented and discussed. Another activity is the Geant-V project, centred on the development of a high-performance prototype for particle transport. Achieving a good concurrency level on the emerging parallel architectures without a complete redesign of the framework can only be done by parallelizing at event level, or with a much larger effort at track level. Apart from the shareable data structures, this typically implies a multiplication factor in terms of memory consumption compared to the single threaded version, together with sub-optimal handling of event processing tails. Besides this, the low level instruction pipelining of modern processors cannot be used efficiently to speed up the program. We have implemented a framework that allows scheduling vectors of particles to an arbitrary number of computing resources in a fine grain parallel approach. The talk will review the current optimisation activities within the SFT group with a particular emphasis on the development perspectives towards a simulation framework able to profit
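
    A toy sketch of the basket idea, i.e. scheduling vectors of particles rather than single tracks onto a pool of workers, is shown below. It only illustrates the scheduling pattern described in the abstract; the volumes, basket size, and "transport" step are invented placeholders, not Geant-V code.

```python
import random
from collections import defaultdict
from concurrent.futures import ThreadPoolExecutor

BASKET_SIZE = 16   # hypothetical vector length

def transport_basket(volume, tracks):
    # Stand-in for vectorised physics/geometry processing of one basket of tracks.
    return volume, sum(t["energy"] for t in tracks) / len(tracks)

def run(num_tracks=10_000, num_workers=4):
    random.seed(0)
    baskets = defaultdict(list)         # tracks grouped ("basketised") by geometry volume
    futures = []
    with ThreadPoolExecutor(max_workers=num_workers) as pool:
        for i in range(num_tracks):
            volume = f"vol{random.randrange(8)}"                 # hypothetical geometry volume
            baskets[volume].append({"id": i, "energy": random.random()})
            if len(baskets[volume]) == BASKET_SIZE:              # basket full: schedule it
                futures.append(pool.submit(transport_basket, volume, baskets.pop(volume)))
        for volume, tracks in baskets.items():                   # flush partially filled baskets
            futures.append(pool.submit(transport_basket, volume, tracks))
        for fut in futures:
            fut.result()
    print(f"processed {len(futures)} baskets on {num_workers} workers")

if __name__ == "__main__":
    run()
```

    Processing whole baskets at a time is what allows the real framework to exploit vector units and keep memory growth bounded, rather than multiplying per-event state across threads.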

  17. A High Performance COTS Based Computer Architecture

    Science.gov (United States)

    Patte, Mathieu; Grimoldi, Raoul; Trautner, Roland

    2014-08-01

    Using Commercial Off The Shelf (COTS) electronic components for space applications is a long-standing idea. Indeed, the difference in processing performance and energy efficiency between radiation-hardened components and COTS components is so large that COTS components are very attractive for use in mass- and power-constrained systems. However, using COTS components in space is not straightforward, as one must account for the effects of the space environment on the COTS components' behavior. In the frame of the ESA funded activity called High Performance COTS Based Computer, Airbus Defense and Space and its subcontractor OHB CGS have developed and prototyped a versatile COTS based architecture for high performance processing. The rest of the paper is organized as follows: in a first section we will start by recapitulating the interests and constraints of using COTS components for space applications; then we will briefly describe existing fault mitigation architectures and present our solution for fault mitigation based on a component called the SmartIO; in the last part of the paper we will describe the prototyping activities executed during the HiP CBC project.

  18. The path toward HEP High Performance Computing

    CERN Document Server

    Apostolakis, John; Carminati, Federico; Gheata, Andrei; Wenzel, Sandro

    2014-01-01

    High Energy Physics code has been known for making poor use of high performance computing architectures. Efforts in optimising HEP code on vector and RISC architectures have yielded limited results, and recent studies have shown that, on modern architectures, it achieves a performance between 10% and 50% of the peak one. Although several successful attempts have been made to port selected codes on GPUs, no major HEP code suite has a 'High Performance' implementation. With LHC undergoing a major upgrade and a number of challenging experiments on the drawing board, HEP can no longer neglect the less-than-optimal performance of its code and has to try to make the best use of the hardware. This activity is one of the foci of the SFT group at CERN, which hosts, among others, the Root and Geant4 projects. The activity of the experiments is shared and coordinated via a Concurrency Forum, where the experience in optimising HEP code is presented and discussed. Another activity is the Geant-V project, centred on th...

  19. High Performance Computing in Science and Engineering '15 : Transactions of the High Performance Computing Center

    CERN Document Server

    Kröner, Dietmar; Resch, Michael

    2016-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS) in 2015. The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  20. High Performance Computing in Science and Engineering '17 : Transactions of the High Performance Computing Center

    CERN Document Server

    Kröner, Dietmar; Resch, Michael; HLRS 2017

    2018-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS) in 2017. The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe's leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  1. High performance workflow implementation for protein surface characterization using grid technology

    Directory of Open Access Journals (Sweden)

    Clematis Andrea

    2005-12-01

    Full Text Available Abstract Background This study concerns the development of a high performance workflow that, using grid technology, correlates different kinds of Bioinformatics data, starting from the base pairs of the nucleotide sequence to the exposed residues of the protein surface. The implementation of this workflow is based on the Italian Grid.it project infrastructure, which is a network of several computational resources and storage facilities distributed at different grid sites. Methods Workflows are very common in Bioinformatics because they allow large quantities of data to be processed by delegating the management of resources to the information streaming. Grid technology optimizes the computational load during the different workflow steps, dividing the more expensive tasks into a set of small jobs. Results Grid technology allows efficient database management, which is crucial for obtaining good results in Bioinformatics applications. The proposed workflow is implemented to integrate huge amounts of data, and the results themselves are stored into a relational database, which represents the added value to the global knowledge. Conclusion A web interface has been developed to make this technology accessible to grid users. Once the workflow has started, by means of the simplified interface, it is possible to follow all the different steps throughout the data processing. Eventually, when the workflow has terminated, the different features of the protein, like the amino acids exposed on the protein surface, can be compared with the data present in the output database.
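
    The pattern described here, splitting an expensive step into many small independent jobs and collecting every result into a relational database, can be miniaturized as follows. The surface-exposure rule, table schema, and sequences are invented placeholders, and a local process pool stands in for the grid's small jobs.

```python
import sqlite3
from concurrent.futures import ProcessPoolExecutor

# One expensive step is split into many small independent jobs, and every job's result
# is written to a relational database (the workflow's "added value").
def exposed_residues(record):
    seq_id, sequence = record
    exposed = [i for i, aa in enumerate(sequence) if aa in "DEKRNQ"]   # toy surface criterion
    return seq_id, len(sequence), len(exposed)

def run_workflow(records, db_path="results.db"):
    conn = sqlite3.connect(db_path)
    conn.execute("""CREATE TABLE IF NOT EXISTS surface
                    (seq_id TEXT PRIMARY KEY, length INTEGER, n_exposed INTEGER)""")
    with ProcessPoolExecutor() as pool:                   # stands in for the grid's small jobs
        for seq_id, length, n_exposed in pool.map(exposed_residues, records):
            conn.execute("INSERT OR REPLACE INTO surface VALUES (?, ?, ?)",
                         (seq_id, length, n_exposed))
    conn.commit()
    conn.close()

if __name__ == "__main__":
    run_workflow([("P1", "MKTAYIAKQR"), ("P2", "GAVLIPFWMD")])
```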

  2. High-performance computing in accelerating structure design and analysis

    International Nuclear Information System (INIS)

    Li Zenghai; Folwell, Nathan; Ge Lixin; Guetz, Adam; Ivanov, Valentin; Kowalski, Marc; Lee, Lie-Quan; Ng, Cho-Kuen; Schussman, Greg; Stingelin, Lukas; Uplenchwar, Ravindra; Wolf, Michael; Xiao, Liling; Ko, Kwok

    2006-01-01

    Future high-energy accelerators such as the Next Linear Collider (NLC) will accelerate multi-bunch beams of high current and low emittance to obtain high luminosity, which put stringent requirements on the accelerating structures for efficiency and beam stability. While numerical modeling has been quite standard in accelerator R and D, designing the NLC accelerating structure required a new simulation capability because of the geometric complexity and level of accuracy involved. Under the US DOE Advanced Computing initiatives (first the Grand Challenge and now SciDAC), SLAC has developed a suite of electromagnetic codes based on unstructured grids and utilizing high-performance computing to provide an advanced tool for modeling structures at accuracies and scales previously not possible. This paper will discuss the code development and computational science research (e.g. domain decomposition, scalable eigensolvers, adaptive mesh refinement) that have enabled the large-scale simulations needed for meeting the computational challenges posed by the NLC as well as projects such as the PEP-II and RIA. Numerical results will be presented to show how high-performance computing has made a qualitative improvement in accelerator structure modeling for these accelerators, either at the component level (single cell optimization), or on the scale of an entire structure (beam heating and long-range wakefields)

  3. High Performance Computing in Science and Engineering '99 : Transactions of the High Performance Computing Center

    CERN Document Server

    Jäger, Willi

    2000-01-01

    The book contains reports about the most significant projects from science and engineering of the Federal High Performance Computing Center Stuttgart (HLRS). They were carefully selected in a peer-review process and are showcases of an innovative combination of state-of-the-art modeling, novel algorithms and the use of leading-edge parallel computer technology. The projects of HLRS are using supercomputer systems operated jointly by university and industry and therefore a special emphasis has been put on the industrial relevance of results and methods.

  4. High Performance Computing in Science and Engineering '98 : Transactions of the High Performance Computing Center

    CERN Document Server

    Jäger, Willi

    1999-01-01

    The book contains reports about the most significant projects from science and industry that are using the supercomputers of the Federal High Performance Computing Center Stuttgart (HLRS). These projects are from different scientific disciplines, with a focus on engineering, physics and chemistry. They were carefully selected in a peer-review process and are showcases for an innovative combination of state-of-the-art physical modeling, novel algorithms and the use of leading-edge parallel computer technology. As HLRS is in close cooperation with industrial companies, special emphasis has been put on the industrial relevance of results and methods.

  5. High performance computing in power and energy systems

    Energy Technology Data Exchange (ETDEWEB)

    Khaitan, Siddhartha Kumar [Iowa State Univ., Ames, IA (United States); Gupta, Anshul (eds.) [IBM Watson Research Center, Yorktown Heights, NY (United States)

    2013-07-01

    The twin challenge of meeting global energy demands in the face of growing economies and populations and restricting greenhouse gas emissions is one of the most daunting ones that humanity has ever faced. Smart electrical generation and distribution infrastructure will play a crucial role in meeting these challenges. We would need to develop capabilities to handle large volumes of data generated by the power system components like PMUs, DFRs and other data acquisition devices, as well as the capacity to process these data at high resolution via multi-scale and multi-period simulations, cascading and security analysis, interaction between hybrid systems (electric, transport, gas, oil, coal, etc.) and so on, to get meaningful information in real time to ensure a secure, reliable and stable power system grid. Advanced research on development and implementation of market-ready leading-edge high-speed enabling technologies and algorithms for solving real-time, dynamic, resource-critical problems will be required for dynamic security analysis targeted towards successful implementation of Smart Grid initiatives. This book aims to bring together some of the latest research developments as well as thoughts on the future research directions of high performance computing applications in electric power systems planning, operations, security, markets, and grid integration of alternate sources of energy, etc.

  6. High Performance Computing in Science and Engineering '02 : Transactions of the High Performance Computing Center

    CERN Document Server

    Jäger, Willi

    2003-01-01

    This book presents the state-of-the-art in modeling and simulation on supercomputers. Leading German research groups present their results achieved on high-end systems of the High Performance Computing Center Stuttgart (HLRS) for the year 2002. Reports cover all fields of supercomputing simulation ranging from computational fluid dynamics to computer science. Special emphasis is given to industrially relevant applications. Moreover, by presenting results for both vector systems and microprocessor-based systems, the book makes it possible to compare performance levels and usability of a variety of supercomputer architectures. It therefore becomes an indispensable guidebook for assessing the impact of the Japanese Earth Simulator project on supercomputing in the years to come.

  7. Resource allocation in grid computing

    NARCIS (Netherlands)

    Koole, Ger; Righter, Rhonda

    2007-01-01

    Grid computing, in which a network of computers is integrated to create a very fast virtual computer, is becoming ever more prevalent. Examples include the TeraGrid and Planet-lab.org, as well as applications on the existing Internet that take advantage of unused computing and storage capacity of

  8. DOE research in utilization of high-performance computers

    International Nuclear Information System (INIS)

    Buzbee, B.L.; Worlton, W.J.; Michael, G.; Rodrigue, G.

    1980-12-01

    Department of Energy (DOE) and other Government research laboratories depend on high-performance computer systems to accomplish their programmatic goals. As the most powerful computer systems become available, they are acquired by these laboratories so that advances can be made in their disciplines. These advances are often the result of added sophistication to numerical models whose execution is made possible by high-performance computer systems. However, high-performance computer systems have become increasingly complex; consequently, it has become increasingly difficult to realize their potential performance. The result is a need for research on issues related to the utilization of these systems. This report gives a brief description of high-performance computers, and then addresses the use of and future needs for high-performance computers within DOE, the growing complexity of applications within DOE, and areas of high-performance computer systems warranting research. 1 figure

  9. Peregrine System | High-Performance Computing | NREL

    Science.gov (United States)

    Classes of nodes that users access. Login Nodes: Peregrine has four login nodes, each of which has Intel E5 ... /scratch file systems; the /mss file system is mounted on all login nodes. Compute Nodes: Peregrine has 2592

  10. What Physicists Should Know About High Performance Computing - Circa 2002

    Science.gov (United States)

    Frederick, Donald

    2002-08-01

    High Performance Computing (HPC) is a dynamic, cross-disciplinary field that traditionally has involved applied mathematicians, computer scientists, and others primarily from the various disciplines that have been major users of HPC resources - physics, chemistry, engineering, with increasing use by those in the life sciences. There is a technological dynamic that is powered by economic as well as by technical innovations and developments. This talk will discuss practical ideas to be considered when developing numerical applications for research purposes. Even with the rapid pace of development in the field, the author believes that these concepts will not become obsolete for a while, and will be of use to scientists who are either considering, or have already started down, the HPC path. These principles will be applied in particular to current parallel HPC systems, but there will also be references of value to desktop users. The talk will cover such topics as: computing hardware basics, single-CPU optimization, compilers, timing, numerical libraries, debugging and profiling tools and the emergence of Computational Grids.

  11. Recent trends in grid computing

    International Nuclear Information System (INIS)

    Miura, Kenichi

    2004-01-01

    Grid computing is a technology which allows uniform and transparent access to geographically dispersed computational resources, such as computers, databases, experimental and observational equipment etc. via high-speed, high-bandwidth networking. The commonly used analogy is that of the electrical power grid, whereby household electricity is made available from outlets on the wall, and little thought needs to be given to where the electricity is generated and how it is transmitted. The usage of the grid also includes distributed parallel computing, high-throughput computing, data intensive computing (data grid) and collaborative computing. This paper reviews the historical background, software structure, current status and on-going grid projects, including applications of grid technology to nuclear fusion research. (author)

  12. HIGH PERFORMANCE PHOTOGRAMMETRIC PROCESSING ON COMPUTER CLUSTERS

    Directory of Open Access Journals (Sweden)

    V. N. Adrov

    2012-07-01

    Full Text Available Most CPU-consuming tasks in photogrammetric processing can be done in parallel. The algorithms take independent bits as input and produce independent bits as output. The independence of bits comes from the nature of such algorithms, since images, stereopairs or small image block parts can be processed independently. Many photogrammetric algorithms are fully automatic and do not require human intervention. Photogrammetric workstations can perform tie point measurements, DTM calculations, orthophoto construction, mosaicking and many other service operations in parallel using distributed calculations. Distributed calculations save time, reducing computations that would take several days to several hours. Modern trends in computer technology show an increasing number of CPU cores in workstations, increasing speed in local networks, and, as a result, dropping prices for supercomputers or computer clusters that can contain hundreds or even thousands of computing nodes. Common distributed processing in a DPW is usually targeted at interactive work with a limited number of CPU cores and is not optimized for centralized administration. The bottleneck of common distributed computing in photogrammetry can be the limited LAN throughput and storage performance, since the processing of huge amounts of large raster images is needed.
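
    The "independent blocks in, independent blocks out" property described above is what makes the distribution straightforward. A minimal sketch, with a toy per-block operation standing in for real photogrammetric processing and a local process pool standing in for cluster nodes, could look like this:

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

# An image is cut into blocks and every block is processed independently on a separate core.
# A real DPW would run orthorectification or tie-point matching per block; a simple
# radiometric adjustment stands in for that work here.
def process_block(block):
    return np.clip(block.astype(np.float32) * 1.2, 0, 255).astype(np.uint8)

def process_image(image, block_size=512):
    h, w = image.shape
    blocks, slots = [], []
    for r in range(0, h, block_size):
        for c in range(0, w, block_size):
            blocks.append(image[r:r + block_size, c:c + block_size])
            slots.append((r, c))
    out = np.empty_like(image)
    with ProcessPoolExecutor() as pool:                       # independent blocks, independent workers
        for (r, c), done in zip(slots, pool.map(process_block, blocks)):
            out[r:r + done.shape[0], c:c + done.shape[1]] = done
    return out

if __name__ == "__main__":
    img = np.random.randint(0, 256, (2048, 2048), dtype=np.uint8)
    print(process_image(img).shape)
```

    On a cluster the same decomposition applies, but, as the abstract notes, LAN throughput and storage performance then become the bottleneck because the blocks of large raster images must be shipped to and from the nodes.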

  13. Inclusive vision for high performance computing at the CSIR

    CSIR Research Space (South Africa)

    Gazendam, A

    2006-02-01

    Full Text Available and computationally intensive applications. A number of different technologies and standards were identified as core to the open and distributed high-performance infrastructure envisaged...

  14. High performance computations using dynamical nucleation theory

    International Nuclear Information System (INIS)

    Windus, T L; Crosby, L D; Kathmann, S M

    2008-01-01

    Chemists continue to explore the use of very large computations to perform simulations that describe the molecular level physics of critical challenges in science. In this paper, we describe the Dynamical Nucleation Theory Monte Carlo (DNTMC) model - a model for determining molecular scale nucleation rate constants - and its parallel capabilities. The potential for bottlenecks and the challenges to running on future petascale or larger resources are delineated. A 'master-slave' solution is proposed to scale to the petascale and will be developed in the NWChem software. In addition, mathematical and data analysis challenges are described
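
    The proposed master-slave layout can be sketched generically: slave processes run independent Monte Carlo chains while the master farms out work and aggregates the samples. The Metropolis target below is a toy stand-in (a Gaussian), not the DNTMC energy model.

```python
import math
import random
from multiprocessing import Pool

# Minimal master-slave sketch: each slave runs an independent Metropolis chain and returns
# a per-chain estimate; the master gathers and aggregates the estimates.
def slave_chain(args):
    seed, steps = args
    rng = random.Random(seed)
    x, total = 0.0, 0.0
    for _ in range(steps):
        trial = x + rng.uniform(-0.5, 0.5)
        if rng.random() < min(1.0, math.exp(-(trial**2 - x**2))):   # Metropolis accept/reject
            x = trial
        total += x * x
    return total / steps                      # per-chain estimate of <x^2>

if __name__ == "__main__":
    n_slaves, steps = 8, 20_000
    with Pool(n_slaves) as pool:              # the "master": farm out chains, gather estimates
        means = pool.map(slave_chain, [(seed, steps) for seed in range(n_slaves)])
    print("per-slave <x^2>:", [round(m, 3) for m in means])
    print("aggregated estimate:", round(sum(means) / len(means), 3))
```

    Because the chains are independent, this layout scales by simply adding more slaves, which is why a master-slave organisation is a natural route to petascale resources for sampling-based rate-constant calculations.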

  15. RISC Processors and High Performance Computing

    Science.gov (United States)

    Bailey, David H.; Saini, Subhash; Craw, James M. (Technical Monitor)

    1995-01-01

    This tutorial will discuss the top five RISC microprocessors and the parallel systems in which they are used. It will provide a unique cross-machine comparison not available elsewhere. The effective performance of these processors will be compared by citing standard benchmarks in the context of real applications. The latest NAS Parallel Benchmarks, both absolute performance and performance per dollar, will be listed. The next generation of the NPB will be described. The tutorial will conclude with a discussion of future directions in the field. Technology Transfer Considerations: All of these computer systems are commercially available internationally. Information about these processors is available in the public domain, mostly from the vendors themselves. The NAS Parallel Benchmarks and their results have been previously approved numerous times for public release, beginning back in 1991.

  16. High Performance Computing in Science and Engineering '08 : Transactions of the High Performance Computing Center

    CERN Document Server

    Kröner, Dietmar; Resch, Michael

    2009-01-01

    The discussions and plans on all scientific, advisory, and political levels to realize an even larger “European Supercomputer” in Germany, where the hardware costs alone will be hundreds of millions of Euros – much more than in the past – are getting closer to realization. As part of the strategy, the three national supercomputing centres HLRS (Stuttgart), NIC/JSC (Jülich) and LRZ (Munich) have formed the Gauss Centre for Supercomputing (GCS) as a new virtual organization enabled by an agreement between the Federal Ministry of Education and Research (BMBF) and the state ministries for research of Baden-Württemberg, Bayern, and Nordrhein-Westfalen. Already today, the GCS provides the most powerful high-performance computing infrastructure in Europe. Through GCS, HLRS participates in the European project PRACE (Partnership for Advanced Computing in Europe) and extends its reach to all European member countries. These activities align well with the activities of HLRS in the European HPC infrastructure...

  17. NINJA: Java for High Performance Numerical Computing

    Directory of Open Access Journals (Sweden)

    José E. Moreira

    2002-01-01

    Full Text Available When Java was first introduced, there was a perception that its many benefits came at a significant performance cost. In the particularly performance-sensitive field of numerical computing, initial measurements indicated a hundred-fold performance disadvantage for Java relative to more established languages such as Fortran and C. Although much progress has been made, and Java can now be competitive with C/C++ in many important situations, significant performance challenges remain. Existing Java virtual machines are not yet capable of performing the advanced loop transformations and automatic parallelization that are now common in state-of-the-art Fortran compilers. Java also has difficulties in implementing complex arithmetic efficiently. These performance deficiencies can be attacked with a combination of class libraries (packages, in Java) that implement truly multidimensional arrays and complex numbers, and new compiler techniques that exploit the properties of these class libraries to enable other, more conventional, optimizations. Two compiler techniques, versioning and semantic expansion, can be leveraged to allow fully automatic optimization and parallelization of Java code. Our measurements with the NINJA prototype Java environment show that Java can be competitive in performance with highly optimized and tuned Fortran code.

  18. GRID computing for experimental high energy physics

    International Nuclear Information System (INIS)

    Moloney, G.R.; Martin, L.; Seviour, E.; Taylor, G.N.; Moorhead, G.F.

    2002-01-01

    Full text: The Large Hadron Collider (LHC), to be completed at the CERN laboratory in 2006, will generate 11 petabytes of data per year. The processing of this large data stream requires a large, distributed computing infrastructure. A recent innovation in high performance distributed computing, the GRID, has been identified as an important tool in data analysis for the LHC. GRID computing has actual and potential application in many fields which require computationally intensive analysis of large, shared data sets. The Australian experimental High Energy Physics community has formed partnerships with the High Performance Computing community to establish a GRID node at the University of Melbourne. Through Australian membership of the ATLAS experiment at the LHC, Australian researchers have an opportunity to be involved in the European DataGRID project. This presentation will include an introduction to the GRID, and its application to experimental High Energy Physics. We will present the results of our studies, including participation in the first LHC data challenge

  19. Real-time Tsunami Inundation Prediction Using High Performance Computers

    Science.gov (United States)

    Oishi, Y.; Imamura, F.; Sugawara, D.

    2014-12-01

    Recently, off-shore tsunami observation stations based on cabled ocean-bottom pressure gauges are being actively deployed, especially in Japan. These cabled systems are designed to provide real-time tsunami data before tsunamis reach coastlines for disaster mitigation purposes. To receive real benefits from these observations, real-time analysis techniques that make effective use of the data are necessary. A representative study was made by Tsushima et al. (2009), who proposed a method to provide instant tsunami source prediction based on the acquired tsunami waveform data. As time passes, the prediction is improved by using updated waveform data. After a tsunami source is predicted, tsunami waveforms are synthesized from pre-computed tsunami Green functions of linear long wave equations. Tsushima et al. (2014) updated the method by combining the tsunami waveform inversion with an instant inversion of coseismic crustal deformation and improved the prediction accuracy and speed in the early stages. For disaster mitigation purposes, real-time predictions of tsunami inundation are also important. In this study, we discuss the possibility of real-time tsunami inundation predictions, which require faster-than-real-time tsunami inundation simulation in addition to instant tsunami source analysis. Although solving the non-linear shallow water equations for inundation prediction is computationally demanding, it has become feasible through the recent developments of high performance computing technologies. We conducted parallel computations of tsunami inundation and achieved 6.0 TFLOPS by using 19,000 CPU cores. We employed a leap-frog finite difference method with nested staggered grids whose resolutions range from 405 m to 5 m. The resolution ratio of each nested domain was 1/3. The total number of grid points was 13 million, and the time step was 0.1 seconds. Tsunami sources of the 2011 Tohoku-oki earthquake were tested. The inundation prediction up to 2 hours after the
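
    As an illustration of the numerical core only, a one-dimensional linear long-wave solver with a leap-frog-type update on a staggered grid can be sketched as below; the study itself solves the non-linear two-dimensional equations on nested 405 m to 5 m grids, and the depths, grid sizes and initial condition used here are invented.

```python
# 1D linear long-wave (shallow-water) solver on a staggered grid with a
# leap-frog-type update -- a toy illustration of the numerical core only.
import numpy as np

g, depth = 9.81, 4000.0               # gravity [m/s^2], uniform ocean depth [m]
nx, dx = 400, 2000.0                  # grid cells, spacing [m]
dt = 0.5 * dx / np.sqrt(g * depth)    # CFL-limited time step

eta = np.exp(-((np.arange(nx) - nx / 2) * dx / 50e3) ** 2)  # initial surface hump [m]
u = np.zeros(nx + 1)                  # velocities live on cell edges

for step in range(2000):
    # continuity: update the free surface from the velocity divergence
    eta -= dt / dx * depth * (u[1:] - u[:-1])
    # momentum: update interior edge velocities from the surface gradient
    u[1:-1] -= g * dt / dx * (eta[1:] - eta[:-1])
    # closed (reflective) boundaries: u stays 0 at both ends

print("max surface elevation after run: %.3f m" % eta.max())
```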

  20. High-performance parallel approaches for three-dimensional light detection and ranging point clouds gridding

    Science.gov (United States)

    Rizki, Permata Nur Miftahur; Lee, Heezin; Lee, Minsu; Oh, Sangyoon

    2017-01-01

    With the rapid advance of remote sensing technology, the amount of three-dimensional point-cloud data has increased extraordinarily, requiring faster processing in the construction of digital elevation models. There have been several attempts to accelerate the computation using parallel methods; however, little attention has been given to investigating different approaches for selecting the parallel programming model best suited to a given computing environment. We present our findings and insights identified by implementing three popular high-performance parallel approaches (message passing interface, MapReduce, and GPGPU) for the time-demanding but accurate kriging interpolation. The performances of the approaches are compared by varying the size of the grid and input data. In our empirical experiment, we demonstrate the significant acceleration achieved by all three approaches compared to a C-implemented sequential-processing method. In addition, we also discuss the pros and cons of each method in terms of usability, infrastructure complexity, and platform limitations to give readers a better understanding of how to utilize these parallel approaches for gridding purposes.
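
    A minimal message-passing sketch of the gridding pattern discussed above is shown below, with grid rows partitioned across mpi4py ranks; inverse-distance weighting is used as a simple stand-in for the kriging interpolation actually benchmarked, and the point cloud is synthetic.

```python
# Minimal MPI gridding sketch (run e.g.: mpiexec -n 4 python gridding.py).
# Grid rows are split across ranks; inverse-distance weighting stands in for
# the kriging interpolation benchmarked in the paper.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

rng = np.random.default_rng(42)                       # same synthetic points on every rank
pts = rng.random((5000, 3)) * [1000.0, 1000.0, 50.0]  # x, y, z of a fake point cloud

nx = ny = 200
xs = np.linspace(0, 1000, nx)
ys = np.linspace(0, 1000, ny)
my_rows = range(rank, ny, size)                       # round-robin row decomposition

local = np.zeros((len(my_rows), nx))
for i, row in enumerate(my_rows):
    for j, x in enumerate(xs):
        d2 = (pts[:, 0] - x) ** 2 + (pts[:, 1] - ys[row]) ** 2
        w = 1.0 / (d2 + 1e-6)                         # inverse-distance weights
        local[i, j] = np.sum(w * pts[:, 2]) / np.sum(w)

chunks = comm.gather((list(my_rows), local), root=0)  # collect the partial DEMs
if rank == 0:
    dem = np.zeros((ny, nx))
    for rows, block in chunks:
        dem[rows, :] = block
    print("DEM assembled, mean elevation %.2f m" % dem.mean())
```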

  1. Enabling Efficient Climate Science Workflows in High Performance Computing Environments

    Science.gov (United States)

    Krishnan, H.; Byna, S.; Wehner, M. F.; Gu, J.; O'Brien, T. A.; Loring, B.; Stone, D. A.; Collins, W.; Prabhat, M.; Liu, Y.; Johnson, J. N.; Paciorek, C. J.

    2015-12-01

    A typical climate science workflow often involves a combination of acquisition of data, modeling, simulation, analysis, visualization, publishing, and storage of results. Each of these tasks provides a myriad of challenges when running on a high performance computing environment such as Hopper or Edison at NERSC. Hurdles such as data transfer and management, job scheduling, parallel analysis routines, and publication require a lot of forethought and planning to ensure that proper quality control mechanisms are in place. These steps require effectively utilizing a combination of well-tested and newly developed functionality to move data, perform analysis, apply statistical routines, and finally, serve results and tools to the greater scientific community. As part of the CAlibrated and Systematic Characterization, Attribution and Detection of Extremes (CASCADE) project we highlight a stack of tools our team utilizes and has developed to ensure that large-scale simulation and analysis work is commonplace, providing operations that assist in everything from generation/procurement of data (HTAR/Globus) to automated publication of results to portals like the Earth Systems Grid Federation (ESGF), all while executing everything in between in a scalable environment in a task-parallel way (MPI). We highlight the use and benefit of these tools by showing several climate science analysis use cases to which they have been applied.
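
    The task-parallel (MPI) pattern mentioned above can be sketched as follows; the file names and the per-file analysis are placeholders for real climate diagnostics.

```python
# Task-parallel sketch: each MPI rank processes every size-th input file
# (run e.g.: mpiexec -n 8 python analyze.py). File names and the per-file
# "analysis" are placeholders for real climate diagnostics.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

files = [f"tasmax_day_{year}.nc" for year in range(1950, 2010)]  # hypothetical inputs

def analyze(path):
    # Placeholder: a real workflow would open the file (e.g. with netCDF4/xarray)
    # and compute an extreme-value statistic here.
    return len(path)

local_results = {f: analyze(f) for f in files[rank::size]}   # static round-robin split
all_results = comm.gather(local_results, root=0)

if rank == 0:
    merged = {k: v for d in all_results for k, v in d.items()}
    print(f"analyzed {len(merged)} files across {size} ranks")
```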

  2. Contemporary high performance computing from petascale toward exascale

    CERN Document Server

    Vetter, Jeffrey S

    2013-01-01

    Contemporary High Performance Computing: From Petascale toward Exascale focuses on the ecosystems surrounding the world's leading centers for high performance computing (HPC). It covers many of the important factors involved in each ecosystem: computer architectures, software, applications, facilities, and sponsors. The first part of the book examines significant trends in HPC systems, including computer architectures, applications, performance, and software. It discusses the growth from terascale to petascale computing and the influence of the TOP500 and Green500 lists. The second part of the

  3. CMS computing on grid

    International Nuclear Information System (INIS)

    Guan Wen; Sun Gongxing

    2007-01-01

    CMS has adopted a distributed system of services which implement the CMS application view on top of Grid services. An overview of CMS services will be covered. Emphasis is on CMS data management and workload management. (authors)

  4. Grid Computing Education Support

    Energy Technology Data Exchange (ETDEWEB)

    Steven Crumb

    2008-01-15

    The GGF Student Scholar program gave GGF the opportunity to bring over sixty qualified graduate and undergraduate students with interests in grid technologies to its three annual events over the three-year program.

  5. Enabling High-Performance Computing as a Service

    KAUST Repository

    AbdelBaky, Moustafa; Parashar, Manish; Kim, Hyunjoo; Jordan, Kirk E.; Sachdeva, Vipin; Sexton, James; Jamjoom, Hani; Shae, Zon-Yin; Pencheva, Gergina; Tavakoli, Reza; Wheeler, Mary F.

    2012-01-01

    With the right software infrastructure, clouds can provide scientists with as-a-service access to high-performance computing resources. An award-winning prototype framework transforms the Blue Gene/P system into an elastic cloud to run a

  6. High Performance Computing Modernization Program Kerberos Throughput Test Report

    Science.gov (United States)

    2017-10-26

    Naval Research Laboratory, Washington, DC 20375-5320. NRL/MR/5524--17-9751. High Performance Computing Modernization Program Kerberos Throughput Test Report. Daniel G. Gdula* and

  7. A high performance scientific cloud computing environment for materials simulations

    OpenAIRE

    Jorissen, Kevin; Vila, Fernando D.; Rehr, John J.

    2011-01-01

    We describe the development of a scientific cloud computing (SCC) platform that offers high performance computation capability. The platform consists of a scientific virtual machine prototype containing a UNIX operating system and several materials science codes, together with essential interface tools (an SCC toolset) that offers functionality comparable to local compute clusters. In particular, our SCC toolset provides automatic creation of virtual clusters for parallel computing, including...

  8. Quantum Accelerators for High-Performance Computing Systems

    OpenAIRE

    Britt, Keith A.; Mohiyaddin, Fahd A.; Humble, Travis S.

    2017-01-01

    We define some of the programming and system-level challenges facing the application of quantum processing to high-performance computing. Alongside barriers to physical integration, prominent differences in the execution of quantum and conventional programs challenge the intersection of these computational models. Following a brief overview of the state of the art, we discuss recent advances in programming and execution models for hybrid quantum-classical computing. We discuss a novel quantu...

  9. High Performance Networks From Supercomputing to Cloud Computing

    CERN Document Server

    Abts, Dennis

    2011-01-01

    Datacenter networks provide the communication substrate for large parallel computer systems that form the ecosystem for high performance computing (HPC) systems and modern Internet applications. The design of new datacenter networks is motivated by an array of applications ranging from communication intensive climatology, complex material simulations and molecular dynamics to such Internet applications as Web search, language translation, collaborative Internet applications, streaming video and voice-over-IP. For both Supercomputing and Cloud Computing the network enables distributed applicati

  10. High Performance Computing Software Applications for Space Situational Awareness

    Science.gov (United States)

    Giuliano, C.; Schumacher, P.; Matson, C.; Chun, F.; Duncan, B.; Borelli, K.; Desonia, R.; Gusciora, G.; Roe, K.

    The High Performance Computing Software Applications Institute for Space Situational Awareness (HSAI-SSA) has completed its first full year of applications development. The emphasis of our work in this first year was on improving space surveillance sensor models and image enhancement software. These applications are the Space Surveillance Network Analysis Model (SSNAM), the Air Force Space Fence simulation (SimFence), and the physically constrained iterative de-convolution (PCID) image enhancement software tool. Specifically, we have demonstrated an order-of-magnitude speed-up in those codes running on the latest Cray XD-1 Linux supercomputer (Hoku) at the Maui High Performance Computing Center. The software application improvements that HSAI-SSA has made have had a significant impact on the warfighter and have fundamentally changed the role of high performance computing in SSA.

  11. Grid computing the European Data Grid Project

    CERN Document Server

    Segal, B; Gagliardi, F; Carminati, F

    2000-01-01

    The goal of this project is the development of a novel environment to support globally distributed scientific exploration involving multi- PetaByte datasets. The project will devise and develop middleware solutions and testbeds capable of scaling to handle many PetaBytes of distributed data, tens of thousands of resources (processors, disks, etc.), and thousands of simultaneous users. The scale of the problem and the distribution of the resources and user community preclude straightforward replication of the data at different sites, while the aim of providing a general purpose application environment precludes distributing the data using static policies. We will construct this environment by combining and extending newly emerging "Grid" technologies to manage large distributed datasets in addition to computational elements. A consequence of this project will be the emergence of fundamental new modes of scientific exploration, as access to fundamental scientific data is no longer constrained to the producer of...

  12. Micromagnetics on high-performance workstation and mobile computational platforms

    Science.gov (United States)

    Fu, S.; Chang, R.; Couture, S.; Menarini, M.; Escobar, M. A.; Kuteifan, M.; Lubarda, M.; Gabay, D.; Lomakin, V.

    2015-05-01

    The feasibility of using high-performance desktop and embedded mobile computational platforms is presented, including multi-core Intel central processing units, Nvidia desktop graphics processing units, and the Nvidia Jetson TK1 platform. The FastMag finite-element-method-based micromagnetic simulator is used as a testbed, showing high efficiency on all the platforms. Optimization aspects of improving the performance of the mobile systems are discussed. The high performance, low cost, low power consumption, and rapid performance increase of the embedded mobile systems make them a promising candidate for micromagnetic simulations. Such architectures can be used as standalone systems or can be built into low-power computing clusters.

  13. High performance computing and communications: FY 1997 implementation plan

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-12-01

    The High Performance Computing and Communications (HPCC) Program was formally authorized by passage, with bipartisan support, of the High-Performance Computing Act of 1991, signed on December 9, 1991. The original Program, in which eight Federal agencies participated, has now grown to twelve agencies. This Plan provides a detailed description of the agencies' FY 1996 HPCC accomplishments and FY 1997 HPCC plans. Section 3 of this Plan provides an overview of the HPCC Program. Section 4 contains more detailed definitions of the Program Component Areas, with an emphasis on the overall directions and milestones planned for each PCA. Appendix A provides a detailed look at HPCC Program activities within each agency.

  14. Visualization and Data Analysis for High-Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Sewell, Christopher Meyer [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-09-27

    This is a set of slides from a guest lecture for a class at the University of Texas, El Paso on visualization and data analysis for high-performance computing. The topics covered are the following: trends in high-performance computing; scientific visualization, such as OpenGL, ray tracing and volume rendering, VTK, and ParaView; data science at scale, such as in-situ visualization, image databases, distributed memory parallelism, shared memory parallelism, VTK-m, "big data", and then an analysis example.

  15. High Performance Computing in Science and Engineering '14

    CERN Document Server

    Kröner, Dietmar; Resch, Michael

    2015-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS). The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and   engineers. The book comes with a wealth of color illustrations and tables of results.  

  16. Quantum Accelerators for High-performance Computing Systems

    Energy Technology Data Exchange (ETDEWEB)

    Humble, Travis S. [ORNL; Britt, Keith A. [ORNL; Mohiyaddin, Fahd A. [ORNL

    2017-11-01

    We define some of the programming and system-level challenges facing the application of quantum processing to high-performance computing. Alongside barriers to physical integration, prominent differences in the execution of quantum and conventional programs challenge the intersection of these computational models. Following a brief overview of the state of the art, we discuss recent advances in programming and execution models for hybrid quantum-classical computing. We discuss a novel quantum-accelerator framework that uses specialized kernels to offload select workloads while integrating with existing computing infrastructure. We elaborate on the role of the host operating system in managing these unique accelerator resources, the prospects for deploying quantum modules, and the requirements placed on the language hierarchy connecting these different system components. We draw on recent advances in the modeling and simulation of quantum computing systems with the development of architectures for hybrid high-performance computing systems and the realization of software stacks for controlling quantum devices. Finally, we present simulation results that describe the expected system-level behavior of high-performance computing systems composed from compute nodes with quantum processing units. We describe performance for these hybrid systems in terms of time-to-solution, accuracy, and energy consumption, and we use simple application examples to estimate the performance advantage of quantum acceleration.

  17. Optical interconnection networks for high-performance computing systems

    International Nuclear Information System (INIS)

    Biberman, Aleksandr; Bergman, Keren

    2012-01-01

    Enabled by silicon photonic technology, optical interconnection networks have the potential to be a key disruptive technology in computing and communication industries. The enduring pursuit of performance gains in computing, combined with stringent power constraints, has fostered the ever-growing computational parallelism associated with chip multiprocessors, memory systems, high-performance computing systems and data centers. Sustaining these parallelism growths introduces unique challenges for on- and off-chip communications, shifting the focus toward novel and fundamentally different communication approaches. Chip-scale photonic interconnection networks, enabled by high-performance silicon photonic devices, offer unprecedented bandwidth scalability with reduced power consumption. We demonstrate that the silicon photonic platforms have already produced all the high-performance photonic devices required to realize these types of networks. Through extensive empirical characterization in much of our work, we demonstrate such feasibility of waveguides, modulators, switches and photodetectors. We also demonstrate systems that simultaneously combine many functionalities to achieve more complex building blocks. We propose novel silicon photonic devices, subsystems, network topologies and architectures to enable unprecedented performance of these photonic interconnection networks. Furthermore, the advantages of photonic interconnection networks extend far beyond the chip, offering advanced communication environments for memory systems, high-performance computing systems, and data centers. (review article)

  18. High performance computing system in the framework of the Higgs boson studies

    CERN Document Server

    Belyaev, Nikita; The ATLAS collaboration

    2017-01-01

    Higgs boson physics is one of the most important and promising fields of study in modern High Energy Physics. To perform precision measurements of the Higgs boson properties, fast and efficient instruments for Monte Carlo event simulation are required. Due to the increasing amount of data and the growing complexity of the simulation software tools, the computing resources currently available for Monte Carlo simulation on the LHC GRID are not sufficient. One possibility for addressing this shortfall of computing resources is the usage of institutes' computer clusters, commercial computing resources and supercomputers. In this paper, a brief description of Higgs boson physics and of the Monte Carlo generation and event simulation techniques is presented. A description of modern high performance computing systems and tests of their performance are also discussed. These studies have been performed on the Worldwide LHC Computing Grid and the Kurchatov Institute Data Processing Center, including Tier...

  19. Integrated State Estimation and Contingency Analysis Software Implementation using High Performance Computing Techniques

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Yousu; Glaesemann, Kurt R.; Rice, Mark J.; Huang, Zhenyu

    2015-12-31

    Power system simulation tools are traditionally developed in sequential mode, with codes optimized for single-core computing only. However, the increasing complexity of power grid models requires more intensive computation, and traditional simulation tools will soon be unable to meet grid operation requirements. Power system simulation tools therefore need to evolve to provide faster and better results for grid operations. This paper presents an integrated state estimation and contingency analysis software implementation using high performance computing techniques. The software is able to solve large state estimation problems within one second and achieves a near-linear speedup of 9,800 with 10,000 cores for the contingency analysis application. A performance evaluation is presented to show its effectiveness.
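
    The contingency analysis described above is naturally parallel, since each outage case can be solved independently. The sketch below screens N-1 line outages of a made-up four-bus network with a DC power flow, distributing the cases over CPU cores with multiprocessing; a production tool would use full AC models and MPI-scale parallelism.

```python
# Toy N-1 contingency screening with a DC power flow, parallelized across
# CPU cores with multiprocessing. The 4-bus network, line limits and loads
# are invented for illustration only.
from multiprocessing import Pool

import numpy as np

# line list: (from_bus, to_bus, susceptance [p.u.], flow limit [p.u.])
LINES = [(0, 1, 10.0, 1.0), (0, 2, 8.0, 1.0), (1, 2, 5.0, 1.0),
         (1, 3, 12.0, 1.0), (2, 3, 6.0, 1.0)]
P = np.array([1.5, -0.5, -0.4, -0.6])          # injections; bus 0 is the slack/generator

def dc_power_flow(lines):
    n = 4
    B = np.zeros((n, n))                        # susceptance (Laplacian) matrix
    for i, j, b, _ in lines:
        B[i, i] += b; B[j, j] += b
        B[i, j] -= b; B[j, i] -= b
    theta = np.zeros(n)
    theta[1:] = np.linalg.solve(B[1:, 1:], P[1:])   # slack angle fixed to 0
    return [(i, j, b * (theta[i] - theta[j]), limit) for i, j, b, limit in lines]

def screen_contingency(k):
    """Drop line k, re-solve, and report any overloaded lines."""
    flows = dc_power_flow([l for idx, l in enumerate(LINES) if idx != k])
    return k, [(i, j) for i, j, f, limit in flows if abs(f) > limit]

if __name__ == "__main__":
    with Pool() as pool:
        for k, overloads in pool.map(screen_contingency, range(len(LINES))):
            print(f"outage of line {LINES[k][:2]}: overloads {overloads or 'none'}")
```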

  20. Software Systems for High-performance Quantum Computing

    Energy Technology Data Exchange (ETDEWEB)

    Humble, Travis S [ORNL; Britt, Keith A [ORNL

    2016-01-01

    Quantum computing promises new opportunities for solving hard computational problems, but harnessing this novelty requires breakthrough concepts in the design, operation, and application of computing systems. We define some of the challenges facing the development of quantum computing systems as well as software-based approaches that can be used to overcome these challenges. Following a brief overview of the state of the art, we present models for quantum programming and execution, the development of architectures for hybrid high-performance computing systems, and the realization of software stacks for quantum networking. This leads to a discussion of the role that conventional computing plays in the quantum paradigm and how some of the current challenges for exascale computing overlap with those facing quantum computing.

  1. High-performance scientific computing in the cloud

    Science.gov (United States)

    Jorissen, Kevin; Vila, Fernando; Rehr, John

    2011-03-01

    Cloud computing has the potential to open up high-performance computational science to a much broader class of researchers, owing to its ability to provide on-demand, virtualized computational resources. However, before such approaches can become commonplace, user-friendly tools must be developed that hide the unfamiliar cloud environment and streamline the management of cloud resources for many scientific applications. We have recently shown that high-performance cloud computing is feasible for parallelized x-ray spectroscopy calculations. We now present benchmark results for a wider selection of scientific applications focusing on electronic structure and spectroscopic simulation software in condensed matter physics. These applications are driven by an improved portable interface that can manage virtual clusters and run various applications in the cloud. We also describe a next generation of cluster tools, aimed at improved performance and a more robust cluster deployment. Supported by NSF grant OCI-1048052.

  2. GPU-based high-performance computing for radiation therapy

    International Nuclear Information System (INIS)

    Jia, Xun; Jiang, Steve B; Ziegenhein, Peter

    2014-01-01

    Recent developments in radiation therapy demand high computational power to solve challenging problems in a timely fashion in a clinical environment. The graphics processing unit (GPU), as an emerging high-performance computing platform, has been introduced to radiotherapy. It is particularly attractive due to its high computational power, small size, and low cost for facility deployment and maintenance. Over the past few years, GPU-based high-performance computing in radiotherapy has experienced rapid development. A tremendous amount of study has been conducted, in which large acceleration factors compared with the conventional CPU platform have been observed. In this paper, we will first give a brief introduction to the GPU hardware structure and programming model. We will then review the current applications of GPU in major imaging-related and therapy-related problems encountered in radiotherapy. A comparison of GPU with other platforms will also be presented. (topical review)

  3. Enabling High-Performance Computing as a Service

    KAUST Repository

    AbdelBaky, Moustafa

    2012-10-01

    With the right software infrastructure, clouds can provide scientists with as-a-service access to high-performance computing resources. An award-winning prototype framework transforms the Blue Gene/P system into an elastic cloud to run a representative HPC application. © 2012 IEEE.

  4. High-performance computing; Informatica del alto rendimiento

    Energy Technology Data Exchange (ETDEWEB)

    Moraleda, A.

    2008-07-01

    High-performance computing is taking shape as a powerful accelerator of innovation, drastically reducing the waiting time for results and findings in a growing number of processes and activities as complex and important as medicine, genetics, pharmacology, the environment, natural resource management or the simulation of complex processes in a wide variety of industries. (Author)

  5. Contemporary high performance computing from petascale toward exascale

    CERN Document Server

    Vetter, Jeffrey S

    2015-01-01

    A continuation of Contemporary High Performance Computing: From Petascale toward Exascale, this second volume continues the discussion of HPC flagship systems, major application workloads, facilities, and sponsors. The book includes figures and pictures that capture the state of existing systems: pictures of buildings, systems in production, floorplans, and many block diagrams and charts to illustrate system design and performance.

  6. Enabling high performance computational science through combinatorial algorithms

    International Nuclear Information System (INIS)

    Boman, Erik G; Bozdag, Doruk; Catalyurek, Umit V; Devine, Karen D; Gebremedhin, Assefaw H; Hovland, Paul D; Pothen, Alex; Strout, Michelle Mills

    2007-01-01

    The Combinatorial Scientific Computing and Petascale Simulations (CSCAPES) Institute is developing algorithms and software for combinatorial problems that play an enabling role in scientific and engineering computations. Discrete algorithms will be increasingly critical for achieving high performance for irregular problems on petascale architectures. This paper describes recent contributions by researchers at the CSCAPES Institute in the areas of load balancing, parallel graph coloring, performance improvement, and parallel automatic differentiation
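
    Parallel graph coloring is one of the combinatorial kernels named above. The sequential greedy coloring it builds on (parallel variants color speculatively and then repair conflicts) can be sketched as follows; the example graph is arbitrary.

```python
# Greedy distance-1 graph coloring -- the sequential kernel underlying the
# parallel (speculate-and-repair) coloring algorithms used for problems such
# as sparse Jacobian compression. The example graph is arbitrary.
def greedy_coloring(adjacency):
    """Assign each vertex the smallest color not used by its neighbors."""
    colors = {}
    for v in adjacency:                  # visit order affects color count, not correctness
        used = {colors[u] for u in adjacency[v] if u in colors}
        c = 0
        while c in used:
            c += 1
        colors[v] = c
    return colors

graph = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3], 3: [1, 2, 4], 4: [3]}
coloring = greedy_coloring(graph)
print(coloring, "colors used:", max(coloring.values()) + 1)
```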

  7. Enabling high performance computational science through combinatorial algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Boman, Erik G [Discrete Algorithms and Math Department, Sandia National Laboratories (United States); Bozdag, Doruk [Biomedical Informatics, and Electrical and Computer Engineering, Ohio State University (United States); Catalyurek, Umit V [Biomedical Informatics, and Electrical and Computer Engineering, Ohio State University (United States); Devine, Karen D [Discrete Algorithms and Math Department, Sandia National Laboratories (United States); Gebremedhin, Assefaw H [Computer Science and Center for Computational Science, Old Dominion University (United States); Hovland, Paul D [Mathematics and Computer Science Division, Argonne National Laboratory (United States); Pothen, Alex [Computer Science and Center for Computational Science, Old Dominion University (United States); Strout, Michelle Mills [Computer Science, Colorado State University (United States)

    2007-07-15

    The Combinatorial Scientific Computing and Petascale Simulations (CSCAPES) Institute is developing algorithms and software for combinatorial problems that play an enabling role in scientific and engineering computations. Discrete algorithms will be increasingly critical for achieving high performance for irregular problems on petascale architectures. This paper describes recent contributions by researchers at the CSCAPES Institute in the areas of load balancing, parallel graph coloring, performance improvement, and parallel automatic differentiation.

  8. High Performance Computing in Science and Engineering '16 : Transactions of the High Performance Computing Center, Stuttgart (HLRS) 2016

    CERN Document Server

    Kröner, Dietmar; Resch, Michael

    2016-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS) in 2016. The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  9. A high performance scientific cloud computing environment for materials simulations

    Science.gov (United States)

    Jorissen, K.; Vila, F. D.; Rehr, J. J.

    2012-09-01

    We describe the development of a scientific cloud computing (SCC) platform that offers high performance computation capability. The platform consists of a scientific virtual machine prototype containing a UNIX operating system and several materials science codes, together with essential interface tools (an SCC toolset) that offers functionality comparable to local compute clusters. In particular, our SCC toolset provides automatic creation of virtual clusters for parallel computing, including tools for execution and monitoring performance, as well as efficient I/O utilities that enable seamless connections to and from the cloud. Our SCC platform is optimized for the Amazon Elastic Compute Cloud (EC2). We present benchmarks for prototypical scientific applications and demonstrate performance comparable to local compute clusters. To facilitate code execution and provide user-friendly access, we have also integrated cloud computing capability in a JAVA-based GUI. Our SCC platform may be an alternative to traditional HPC resources for materials science or quantum chemistry applications.
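
    The kind of virtual-cluster provisioning such a toolset automates can be sketched with the public boto3 EC2 API, as below; the AMI, key pair and security group identifiers are placeholders, and this is not the authors' SCC toolset.

```python
# Sketch of provisioning a small virtual cluster on EC2 with boto3 -- the
# general pattern a cloud toolset like the one described automates. The AMI,
# key pair and security group below are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

resp = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",            # placeholder: image with the science codes
    InstanceType="c5.xlarge",
    MinCount=4, MaxCount=4,                     # one head node + three compute nodes
    KeyName="my-keypair",                       # placeholder key pair
    SecurityGroupIds=["sg-0123456789abcdef0"],  # must allow intra-cluster MPI traffic
)
ids = [inst["InstanceId"] for inst in resp["Instances"]]

ec2.get_waiter("instance_running").wait(InstanceIds=ids)
hosts = ec2.describe_instances(InstanceIds=ids)
private_ips = [i["PrivateIpAddress"]
               for r in hosts["Reservations"] for i in r["Instances"]]
print("cluster nodes:", private_ips)            # next step: write an MPI hostfile from these
```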

  10. High performance computing and communications: FY 1996 implementation plan

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1995-05-16

    The High Performance Computing and Communications (HPCC) Program was formally authorized by passage of the High Performance Computing Act of 1991, signed on December 9, 1991. Twelve federal agencies, in collaboration with scientists and managers from US industry, universities, and research laboratories, have developed the Program to meet the challenges of advancing computing and associated communications technologies and practices. This plan provides a detailed description of the agencies' HPCC implementation plans for FY 1995 and FY 1996. This Implementation Plan contains three additional sections. Section 3 provides an overview of the HPCC Program definition and organization. Section 4 contains a breakdown of the five major components of the HPCC Program, with an emphasis on the overall directions and milestones planned for each one. Section 5 provides a detailed look at HPCC Program activities within each agency.

  11. Incremental Trust in Grid Computing

    DEFF Research Database (Denmark)

    Brinkløv, Michael Hvalsøe; Sharp, Robin

    2007-01-01

    This paper describes a comparative simulation study of some incremental trust and reputation algorithms for handling behavioural trust in large distributed systems. Two types of reputation algorithm (based on discrete and Bayesian evaluation of ratings) and two ways of combining direct trust and ...... of Grid computing systems....

  12. Cactus and Visapult: An ultra-high performance grid-distributedvisualization architecture using connectionless protocols

    Energy Technology Data Exchange (ETDEWEB)

    Bethel, E. Wes; Shalf, John

    2002-08-31

    This past decade has seen rapid growth in the size, resolution, and complexity of Grand Challenge simulation codes. This trend is accompanied by a trend towards multinational, multidisciplinary teams who carry out this research in distributed teams, and the corresponding growth of Grid infrastructure to support these widely distributed Virtual Organizations. As the number and diversity of distributed teams grow, the need for visualization tools to analyze and display multi-terabyte, remote data becomes more pronounced and more urgent. One such tool that has been successfully used to address this problem is Visapult. Visapult is a parallel visualization tool that employs Grid-distributed components, latency tolerant visualization and graphics algorithms, along with high performance network I/O, in order to achieve effective remote analysis of massive datasets. In this paper we discuss improvements to network bandwidth utilization and responsiveness of the Visapult application that result from using connectionless protocols to move data payload between the distributed Visapult components and a Grid-enabled, high performance physics simulation used to study gravitational waveforms of colliding black holes: the Cactus code. These improvements have boosted Visapult's network efficiency to 88-96 percent of the maximum theoretical available bandwidth on multi-gigabit Wide Area Networks, and greatly enhanced interactivity. Such improvements are critically important for future development of effective interactive Grid applications.
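
    The connectionless transport referred to above is UDP-based. A bare-bones Python illustration of pushing a data payload as UDP datagrams is given below; Visapult adds rate control, sequencing and reassembly on top of this, none of which is shown.

```python
# Bare-bones UDP (connectionless) payload push: no handshake, no
# acknowledgements, no retransmission. Lost datagrams are simply tolerated.
import socket
import threading
import time

import numpy as np

HOST, PORT, CHUNK = "127.0.0.1", 9999, 1400        # chunk size below a typical MTU

def sender(payload: bytes):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for off in range(0, len(payload), CHUNK):
        sock.sendto(payload[off:off + CHUNK], (HOST, PORT))   # fire and forget
    sock.close()

def receiver(expected_chunks: int, out: list):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((HOST, PORT))
    sock.settimeout(1.0)
    got = 0
    try:
        while got < expected_chunks:
            sock.recvfrom(2048)
            got += 1
    except socket.timeout:
        pass                                        # missing datagrams are not re-requested
    sock.close()
    out.append(got)

if __name__ == "__main__":
    data = np.random.rand(100_000).astype(np.float32).tobytes()   # fake simulation field
    n_chunks = -(-len(data) // CHUNK)
    result = []
    rx = threading.Thread(target=receiver, args=(n_chunks, result))
    rx.start()
    time.sleep(0.2)                                 # let the receiver bind first
    sender(data)
    rx.join()
    print(f"received {result[0]} of {n_chunks} datagrams")
```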

  13. 3rd International Conference on High Performance Scientific Computing

    CERN Document Server

    Kostina, Ekaterina; Phu, Hoang; Rannacher, Rolf

    2008-01-01

    This proceedings volume contains a selection of papers presented at the Third International Conference on High Performance Scientific Computing held at the Hanoi Institute of Mathematics, Vietnamese Academy of Science and Technology (VAST), March 6-10, 2006. The conference has been organized by the Hanoi Institute of Mathematics, Interdisciplinary Center for Scientific Computing (IWR), Heidelberg, and its International PhD Program ``Complex Processes: Modeling, Simulation and Optimization'', and Ho Chi Minh City University of Technology. The contributions cover the broad interdisciplinary spectrum of scientific computing and present recent advances in theory, development of methods, and applications in practice. Subjects covered are mathematical modelling, numerical simulation, methods for optimization and control, parallel computing, software development, applications of scientific computing in physics, chemistry, biology and mechanics, environmental and hydrology problems, transport, logistics and site loca...

  14. 5th International Conference on High Performance Scientific Computing

    CERN Document Server

    Hoang, Xuan; Rannacher, Rolf; Schlöder, Johannes

    2014-01-01

    This proceedings volume gathers a selection of papers presented at the Fifth International Conference on High Performance Scientific Computing, which took place in Hanoi on March 5-9, 2012. The conference was organized by the Institute of Mathematics of the Vietnam Academy of Science and Technology (VAST), the Interdisciplinary Center for Scientific Computing (IWR) of Heidelberg University, Ho Chi Minh City University of Technology, and the Vietnam Institute for Advanced Study in Mathematics. The contributions cover the broad interdisciplinary spectrum of scientific computing and present recent advances in theory, development of methods, and practical applications. Subjects covered include mathematical modeling; numerical simulation; methods for optimization and control; parallel computing; software development; and applications of scientific computing in physics, mechanics and biomechanics, material science, hydrology, chemistry, biology, biotechnology, medicine, sports, psychology, transport, logistics, com...

  15. 6th International Conference on High Performance Scientific Computing

    CERN Document Server

    Phu, Hoang; Rannacher, Rolf; Schlöder, Johannes

    2017-01-01

    This proceedings volume highlights a selection of papers presented at the Sixth International Conference on High Performance Scientific Computing, which took place in Hanoi, Vietnam on March 16-20, 2015. The conference was jointly organized by the Heidelberg Institute of Theoretical Studies (HITS), the Institute of Mathematics of the Vietnam Academy of Science and Technology (VAST), the Interdisciplinary Center for Scientific Computing (IWR) at Heidelberg University, and the Vietnam Institute for Advanced Study in Mathematics, Ministry of Education. The contributions cover a broad, interdisciplinary spectrum of scientific computing and showcase recent advances in theory, methods, and practical applications. Subjects covered include numerical simulation, methods for optimization and control, parallel computing, and software development, as well as the applications of scientific computing in physics, mechanics, biomechanics and robotics, material science, hydrology, biotechnology, medicine, transport, scheduling, and in...

  16. Unravelling the structure of matter on high-performance computers

    International Nuclear Information System (INIS)

    Kieu, T.D.; McKellar, B.H.J.

    1992-11-01

    The various phenomena and the different forms of matter in nature are believed to be the manifestation of only a handful of fundamental building blocks - the elementary particles - which interact through the four fundamental forces. In the study of the structure of matter at this level one has to consider forces which are not sufficiently weak to be treated as small perturbations to the system, an example of which is the strong force that binds the nucleons together. High-performance computers, both vector and parallel machines, have facilitated the necessary non-perturbative treatments. The principles and the techniques of computer simulations applied to Quantum Chromodynamics are explained; examples include the strong interactions, the calculation of the mass of nucleons and their decay rates. Some commercial and special-purpose high-performance machines for such calculations are also mentioned. 3 refs., 2 tabs

  17. Building a High Performance Computing Infrastructure for Novosibirsk Scientific Center

    International Nuclear Information System (INIS)

    Adakin, A; Chubarov, D; Nikultsev, V; Belov, S; Kaplin, V; Sukharev, A; Zaytsev, A; Kalyuzhny, V; Kuchin, N; Lomakin, S

    2011-01-01

    Novosibirsk Scientific Center (NSC), also known worldwide as Akademgorodok, is one of the largest Russian scientific centers, hosting Novosibirsk State University (NSU) and more than 35 research organizations of the Siberian Branch of the Russian Academy of Sciences, including the Budker Institute of Nuclear Physics (BINP), the Institute of Computational Technologies (ICT), and the Institute of Computational Mathematics and Mathematical Geophysics (ICM and MG). Since each institute has specific requirements on the architecture of the computing farms involved in its research field, there are currently several computing facilities hosted by NSC institutes, each optimized for a particular set of tasks, of which the largest are the NSU Supercomputer Center, the Siberian Supercomputer Center (ICM and MG), and the Grid Computing Facility of BINP. Recently a dedicated optical network with an initial bandwidth of 10 Gbps connecting these three facilities was built in order to make it possible to share the computing resources among the research communities of the participating institutes, thus providing a common platform for building the computing infrastructure for various scientific projects. Unification of the computing infrastructure is achieved by extensive use of virtualization technologies based on the XEN and KVM platforms. The solution implemented was tested thoroughly within the computing environment of the KEDR detector experiment, which is being carried out at BINP, and is foreseen to be applied to the use cases of other HEP experiments in the near future.

  18. Multi-Language Programming Environments for High Performance Java Computing

    OpenAIRE

    Vladimir Getov; Paul Gray; Sava Mintchev; Vaidy Sunderam

    1999-01-01

    Recent developments in processor capabilities, software tools, programming languages and programming paradigms have brought about new approaches to high performance computing. A steadfast component of this dynamic evolution has been the scientific community’s reliance on established scientific packages. As a consequence, programmers of high‐performance applications are reluctant to embrace evolving languages such as Java. This paper describes the Java‐to‐C Interface (JCI) tool which provides ...

  19. High-Performance Java Codes for Computational Fluid Dynamics

    Science.gov (United States)

    Riley, Christopher; Chatterjee, Siddhartha; Biswas, Rupak; Biegel, Bryan (Technical Monitor)

    2001-01-01

    The computational science community is reluctant to write large-scale, computationally intensive applications in Java due to concerns over Java's poor performance, despite the claimed software engineering advantages of its object-oriented features. Naive Java implementations of numerical algorithms can perform poorly compared to corresponding Fortran or C implementations. To achieve high performance, Java applications must be designed with good performance as a primary goal. This paper presents the object-oriented design and implementation of two real-world applications from the field of Computational Fluid Dynamics (CFD): a finite-volume fluid flow solver (LAURA, from NASA Langley Research Center), and an unstructured mesh adaptation algorithm (2D_TAG, from NASA Ames Research Center). This work builds on our previous experience with the design of high-performance numerical libraries in Java. We examine the performance of the applications using the currently available Java infrastructure and show that the Java version of the flow solver LAURA performs almost within a factor of 2 of the original procedural version. Our Java version of the mesh adaptation algorithm 2D_TAG performs within a factor of 1.5 of its original procedural version on certain platforms. Our results demonstrate that object-oriented software design principles are not necessarily inimical to high performance.

  20. Component-based software for high-performance scientific computing

    Energy Technology Data Exchange (ETDEWEB)

    Alexeev, Yuri; Allan, Benjamin A; Armstrong, Robert C; Bernholdt, David E; Dahlgren, Tamara L; Gannon, Dennis; Janssen, Curtis L; Kenny, Joseph P; Krishnan, Manojkumar; Kohl, James A; Kumfert, Gary; McInnes, Lois Curfman; Nieplocha, Jarek; Parker, Steven G; Rasmussen, Craig; Windus, Theresa L

    2005-01-01

    Recent advances in both computational hardware and multidisciplinary science have given rise to an unprecedented level of complexity in scientific simulation software. This paper describes an ongoing grass roots effort aimed at addressing complexity in high-performance computing through the use of Component-Based Software Engineering (CBSE). Highlights of the benefits and accomplishments of the Common Component Architecture (CCA) Forum and SciDAC ISIC are given, followed by an illustrative example of how the CCA has been applied to drive scientific discovery in quantum chemistry. Thrusts for future research are also described briefly.

  1. Component-based software for high-performance scientific computing

    International Nuclear Information System (INIS)

    Alexeev, Yuri; Allan, Benjamin A; Armstrong, Robert C; Bernholdt, David E; Dahlgren, Tamara L; Gannon, Dennis; Janssen, Curtis L; Kenny, Joseph P; Krishnan, Manojkumar; Kohl, James A; Kumfert, Gary; McInnes, Lois Curfman; Nieplocha, Jarek; Parker, Steven G; Rasmussen, Craig; Windus, Theresa L

    2005-01-01

    Recent advances in both computational hardware and multidisciplinary science have given rise to an unprecedented level of complexity in scientific simulation software. This paper describes an ongoing grass roots effort aimed at addressing complexity in high-performance computing through the use of Component-Based Software Engineering (CBSE). Highlights of the benefits and accomplishments of the Common Component Architecture (CCA) Forum and SciDAC ISIC are given, followed by an illustrative example of how the CCA has been applied to drive scientific discovery in quantum chemistry. Thrusts for future research are also described briefly

  2. Nuclear forces and high-performance computing: The perfect match

    International Nuclear Information System (INIS)

    Luu, T; Walker-Loud, A

    2009-01-01

    High-performance computing is now enabling the calculation of certain hadronic interaction parameters directly from Quantum Chromodynamics, the quantum field theory that governs the behavior of quarks and gluons and is ultimately responsible for the nuclear strong force. In this paper we briefly describe the state of the field and show how other aspects of hadronic interactions will be ascertained in the near future. We give estimates of computational requirements needed to obtain these goals, and outline a procedure for incorporating these results into the broader nuclear physics community.

  3. A High Performance VLSI Computer Architecture For Computer Graphics

    Science.gov (United States)

    Chin, Chi-Yuan; Lin, Wen-Tai

    1988-10-01

    A VLSI computer architecture, consisting of multiple processors, is presented in this paper to satisfy modern computer graphics demands, e.g. high resolution, realistic animation, real-time display, etc. All processors share a global memory which is partitioned into multiple banks. Through a crossbar network, data from one memory bank can be broadcast to many processors. Processors are physically interconnected through a hyper-crossbar network (a crossbar-like network). By programming the network, the topology of communication links among processors can be reconfigured to satisfy the specific dataflows of different applications. Each processor consists of a controller, arithmetic operators, local memory, a local crossbar network, and I/O ports to communicate with other processors, memory banks, and a system controller. Operations in each processor are characterized into two modes, i.e. object domain and space domain, to fully utilize the data-independency characteristics of graphics processing. Special graphics features such as 3D-to-2D conversion, shadow generation, texturing, and reflection can be easily handled. With the current high density interconnection (MI) technology, it is feasible to implement a 64-processor system to achieve 2.5 billion operations per second, a performance needed in most advanced graphics applications.

  4. High performance computing in science and engineering '09: transactions of the High Performance Computing Center, Stuttgart (HLRS) 2009

    National Research Council Canada - National Science Library

    Nagel, Wolfgang E; Kröner, Dietmar; Resch, Michael

    2010-01-01

    ...), NIC/JSC (Jülich), and LRZ (Munich). As part of that strategic initiative, in May 2009 NIC/JSC already installed the first phase of the GCS HPC Tier-0 resources, an IBM Blue Gene/P with roughly 300,000 cores, this time in Jülich. With that, the GCS provides the most powerful high-performance computing infrastructure in Europe alread...

  5. Design and evaluation of dynamic replication strategies for a high-performance data grid

    International Nuclear Information System (INIS)

    Ranganathan, K.; Foster, I.

    2001-01-01

    Physics experiments that generate large amounts of data need to be able to share it with researchers around the world. High performance grids facilitate the distribution of such data to geographically remote places. Dynamic replication can be used as a technique to reduce bandwidth consumption and access latency when accessing these huge amounts of data. The authors describe a simulation framework they have developed to model a grid scenario, which enables comparative studies of alternative dynamic replication strategies. The authors present preliminary results obtained with this simulator, in which they evaluate the performance of six different replication strategies for three different kinds of access patterns. The simulation results show that the best strategy has significant savings in latency and bandwidth consumption if the access patterns contain a moderate amount of geographical locality
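
    A toy version of such a study, comparing a no-replication policy with a cache-at-client policy under an access pattern with geographical locality, can be sketched as follows; the topology, file sizes and workload are invented and far simpler than the paper's simulator.

```python
# Toy comparison of two replication strategies ("never replicate" vs.
# "cache at the requesting site with LRU eviction") under an access pattern
# with locality. Topology, file sizes and workload are invented.
import random
from collections import OrderedDict

N_FILES, FILE_SIZE_GB, CACHE_GB = 1000, 2, 200
N_SITES, N_REQUESTS = 5, 20000

def workload(rng):
    """Each site prefers a small 'local' subset of files (geographical locality)."""
    for _ in range(N_REQUESTS):
        site = rng.randrange(N_SITES)
        if rng.random() < 0.8:
            f = site * 50 + rng.randrange(50)        # hot local subset
        else:
            f = rng.randrange(N_FILES)               # occasional remote interest
        yield site, f

def simulate(use_cache, seed=1):
    rng = random.Random(seed)
    caches = [OrderedDict() for _ in range(N_SITES)] # per-site LRU cache
    wan_traffic = 0
    for site, f in workload(rng):
        cache = caches[site]
        if use_cache and f in cache:
            cache.move_to_end(f)                     # cache hit: no WAN transfer
            continue
        wan_traffic += FILE_SIZE_GB                  # fetch from the source site
        if use_cache:
            cache[f] = True
            if len(cache) * FILE_SIZE_GB > CACHE_GB:
                cache.popitem(last=False)            # evict the least recently used file
    return wan_traffic

print("WAN traffic, no replication :", simulate(False), "GB")
print("WAN traffic, LRU replication:", simulate(True), "GB")
```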

  6. Overview of Parallel Platforms for Common High Performance Computing

    Directory of Open Access Journals (Sweden)

    T. Fryza

    2012-04-01

    Full Text Available The paper deals with various parallel platforms used for high performance computing in the signal processing domain. More precisely, the methods exploiting multicore central processing units, such as the Message Passing Interface and OpenMP, are taken into account. The properties of the programming methods are experimentally demonstrated by implementing a fast Fourier transform and a discrete cosine transform, and they are compared with the possibilities of MATLAB's built-in functions and of Texas Instruments digital signal processors with very long instruction word architectures. New FFT and DCT implementations were proposed and tested. The implementations were compared with CPU-based computing methods and with the Texas Instruments digital signal processing library on C6747 floating-point DSPs. An optimal combination of computing methods in the signal processing domain, together with the implementation of new, fast routines, is proposed as well.
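
    In the same spirit, the sketch below times one batch of independent FFTs sequentially and with a process pool; it only illustrates how such platform comparisons are set up and does not reproduce the paper's C/OpenMP/MPI or DSP measurements.

```python
# Timing the same batch of FFTs sequentially and with a process pool -- a
# small illustration of how platform comparisons like the one above are set up.
import time
from concurrent.futures import ProcessPoolExecutor

import numpy as np

def block_fft(block):
    return np.abs(np.fft.fft(block))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    blocks = [rng.random(2 ** 16) for _ in range(256)]   # independent signal blocks

    t0 = time.perf_counter()
    _ = [block_fft(b) for b in blocks]                   # sequential baseline
    t_seq = time.perf_counter() - t0

    t0 = time.perf_counter()
    with ProcessPoolExecutor() as ex:                    # one process per CPU core
        _ = list(ex.map(block_fft, blocks, chunksize=16))
    t_par = time.perf_counter() - t0

    print(f"sequential: {t_seq:.3f} s   process pool: {t_par:.3f} s")
```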

  7. High performance computing and communications: FY 1995 implementation plan

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1994-04-01

    The High Performance Computing and Communications (HPCC) Program was formally established following passage of the High Performance Computing Act of 1991, signed on December 9, 1991. Ten federal agencies, in collaboration with scientists and managers from US industry, universities, and laboratories, have developed the HPCC Program to meet the challenges of advancing computing and associated communications technologies and practices. This plan provides a detailed description of the agencies' HPCC implementation plans for FY 1994 and FY 1995. This Implementation Plan contains three additional sections. Section 3 provides an overview of the HPCC Program definition and organization. Section 4 contains a breakdown of the five major components of the HPCC Program, with an emphasis on the overall directions and milestones planned for each one. Section 5 provides a detailed look at HPCC Program activities within each agency. Although the Department of Education is an official HPCC agency, its current funding and reporting of crosscut activities goes through the Committee on Education and Health Resources, not the HPCC Program. For this reason the Implementation Plan covers nine HPCC agencies.

  8. High Performance Computing - Power Application Programming Interface Specification.

    Energy Technology Data Exchange (ETDEWEB)

    Laros, James H.,; Kelly, Suzanne M.; Pedretti, Kevin; Grant, Ryan; Olivier, Stephen Lecler; Levenhagen, Michael J.; DeBonis, David

    2014-08-01

    Measuring and controlling the power and energy consumption of high performance computing systems by various components in the software stack is an active research area [13, 3, 5, 10, 4, 21, 19, 16, 7, 17, 20, 18, 11, 1, 6, 14, 12]. Implementations in lower level software layers are beginning to emerge in some production systems, which is very welcome. To be most effective, a portable interface to measurement and control features would significantly facilitate participation by all levels of the software stack. We present a proposal for a standard power Application Programming Interface (API) that endeavors to cover the entire software space, from generic hardware interfaces to the input from the computer facility manager.
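
    The record proposes a portable API rather than a concrete implementation; as one illustration of the kind of measurement such an API would standardize, the sketch below reads the package energy counter exposed by the Linux powercap (RAPL) interface, assuming a node that provides it and read permission on the counter:

        import time
        from pathlib import Path

        # Package-level energy counter exposed by the Linux powercap framework on many
        # Intel systems (an assumption about the node; requires read permission).
        ENERGY_FILE = Path("/sys/class/powercap/intel-rapl:0/energy_uj")

        def read_energy_uj():
            return int(ENERGY_FILE.read_text())

        def average_power_watts(interval_s=1.0):
            """Estimate average package power over an interval from two energy readings."""
            e0 = read_energy_uj()
            time.sleep(interval_s)
            e1 = read_energy_uj()
            # The counter eventually wraps around; wrap handling is omitted for brevity.
            return (e1 - e0) / 1e6 / interval_s   # microjoules -> joules -> watts

        if __name__ == "__main__":
            if ENERGY_FILE.exists():
                print(f"average package power: {average_power_watts():.1f} W")
            else:
                print("powercap/RAPL counter not available on this node")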

  9. Benchmarking high performance computing architectures with CMS’ skeleton framework

    Science.gov (United States)

    Sexton-Kennedy, E.; Gartung, P.; Jones, C. D.

    2017-10-01

    In 2012 CMS evaluated which underlying concurrency technology would be the best to use for its multi-threaded framework. The available technologies were evaluated on the high throughput computing systems dominating the resources in use at that time. A skeleton framework benchmarking suite that emulates the tasks performed within a CMSSW application was used to select Intel's Threading Building Blocks library, based on the measured overheads in both memory and CPU on the different technologies benchmarked. In 2016 CMS will get access to high performance computing resources that use new many-core architectures; machines such as Cori Phase 1&2, Theta, Mira. Because of this we have revived the 2012 benchmark to test its performance and conclusions on these new architectures. This talk will discuss the results of this exercise.

  10. Southampton uni's computer whizzes develop "mini" grid

    CERN Multimedia

    Sherriff, Lucy

    2006-01-01

    "In a bid to help its students explore the potential of grid computing, the University of Southampton's Computer Science department has developed what it calls a "lightweight grid". The system has been designed to allow students to experiment with grid technology without the complexity of inherent security concerns of the real thing. (1 page)

  11. Simple, parallel, high-performance virtual machines for extreme computations

    International Nuclear Information System (INIS)

    Chokoufe Nejad, Bijan; Ohl, Thorsten; Reuter, Jurgen

    2014-11-01

    We introduce a high-performance virtual machine (VM) written in a numerically fast language like Fortran or C to evaluate very large expressions. We discuss the general concept of how to perform computations in terms of a VM and present specifically a VM that is able to compute tree-level cross sections for any number of external legs, given the corresponding byte code from the optimal matrix element generator, O'Mega. Furthermore, this approach allows us to formulate the parallel computation of a single phase space point in a simple and obvious way. We hereby analyze the scaling behaviour with multiple threads as well as the benefits and drawbacks that are introduced with this method. Our implementation of a VM can run faster than the corresponding native, compiled code for certain processes and compilers, especially for very high multiplicities, and in general has runtimes of the same order of magnitude. By avoiding the tedious compile and link steps, which may fail for source code files of gigabyte sizes, new processes or complex higher order corrections that are currently out of reach could be evaluated with a VM given enough computing power.
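
    The general idea of evaluating an expression from byte code on a stack machine can be illustrated with a deliberately tiny Python toy; the instruction set below is invented for the example and is unrelated to the actual O'Mega byte code:

        # Toy stack-based virtual machine: byte code is a list of (opcode, operand) pairs.
        PUSH_CONST, LOAD_VAR, ADD, MUL = range(4)

        def run(bytecode, variables):
            stack = []
            for op, arg in bytecode:
                if op == PUSH_CONST:
                    stack.append(arg)
                elif op == LOAD_VAR:
                    stack.append(variables[arg])
                elif op == ADD:
                    b, a = stack.pop(), stack.pop()
                    stack.append(a + b)
                elif op == MUL:
                    b, a = stack.pop(), stack.pop()
                    stack.append(a * b)
            return stack.pop()

        # Evaluate 2*x + 3*y without compiling the expression into native code.
        program = [(PUSH_CONST, 2.0), (LOAD_VAR, "x"), (MUL, None),
                   (PUSH_CONST, 3.0), (LOAD_VAR, "y"), (MUL, None),
                   (ADD, None)]
        print(run(program, {"x": 1.5, "y": 2.0}))   # -> 9.0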

  12. Heads in the Cloud: A Primer on Neuroimaging Applications of High Performance Computing

    Directory of Open Access Journals (Sweden)

    Anwar S. Shatil

    2015-01-01

    With larger data sets and more sophisticated analyses, it is becoming increasingly common for neuroimaging researchers to push (or exceed) the limitations of standalone computer workstations. Nonetheless, although high-performance computing platforms such as clusters, grids and clouds are already in routine use by a small handful of neuroimaging researchers to increase their storage and/or computational power, the adoption of such resources by the broader neuroimaging community remains relatively uncommon. Therefore, the goal of the current manuscript is to: 1) inform prospective users about the similarities and differences between computing clusters, grids and clouds; 2) highlight their main advantages; 3) discuss when it may (and may not) be advisable to use them; 4) review some of their potential problems and barriers to access; and finally 5) give a few practical suggestions for how interested new users can start analyzing their neuroimaging data using cloud resources. Although the aim of cloud computing is to hide most of the complexity of the infrastructure management from end-users, we recognize that this can still be an intimidating area for cognitive neuroscientists, psychologists, neurologists, radiologists, and other neuroimaging researchers lacking a strong computational background. Therefore, with this in mind, we have aimed to provide a basic introduction to cloud computing in general (including some of the basic terminology, computer architectures, infrastructure and service models, etc.), a practical overview of the benefits and drawbacks, and a specific focus on how cloud resources can be used for various neuroimaging applications.

  13. Heads in the Cloud: A Primer on Neuroimaging Applications of High Performance Computing.

    Science.gov (United States)

    Shatil, Anwar S; Younas, Sohail; Pourreza, Hossein; Figley, Chase R

    2015-01-01

    With larger data sets and more sophisticated analyses, it is becoming increasingly common for neuroimaging researchers to push (or exceed) the limitations of standalone computer workstations. Nonetheless, although high-performance computing platforms such as clusters, grids and clouds are already in routine use by a small handful of neuroimaging researchers to increase their storage and/or computational power, the adoption of such resources by the broader neuroimaging community remains relatively uncommon. Therefore, the goal of the current manuscript is to: 1) inform prospective users about the similarities and differences between computing clusters, grids and clouds; 2) highlight their main advantages; 3) discuss when it may (and may not) be advisable to use them; 4) review some of their potential problems and barriers to access; and finally 5) give a few practical suggestions for how interested new users can start analyzing their neuroimaging data using cloud resources. Although the aim of cloud computing is to hide most of the complexity of the infrastructure management from end-users, we recognize that this can still be an intimidating area for cognitive neuroscientists, psychologists, neurologists, radiologists, and other neuroimaging researchers lacking a strong computational background. Therefore, with this in mind, we have aimed to provide a basic introduction to cloud computing in general (including some of the basic terminology, computer architectures, infrastructure and service models, etc.), a practical overview of the benefits and drawbacks, and a specific focus on how cloud resources can be used for various neuroimaging applications.

  14. Heads in the Cloud: A Primer on Neuroimaging Applications of High Performance Computing

    Science.gov (United States)

    Shatil, Anwar S.; Younas, Sohail; Pourreza, Hossein; Figley, Chase R.

    2015-01-01

    With larger data sets and more sophisticated analyses, it is becoming increasingly common for neuroimaging researchers to push (or exceed) the limitations of standalone computer workstations. Nonetheless, although high-performance computing platforms such as clusters, grids and clouds are already in routine use by a small handful of neuroimaging researchers to increase their storage and/or computational power, the adoption of such resources by the broader neuroimaging community remains relatively uncommon. Therefore, the goal of the current manuscript is to: 1) inform prospective users about the similarities and differences between computing clusters, grids and clouds; 2) highlight their main advantages; 3) discuss when it may (and may not) be advisable to use them; 4) review some of their potential problems and barriers to access; and finally 5) give a few practical suggestions for how interested new users can start analyzing their neuroimaging data using cloud resources. Although the aim of cloud computing is to hide most of the complexity of the infrastructure management from end-users, we recognize that this can still be an intimidating area for cognitive neuroscientists, psychologists, neurologists, radiologists, and other neuroimaging researchers lacking a strong computational background. Therefore, with this in mind, we have aimed to provide a basic introduction to cloud computing in general (including some of the basic terminology, computer architectures, infrastructure and service models, etc.), a practical overview of the benefits and drawbacks, and a specific focus on how cloud resources can be used for various neuroimaging applications. PMID:27279746

  15. NCI's High Performance Computing (HPC) and High Performance Data (HPD) Computing Platform for Environmental and Earth System Data Science

    Science.gov (United States)

    Evans, Ben; Allen, Chris; Antony, Joseph; Bastrakova, Irina; Gohar, Kashif; Porter, David; Pugh, Tim; Santana, Fabiana; Smillie, Jon; Trenham, Claire; Wang, Jingbo; Wyborn, Lesley

    2015-04-01

    The National Computational Infrastructure (NCI) has established a powerful and flexible in-situ petascale computational environment to enable both high performance computing and Data-intensive Science across a wide spectrum of national environmental and earth science data collections - in particular climate, observational data and geoscientific assets. This paper examines 1) the computational environments that supports the modelling and data processing pipelines, 2) the analysis environments and methods to support data analysis, and 3) the progress so far to harmonise the underlying data collections for future interdisciplinary research across these large volume data collections. NCI has established 10+ PBytes of major national and international data collections from both the government and research sectors based on six themes: 1) weather, climate, and earth system science model simulations, 2) marine and earth observations, 3) geosciences, 4) terrestrial ecosystems, 5) water and hydrology, and 6) astronomy, social and biosciences. Collectively they span the lithosphere, crust, biosphere, hydrosphere, troposphere, and stratosphere. The data is largely sourced from NCI's partners (which include the custodians of many of the major Australian national-scale scientific collections), leading research communities, and collaborating overseas organisations. New infrastructures created at NCI mean the data collections are now accessible within an integrated High Performance Computing and Data (HPC-HPD) environment - a 1.2 PFlop supercomputer (Raijin), a HPC class 3000 core OpenStack cloud system and several highly connected large-scale high-bandwidth Lustre filesystems. The hardware was designed at inception to ensure that it would allow the layered software environment to flexibly accommodate the advancement of future data science. New approaches to software technology and data models have also had to be developed to enable access to these large and exponentially

  16. High energy physics and grid computing

    International Nuclear Information System (INIS)

    Yu Chuansong

    2004-01-01

    The status of the new generation computing environment of the high energy physics experiments is introduced briefly in this paper. The development of the high energy physics experiments and the new computing requirements of the experiments are presented. The blueprint of the new generation computing environment of the LHC experiments, the history of Grid computing, the R and D status of high energy physics grid computing technology, and the network bandwidth needed by the high energy physics grid and its development are described. The grid computing research in the Chinese high energy physics community is introduced at last. (authors)

  17. Proposal for grid computing for nuclear applications

    International Nuclear Information System (INIS)

    Faridah Mohamad Idris; Wan Ahmad Tajuddin Wan Abdullah; Zainol Abidin Ibrahim; Zukhaimira Zolkapli

    2013-01-01

    Full-text: The use of computer clusters for computational sciences, including computational physics, is vital as it provides the computing power needed to crunch big numbers at a faster rate. In compute-intensive applications that require high resolution, such as Monte Carlo simulation, the use of computer clusters in a grid form, which supplies computational power to any node within the grid that needs it, has now become a necessity. In this paper, we describe how clusters running a specific application can use resources within the grid to speed up the computing process. (author)

  18. Trends in high-performance computing for engineering calculations.

    Science.gov (United States)

    Giles, M B; Reguly, I

    2014-08-13

    High-performance computing has evolved remarkably over the past 20 years, and that progress is likely to continue. However, in recent years, this progress has been achieved through greatly increased hardware complexity with the rise of multicore and manycore processors, and this is affecting the ability of application developers to achieve the full potential of these systems. This article outlines the key developments on the hardware side, both in the recent past and in the near future, with a focus on two key issues: energy efficiency and the cost of moving data. It then discusses the much slower evolution of system software, and the implications of all of this for application developers. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  19. Power/energy use cases for high performance computing

    Energy Technology Data Exchange (ETDEWEB)

    Laros, James H. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Kelly, Suzanne M. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hammond, Steven [National Renewable Energy Lab. (NREL), Golden, CO (United States); Elmore, Ryan [National Renewable Energy Lab. (NREL), Golden, CO (United States); Munch, Kristin [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2013-12-01

    Power and Energy have been identified as a first order challenge for future extreme scale high performance computing (HPC) systems. In practice the breakthroughs will need to be provided by the hardware vendors. But to make the best use of these solutions in an HPC environment, periodic tuning by facility operators and software components will likely be required. This document describes the actions and interactions needed to maximize power resources. It strives to cover the entire operational space that an HPC system occupies. The descriptions are presented as formal use cases, as documented in the Unified Modeling Language Specification [1]. The document is intended to provide a common understanding to the HPC community of the necessary management and control capabilities. Assuming a common understanding can be achieved, the next step will be to develop a set of Application Programming Interfaces (APIs) that hardware vendors and software developers could utilize to steer power consumption.

  20. Scalability of DL_POLY on High Performance Computing Platform

    Directory of Open Access Journals (Sweden)

    Mabule Samuel Mabakane

    2017-12-01

    This paper presents a case study on the scalability of several versions of the molecular dynamics code (DL_POLY) performed on South Africa's Centre for High Performance Computing e1350 IBM Linux cluster, Sun system and Lengau supercomputers. Within this study different problem sizes were designed and the same chosen systems were employed in order to test the performance of DL_POLY using weak and strong scalability. It was found that the speed-up results for the small systems were better than those for the large systems on both the Ethernet and Infiniband networks. However, simulations of large systems in DL_POLY performed well using the Infiniband network on the Lengau cluster as compared to the e1350 and Sun supercomputers.

  1. Research Activity in Computational Physics utilizing High Performance Computing: Co-authorship Network Analysis

    Science.gov (United States)

    Ahn, Sul-Ah; Jung, Youngim

    2016-10-01

    The research activities of computational physicists utilizing high performance computing are analyzed by bibliometric approaches. This study aims at providing computational physicists utilizing high-performance computing and policy planners with useful bibliometric results for an assessment of research activities. In order to achieve this purpose, we carried out a co-authorship network analysis of journal articles to assess the research activities of researchers in high-performance computational physics as a case study. For this study, we used journal articles of the Scopus database from Elsevier covering the time period of 2004-2013. We ranked authors in the physics field utilizing high-performance computing by the number of papers published during the ten years from 2004. Finally, we drew the co-authorship network for the 45 top authors and their coauthors, and described some features of the co-authorship network in relation to the author rank. Suggestions for further studies are discussed.

  2. Computational Environments and Analysis methods available on the NCI High Performance Computing (HPC) and High Performance Data (HPD) Platform

    Science.gov (United States)

    Evans, B. J. K.; Foster, C.; Minchin, S. A.; Pugh, T.; Lewis, A.; Wyborn, L. A.; Evans, B. J.; Uhlherr, A.

    2014-12-01

    The National Computational Infrastructure (NCI) has established a powerful in-situ computational environment to enable both high performance computing and data-intensive science across a wide spectrum of national environmental data collections - in particular climate, observational data and geoscientific assets. This paper examines 1) the computational environments that supports the modelling and data processing pipelines, 2) the analysis environments and methods to support data analysis, and 3) the progress in addressing harmonisation of the underlying data collections for future transdisciplinary research that enable accurate climate projections. NCI makes available 10+ PB major data collections from both the government and research sectors based on six themes: 1) weather, climate, and earth system science model simulations, 2) marine and earth observations, 3) geosciences, 4) terrestrial ecosystems, 5) water and hydrology, and 6) astronomy, social and biosciences. Collectively they span the lithosphere, crust, biosphere, hydrosphere, troposphere, and stratosphere. The data is largely sourced from NCI's partners (which include the custodians of many of the national scientific records), major research communities, and collaborating overseas organisations. The data is accessible within an integrated HPC-HPD environment - a 1.2 PFlop supercomputer (Raijin), a HPC class 3000 core OpenStack cloud system and several highly connected large scale and high-bandwidth Lustre filesystems. This computational environment supports a catalogue of integrated reusable software and workflows from earth system and ecosystem modelling, weather research, satellite and other observed data processing and analysis. To enable transdisciplinary research on this scale, data needs to be harmonised so that researchers can readily apply techniques and software across the corpus of data available and not be constrained to work within artificial disciplinary boundaries. Future challenges will

  3. Implementing an Affordable High-Performance Computing for Teaching-Oriented Computer Science Curriculum

    Science.gov (United States)

    Abuzaghleh, Omar; Goldschmidt, Kathleen; Elleithy, Yasser; Lee, Jeongkyu

    2013-01-01

    With the advances in computing power, high-performance computing (HPC) platforms have had an impact on not only scientific research in advanced organizations but also computer science curriculum in the educational community. For example, multicore programming and parallel systems are highly desired courses in the computer science major. However,…

  4. A Heterogeneous High-Performance System for Computational and Computer Science

    Science.gov (United States)

    2016-11-15

    expand the research infrastructure at the institution but also to enhance the high-performance computing training provided to both undergraduate and... cloud computing, supercomputing, and the availability of cheap memory and storage led to enormous amounts of data to be sifted through in forensic... High-Performance Computing (HPC) tools that can be integrated with existing curricula and support our research to modernize and dramatically advance

  5. Improved visibility computation on massive grid terrains

    NARCIS (Netherlands)

    Fishman, J.; Haverkort, H.J.; Toma, L.; Wolfson, O.; Agrawal, D.; Lu, C.-T.

    2009-01-01

    This paper describes the design and engineering of algorithms for computing visibility maps on massive grid terrains. Given a terrain T, specified by the elevations of points in a regular grid, and given a viewpoint v, the visibility map or viewshed of v is the set of grid points of T that are visible from v.
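
    For orientation only, a naive in-memory Python sketch of the underlying visibility test is shown below; the record's contribution is making this computation I/O-efficient for terrains far larger than memory, which the sketch does not attempt, and the sight-line sampling is a simplifying assumption:

        import numpy as np

        def viewshed(elev, vr, vc, observer_height=2.0):
            """Boolean viewshed of viewpoint (vr, vc) on a small in-memory elevation grid."""
            rows, cols = elev.shape
            vz = elev[vr, vc] + observer_height
            visible = np.zeros_like(elev, dtype=bool)
            visible[vr, vc] = True
            for r in range(rows):
                for c in range(cols):
                    if (r, c) == (vr, vc):
                        continue
                    n = max(abs(r - vr), abs(c - vc))              # samples along the sight line
                    line_r = np.linspace(vr, r, n + 1)[1:-1]
                    line_c = np.linspace(vc, c, n + 1)[1:-1]
                    line_t = np.linspace(0.0, 1.0, n + 1)[1:-1]
                    # Elevation of the straight sight line at each intermediate sample.
                    sight = vz + line_t * (elev[r, c] - vz)
                    terrain = elev[line_r.round().astype(int), line_c.round().astype(int)]
                    visible[r, c] = bool(np.all(sight >= terrain)) if n > 1 else True
            return visible

        terrain = np.random.rand(60, 60) * 50.0                    # toy 60x60 terrain
        print(viewshed(terrain, 30, 30).sum(), "of", terrain.size, "cells visible")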

  6. Cloud Computing and Smart Grids

    Directory of Open Access Journals (Sweden)

    Janina POPEANGĂ

    2012-10-01

    Increasing concern about energy consumption is leading to infrastructure that supports real-time, two-way communication between utilities and consumers, and allows software systems at both ends to control and manage power use. To manage communications to millions of endpoints in a secure, scalable and highly-available environment, and to achieve the twin goals of 'energy conservation' and 'demand response', utilities must extend the same communication network management processes and tools used in the data center to the field. This paper proposes that cloud computing technology, because of its low cost, flexible and redundant architecture and fast response time, has the functionality needed to provide the security, interoperability and performance required for large-scale smart grid applications.

  7. High performance computing environment for multidimensional image analysis.

    Science.gov (United States)

    Rao, A Ravishankar; Cecchi, Guillermo A; Magnasco, Marcelo

    2007-07-10

    The processing of images acquired through microscopy is a challenging task due to the large size of the datasets (several gigabytes) and the fast turnaround time required. If the throughput of the image processing stage is significantly increased, it can have a major impact on microscopy applications. We present a high performance computing (HPC) solution to this problem. This involves decomposing the spatial 3D image into segments that are assigned to unique processors, and matched to the 3D torus architecture of the IBM Blue Gene/L machine. Communication between segments is restricted to the nearest neighbors. When running on a 2 GHz Intel CPU, the task of 3D median filtering on a typical 256 megabyte dataset takes two and a half hours, whereas by using 1024 nodes of Blue Gene, this task can be performed in 18.8 seconds, a 478x speedup. Our parallel solution dramatically improves the performance of image processing, feature extraction and 3D reconstruction tasks. This increased throughput permits biologists to conduct unprecedented large-scale experiments with massive datasets.
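
    A shared-memory Python sketch of the decomposition idea (split the volume into slabs, filter each slab with a one-voxel halo, and reassemble) is given below; the slab count, filter size and use of scipy are assumptions standing in for the torus-mapped MPI implementation on Blue Gene/L described in the record:

        import numpy as np
        from scipy.ndimage import median_filter
        from concurrent.futures import ThreadPoolExecutor

        def filter_slab(volume, z0, z1, halo=1, size=3):
            """Median-filter one z-slab, using a halo of neighbouring slices for correct borders."""
            lo, hi = max(0, z0 - halo), min(volume.shape[0], z1 + halo)
            filtered = median_filter(volume[lo:hi], size=size)
            return filtered[z0 - lo: filtered.shape[0] - (hi - z1)]

        def parallel_median(volume, nslabs=4, size=3):
            edges = np.linspace(0, volume.shape[0], nslabs + 1).astype(int)
            with ThreadPoolExecutor(max_workers=nslabs) as pool:
                parts = list(pool.map(
                    lambda i: filter_slab(volume, edges[i], edges[i + 1], size=size),
                    range(nslabs)))
            return np.concatenate(parts, axis=0)

        vol = np.random.rand(64, 128, 128).astype(np.float32)
        # The slab-wise result matches the whole-volume filter exactly.
        assert np.allclose(parallel_median(vol), median_filter(vol, size=3))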

  8. The Future of Software Engineering for High Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Pope, G [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-07-16

    DOE ASCR requested that from May through mid-July 2015 a study group identify issues and recommend solutions from a software engineering perspective transitioning into the next generation of High Performance Computing. The approach used was to ask some of the DOE complex experts who will be responsible for doing this work to contribute to the study group. The technique used was to solicit elevator speeches: a short and concise write up done as if the author was a speaker with only a few minutes to convince a decision maker of their top issues. Pages 2-18 contain the original texts of the contributed elevator speeches and end notes identifying the 20 contributors. The study group also ranked the importance of each topic, and those scores are displayed with each topic heading. A perfect score (and highest priority) is three, two is medium priority, and one is lowest priority. The highest scoring topic areas were software engineering and testing resources; the lowest scoring area was compliance to DOE standards. The following two paragraphs are an elevator speech summarizing the contributed elevator speeches. Each sentence or phrase in the summary is hyperlinked to its source via a numeral embedded in the text. A risk one liner has also been added to each topic to allow future risk tracking and mitigation.

  9. A checkpoint compression study for high-performance computing systems

    Energy Technology Data Exchange (ETDEWEB)

    Ibtesham, Dewan [Univ. of New Mexico, Albuquerque, NM (United States). Dept. of Computer Science; Ferreira, Kurt B. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States). Scalable System Software Dept.; Arnold, Dorian [Univ. of New Mexico, Albuquerque, NM (United States). Dept. of Computer Science

    2015-02-17

    As high-performance computing systems continue to increase in size and complexity, higher failure rates and increased overheads for checkpoint/restart (CR) protocols have raised concerns about the practical viability of CR protocols for future systems. Previously, compression has proven to be a viable approach for reducing checkpoint data volumes and, thereby, reducing CR protocol overhead, leading to improved application performance. In this article, we further explore compression-based CR optimization by exploring its baseline performance and scaling properties, evaluating whether improved compression algorithms might lead to even better application performance, and comparing checkpoint compression against and alongside other software- and hardware-based optimizations. The highlights of our results are: (1) compression is a very viable CR optimization; (2) generic, text-based compression algorithms appear to perform near optimally for checkpoint data compression and faster compression algorithms will not lead to better application performance; (3) compression-based optimizations fare well against and alongside other software-based optimizations; and (4) while hardware-based optimizations outperform software-based ones, they are not as cost effective.
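
    The basic measurement behind such a study can be sketched in a few lines of Python: compress a synthetic checkpoint buffer with generic compressors and report ratio and throughput. The synthetic state and the choice of zlib/lzma are illustrative assumptions, not the article's benchmarks:

        import time
        import zlib
        import lzma
        import numpy as np

        # Synthetic "checkpoint": a smooth field with a little noise, loosely resembling
        # application state (an assumption; real checkpoints are application-specific).
        state = np.sin(np.linspace(0, 100, 1_000_000)) + 1e-3 * np.random.rand(1_000_000)
        raw = state.astype(np.float64).tobytes()

        for name, compress in (("zlib-1", lambda b: zlib.compress(b, 1)),
                               ("zlib-9", lambda b: zlib.compress(b, 9)),
                               ("lzma", lzma.compress)):
            t0 = time.perf_counter()
            out = compress(raw)
            dt = time.perf_counter() - t0
            print(f"{name:7s} ratio {len(raw)/len(out):5.2f}  "
                  f"throughput {len(raw)/dt/2**20:7.1f} MiB/s")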

  10. Lightweight Provenance Service for High-Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Dai, Dong; Chen, Yong; Carns, Philip; Jenkins, John; Ross, Robert

    2017-09-09

    Provenance describes detailed information about the history of a piece of data, containing the relationships among elements such as users, processes, jobs, and workflows that contribute to the existence of data. Provenance is key to supporting many data management functionalities that are increasingly important in operations such as identifying data sources, parameters, or assumptions behind a given result; auditing data usage; or understanding details about how inputs are transformed into outputs. Despite its importance, however, provenance support is largely underdeveloped in highly parallel architectures and systems. One major challenge is the demanding requirements of providing provenance service in situ. The need to remain lightweight and to be always on often conflicts with the need to be transparent and offer an accurate catalog of details regarding the applications and systems. To tackle this challenge, we introduce a lightweight provenance service, called LPS, for high-performance computing (HPC) systems. LPS leverages a kernel instrument mechanism to achieve transparency and introduces representative execution and flexible granularity to capture comprehensive provenance with controllable overhead. Extensive evaluations and use cases have confirmed its efficiency and usability. We believe that LPS can be integrated into current and future HPC systems to support a variety of data management needs.

  11. Multi-Language Programming Environments for High Performance Java Computing

    Directory of Open Access Journals (Sweden)

    Vladimir Getov

    1999-01-01

    Recent developments in processor capabilities, software tools, programming languages and programming paradigms have brought about new approaches to high performance computing. A steadfast component of this dynamic evolution has been the scientific community's reliance on established scientific packages. As a consequence, programmers of high-performance applications are reluctant to embrace evolving languages such as Java. This paper describes the Java-to-C Interface (JCI) tool which provides application programmers wishing to use Java with immediate accessibility to existing scientific packages. The JCI tool also facilitates rapid development and reuse of existing code. These benefits are provided at minimal cost to the programmer. While beneficial to the programmer, the additional advantages of mixed-language programming in terms of application performance and portability are addressed in detail within the context of this paper. In addition, we discuss how the JCI tool complements other ongoing projects such as IBM's High-Performance Compiler for Java (HPCJ) and IceT's metacomputing environment.

  12. Enabling the ATLAS Experiment at the LHC for High Performance Computing

    CERN Document Server

    AUTHOR|(CDS)2091107; Ereditato, Antonio

    In this thesis, I studied the feasibility of running computer data analysis programs from the Worldwide LHC Computing Grid, in particular large-scale simulations of the ATLAS experiment at the CERN LHC, on current general purpose High Performance Computing (HPC) systems. An approach for integrating HPC systems into the Grid is proposed, which has been implemented and tested on the „Todi” HPC machine at the Swiss National Supercomputing Centre (CSCS). Over the course of the test, more than 500000 CPU-hours of processing time have been provided to ATLAS, which is roughly equivalent to the combined computing power of the two ATLAS clusters at the University of Bern. This showed that current HPC systems can be used to efficiently run large-scale simulations of the ATLAS detector and of the detected physics processes. As a first conclusion of my work, one can argue that, in perspective, running large-scale tasks on a few large machines might be more cost-effective than running on relatively small dedicated com...

  13. Grid computing faces IT industry test

    CERN Multimedia

    Magno, L

    2003-01-01

    Software company Oracle Corp. unveiled its Oracle 10g grid computing platform at the annual OracleWorld user convention in San Francisco. It gave concrete examples of how grid computing can be a viable option outside the scientific community where the concept was born (1 page).

  14. Grid computing infrastructure, service, and applications

    CERN Document Server

    Jie, Wei; Chen, Jinjun

    2009-01-01

    Offering a comprehensive discussion of advances in grid computing, this book summarizes the concepts, methods, technologies, and applications. It covers topics such as philosophy, middleware, architecture, services, and applications. It also includes technical details to demonstrate how grid computing works in the real world

  15. Analysis and Modeling of Social Influence in High Performance Computing Workloads

    KAUST Repository

    Zheng, Shuai

    2011-06-01

    High Performance Computing (HPC) is becoming a common tool in many research areas. Social influence (e.g., project collaboration) among the increasing number of users of HPC systems creates bursty behavior in the underlying workloads. This bursty behavior is increasingly common with the advent of grid computing and cloud computing. Mining this user bursty behavior is important for HPC workload prediction and scheduling, which has a direct impact on overall HPC computing performance. A representative work in this area is the Mixed User Group Model (MUGM), which clusters users according to the resource demand features of their submissions, such as duration time and parallelism. However, MUGM has some difficulties when implemented in real-world systems. First, representing user behaviors by the features of their resource demand is usually difficult. Second, these features are not always available. Third, measuring the similarities among users is not a well-defined problem. In this work, we propose a Social Influence Model (SIM) to identify, analyze, and quantify the level of social influence across HPC users. The advantage of the SIM model is that it finds HPC communities by analyzing user job submission times, thereby avoiding the difficulties of MUGM. An offline algorithm and a fast-converging, computationally-efficient online learning algorithm for identifying social groups are proposed. Both offline and online algorithms are applied to several HPC and grid workloads, including Grid 5000, EGEE 2005 and 2007, and KAUST Supercomputing Lab (KSL) BGP data. From the experimental results, we show the existence of a social graph, which is characterized by a pattern of dominant users and followers. In order to evaluate the effectiveness of the identified user groups, we show that the pattern discovered by the offline algorithm follows a power-law distribution, which is consistent with those observed in mainstream social networks. We finally conclude the thesis and discuss future directions of our work.

  16. FY 1995 Blue Book: High Performance Computing and Communications: Technology for the National Information Infrastructure

    Data.gov (United States)

    Networking and Information Technology Research and Development, Executive Office of the President — The Federal High Performance Computing and Communications HPCC Program was created to accelerate the development of future generations of high performance computers...

  17. High-Performance Compute Infrastructure in Astronomy: 2020 Is Only Months Away

    Science.gov (United States)

    Berriman, B.; Deelman, E.; Juve, G.; Rynge, M.; Vöckler, J. S.

    2012-09-01

    , and so the costs of running applications vary widely according to how they use resources. The cloud is well suited to processing CPU-bound (and memory bound) workflows such as the periodogram code, given the relatively low cost of processing in comparison with I/O operations. I/O-bound applications such as Montage perform best on high-performance clusters with fast networks and parallel file-systems. Science-driven Cyberinfrastructure: Montage has been widely used as a driver application to develop workflow management services, such as task scheduling in distributed environments, designing fault tolerance techniques for job schedulers, and developing workflow orchestration techniques. Running Parallel Applications Across Distributed Cloud Environments: Data processing will eventually take place in parallel distributed across cyber infrastructure environments having different architectures. We have used the Pegasus Work Management System (WMS) to successfully run applications across three very different environments: TeraGrid, OSG (Open Science Grid), and FutureGrid. Provisioning resources across different grids and clouds (also referred to as Sky Computing) involves establishing a distributed environment, where issues of, e.g., remote job submission, data management, and security need to be addressed. This environment also requires building virtual machine images that can run in different environments. Usually, each cloud provides basic images that can be customized with additional software and services. In most of our work, we provisioned compute resources using a custom application, called Wrangler. Pegasus WMS abstracts the architectures of the compute environments away from the end-user, and can be considered a first-generation tool suitable for scientists to run their applications on disparate environments.

  18. The LHC Computing Grid Project

    CERN Multimedia

    Åkesson, T

    In the last ATLAS eNews I reported on the preparations for the LHC Computing Grid Project (LCGP). Significant LCGP resources were mobilized during the summer, and there have been numerous iterations on the formal paper to put forward to the CERN Council to establish the LCGP. ATLAS, and also the other LHC-experiments, has been very active in this process to maximally influence the outcome. Our main priorities were to ensure that the global aspects are properly taken into account, that the CERN non-member states are also included in the structure, that the experiments are properly involved in the LCGP execution and that the LCGP takes operative responsibility during the data challenges. A Project Launch Board (PLB) was active from the end of July until the 10th of September. It was chaired by Hans Hoffmann and had the IT division leader as secretary. Each experiment had a representative (me for ATLAS), and the large CERN member states were each represented while the smaller were represented as clusters ac...

  19. Grid computing in large pharmaceutical molecular modeling.

    Science.gov (United States)

    Claus, Brian L; Johnson, Stephen R

    2008-07-01

    Most major pharmaceutical companies have employed grid computing to expand their compute resources with the intention of minimizing additional financial expenditure. Historically, one of the issues restricting widespread utilization of the grid resources in molecular modeling is the limited set of suitable applications amenable to coarse-grained parallelization. Recent advances in grid infrastructure technology coupled with advances in application research and redesign will enable fine-grained parallel problems, such as quantum mechanics and molecular dynamics, which were previously inaccessible to the grid environment. This will enable new science as well as increase resource flexibility to load balance and schedule existing workloads.

  20. SCEAPI: A unified Restful Web API for High-Performance Computing

    Science.gov (United States)

    Rongqiang, Cao; Haili, Xiao; Shasha, Lu; Yining, Zhao; Xiaoning, Wang; Xuebin, Chi

    2017-10-01

    The development of scientific computing is increasingly moving to collaborative web and mobile applications. All these applications need a high-quality programming interface for accessing heterogeneous computing resources consisting of clusters, grid computing or cloud computing. In this paper, we introduce our high-performance computing environment that integrates computing resources from 16 HPC centers across China. Then we present a bundle of web services called SCEAPI and describe how it can be used to access HPC resources with the HTTP or HTTPS protocols. We discuss SCEAPI from several aspects including architecture, implementation and security, and address specific challenges in designing compatible interfaces and protecting sensitive data. We describe the functions of SCEAPI including authentication, file transfer and job management for creating, submitting and monitoring jobs, and how to use SCEAPI in an easy-to-use way. Finally, we discuss how to exploit more HPC resources quickly for the ATLAS experiment by implementing a custom ARC compute element based on SCEAPI, and our work shows that SCEAPI is an easy-to-use and effective solution for extending opportunistic HPC resources.
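
    A hedged sketch of how a client might talk to such a RESTful service is shown below; the base URL, endpoint paths, token handling and payload fields are hypothetical placeholders and are not taken from the SCEAPI specification:

        import time
        import requests

        BASE = "https://sceapi.example.org/api/v1"      # hypothetical endpoint, not the real SCEAPI URL
        TOKEN = "..."                                   # placeholder; obtained from the authentication service

        def submit_and_wait(script_path):
            """Submit a batch job over HTTPS and poll until it finishes (illustrative field names)."""
            headers = {"Authorization": f"Bearer {TOKEN}"}
            with open(script_path, "rb") as f:
                job = requests.post(f"{BASE}/jobs", headers=headers,
                                    files={"script": f},
                                    data={"queue": "work", "nodes": 2}).json()
            while True:
                state = requests.get(f"{BASE}/jobs/{job['id']}", headers=headers).json()["state"]
                if state in ("FINISHED", "FAILED"):
                    return state
                time.sleep(30)

        print(submit_and_wait("run_atlas_sim.sh"))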

  1. Multiobjective Variable Neighborhood Search algorithm for scheduling independent jobs on computational grid

    Directory of Open Access Journals (Sweden)

    S. Selvi

    2015-07-01

    Grid computing solves high performance and high-throughput computing problems by sharing resources, ranging from personal computers to supercomputers, distributed around the world. As grid environments facilitate distributed computation, the scheduling of grid jobs has become an important issue. In this paper, an investigation of implementing the Multiobjective Variable Neighborhood Search (MVNS) algorithm for scheduling independent jobs on a computational grid is carried out. The performance of the proposed algorithm has been evaluated against the Min–Min algorithm, Simulated Annealing (SA) and the Greedy Randomized Adaptive Search Procedure (GRASP) algorithm. Simulation results show that the MVNS algorithm generally performs better than the other metaheuristic methods.
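
    For reference, the Min–Min baseline against which MVNS is compared can be sketched in a few lines of Python; the expected-time-to-compute matrix is an invented example, and only makespan is considered rather than the multiobjective formulation:

        def min_min(etc):
            """Min-Min heuristic. etc[j][m] = expected time to compute job j on machine m."""
            n_jobs, n_machines = len(etc), len(etc[0])
            ready = [0.0] * n_machines                  # machine availability times
            unscheduled, schedule = set(range(n_jobs)), []
            while unscheduled:
                # For each unscheduled job, consider its earliest completion time over all
                # machines; pick the job whose minimum completion time is smallest and assign it there.
                ct, j, m = min((ready[m] + etc[j][m], j, m)
                               for j in unscheduled for m in range(n_machines))
                ready[m] = ct
                unscheduled.remove(j)
                schedule.append((j, m))
            return schedule, max(ready)                 # assignment and makespan

        etc = [[14, 16, 9], [13, 19, 18], [11, 13, 19], [13, 8, 17], [12, 13, 10]]
        print(min_min(etc))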

  2. Accessible high performance computing solutions for near real-time image processing for time critical applications

    Science.gov (United States)

    Bielski, Conrad; Lemoine, Guido; Syryczynski, Jacek

    2009-09-01

    High Performance Computing (HPC) hardware solutions such as grid computing and General-Purpose computing on Graphics Processing Units (GPGPU) are now accessible to users with general computing needs. Grid computing infrastructures in the form of computing clusters or blades are becoming commonplace and GPGPU solutions that leverage the processing power of the video card are quickly being integrated into personal workstations. Our interest in these HPC technologies stems from the need to produce near real-time maps from a combination of pre- and post-event satellite imagery in support of post-disaster management. Faster processing provides a twofold gain in this situation: 1. critical information can be provided faster and 2. more elaborate automated processing can be performed prior to providing the critical information. In our particular case, we test the use of the PANTEX index, which is based on analysis of image textural measures extracted using anisotropic, rotation-invariant GLCM statistics. The use of this index, applied in a moving window, has been shown to successfully identify built-up areas in remotely sensed imagery. Built-up index image masks are important input to the structuring of damage assessment interpretation because they help optimise the workload. The performance of computing the PANTEX workflow is compared on two different HPC hardware architectures: (1) a blade server with 4 blades, each having dual quad-core CPUs and (2) a CUDA-enabled GPU workstation. The reference platform is a dual-CPU quad-core workstation, and the PANTEX workflow total computing time is measured. Furthermore, as part of a qualitative evaluation, the differences in setting up and configuring the various hardware solutions and the related software coding effort are presented.
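
    The texture measure underlying such an index can be illustrated with a small Python sketch that builds a grey-level co-occurrence matrix (GLCM) for one window and computes its contrast; the quantisation, offset and window size are assumptions, and this is not the PANTEX implementation:

        import numpy as np

        def glcm(window, levels=16, dr=0, dc=1):
            """Symmetric, normalised grey-level co-occurrence matrix for one pixel offset."""
            q = (window.astype(float) / window.max() * (levels - 1)).astype(int)
            m = np.zeros((levels, levels))
            rows, cols = q.shape
            for r in range(rows - dr):
                for c in range(cols - dc):
                    m[q[r, c], q[r + dr, c + dc]] += 1
            m = m + m.T                                  # count both (i, j) and (j, i) pairs (symmetric GLCM)
            return m / m.sum()

        def contrast(p):
            i, j = np.indices(p.shape)
            return float(np.sum(p * (i - j) ** 2))       # classic GLCM contrast statistic

        window = np.random.randint(0, 255, (21, 21))     # one 21x21 moving-window sample
        print(contrast(glcm(window)))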

  3. COMPUTERS: Teraflops for Europe; EEC Working Group on High Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Anon.

    1991-03-15

    In little more than a decade, simulation on high performance computers has become an essential tool for theoretical physics, capable of solving a vast range of crucial problems inaccessible to conventional analytic mathematics. In many ways, computer simulation has become the calculus for interacting many-body systems, a key to the study of transitions from isolated to collective behaviour.

  4. COMPUTERS: Teraflops for Europe; EEC Working Group on High Performance Computing

    International Nuclear Information System (INIS)

    Anon.

    1991-01-01

    In little more than a decade, simulation on high performance computers has become an essential tool for theoretical physics, capable of solving a vast range of crucial problems inaccessible to conventional analytic mathematics. In many ways, computer simulation has become the calculus for interacting many-body systems, a key to the study of transitions from isolated to collective behaviour

  5. High performance computing network for cloud environment using simulators

    OpenAIRE

    Singh, N. Ajith; Hemalatha, M.

    2012-01-01

    Cloud computing is the next generation of computing. Adopting cloud computing is like signing up for a new kind of website: the GUI that controls the cloud computing makes it possible to directly control the hardware resources and your application. The difficult part of cloud computing is deploying it in a real environment. It is difficult to know the exact cost and resource requirements until the service is actually bought, or whether it will support the existing applications that are available on traditional...

  6. High performance parallel computing of flows in complex geometries: I. Methods

    International Nuclear Information System (INIS)

    Gourdain, N; Gicquel, L; Montagnac, M; Vermorel, O; Staffelbach, G; Garcia, M; Boussuge, J-F; Gazaix, M; Poinsot, T

    2009-01-01

    Efficient numerical tools, coupled with high-performance computers, have become a key element of the design process in the fields of energy supply and transportation. However, flow phenomena that occur in complex systems such as gas turbines and aircraft are still not well understood, mainly because of the limitations of the models that are needed. In fact, most computational fluid dynamics (CFD) predictions as found today in industry focus on a reduced or simplified version of the real system (such as a periodic sector) and are usually solved with a steady-state assumption. This paper shows how to overcome such barriers and how such a new challenge can be addressed by developing flow solvers running on high-end computing platforms, using thousands of computing cores. Parallel strategies used by modern flow solvers are discussed with particular emphasis on mesh partitioning, load balancing and communication. Two examples are used to illustrate these concepts: a multi-block structured code and an unstructured code. Parallel computing strategies used with both flow solvers are detailed and compared. This comparison indicates that mesh partitioning and load balancing are more straightforward with unstructured grids than with multi-block structured meshes. However, the mesh-partitioning stage can be challenging for unstructured grids, mainly due to memory limitations of the newly developed massively parallel architectures. Finally, detailed investigations show that the impact of mesh partitioning on the numerical CFD solutions, due to rounding errors and block splitting, may be of importance and should be accurately addressed before qualifying massively parallel CFD tools for routine industrial use.
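
    The load-balancing concern mentioned above can be illustrated with a toy Python sketch that partitions a 1D array of weighted cells into contiguous, roughly equal-work blocks; real mesh partitioning is multi-dimensional and graph-based (handled by dedicated partitioning libraries), so this only conveys the idea:

        import numpy as np

        def partition_1d(weights, nparts):
            """Contiguous partition: cut where cumulative work crosses equal shares of the total."""
            csum = np.cumsum(weights)
            targets = csum[-1] * np.arange(1, nparts) / nparts
            cuts = [0] + list(np.searchsorted(csum, targets) + 1) + [len(weights)]
            return [(int(cuts[k]), int(cuts[k + 1])) for k in range(nparts)]

        # Cells in a refined region cost more (a made-up cost model for illustration).
        weights = np.ones(1000)
        weights[400:600] = 5.0
        parts = partition_1d(weights, 4)
        print(parts, [weights[a:b].sum() for a, b in parts])   # per-block work is balanced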

  7. Adaptively detecting changes in Autonomic Grid Computing

    KAUST Repository

    Zhang, Xiangliang; Germain, Cécile; Sebag, Michèle

    2010-01-01

    Detecting changes is a common issue in many application fields due to the non-stationary distribution of the applicative data, e.g., sensor network signals, web logs and grid running logs. Toward Autonomic Grid Computing, adaptively detecting

  8. EU grid computing effort takes on malaria

    CERN Multimedia

    Lawrence, Stacy

    2006-01-01

    Malaria is the world's most common parasitic infection, affecting more than 500 million people annually and killing more than 1 million. In order to help combat malaria, CERN has launched a grid computing effort (1 page)

  9. VIP visit of LHC Computing Grid Project

    CERN Multimedia

    Krajewski, Yann Tadeusz

    2015-01-01

    VIP visit of LHC Computing Grid Project with Dr.-Ing. Tarek Kamel [Senior Advisor to the President for Government Engagement, ICANN Geneva Office] and Dr Nigel Hickson [VP, IGO Engagement, ICANN Geneva Office]

  10. Grid computing in high-energy physics

    International Nuclear Information System (INIS)

    Bischof, R.; Kuhn, D.; Kneringer, E.

    2003-01-01

    Full text: The future high energy physics experiments are characterized by an enormous amount of data delivered by the large detectors presently under construction, e.g. at the Large Hadron Collider, and by a large number of scientists (several thousands) requiring simultaneous access to the resulting experimental data. Since it seems unrealistic to provide the necessary computing and storage resources at one single place (e.g. CERN), the concept of grid computing, i.e. the use of distributed resources, will be chosen. The DataGrid project (under the leadership of CERN) develops, based on the Globus toolkit, the software necessary for computation and analysis of shared large-scale databases in a grid structure. The high energy physics group Innsbruck participates with several resources in the DataGrid test bed. In this presentation our experience as grid users and resource providers is summarized. In cooperation with the local IT-center (ZID) we installed a flexible grid system which uses PCs (at the moment 162) in students' labs during nights, weekends and holidays, which is especially used to compare different systems (local resource managers, other grid software e.g. from the Nordugrid project) and to supply a test bed for the future Austrian Grid (AGrid). (author)

  11. Building a cluster computer for the computing grid of tomorrow

    International Nuclear Information System (INIS)

    Wezel, J. van; Marten, H.

    2004-01-01

    The Grid Computing Centre Karlsruhe takes part in the development, test and deployment of hardware and cluster infrastructure, grid computing middleware, and applications for particle physics. The construction of a large cluster computer with thousands of nodes and several PB of data storage capacity is a major task and focus of research. CERN-based accelerator experiments will use GridKa, one of only 8 worldwide Tier-1 computing centers, for their huge computing demands. Computing and storage are already provided for several other running physics experiments on the exponentially expanding cluster. (orig.)

  12. A Grid-Based Cyber Infrastructure for High Performance Chemical Dynamics Simulations

    Directory of Open Access Journals (Sweden)

    Khadka Prashant

    2008-10-01

    Chemical dynamics simulation is an effective means to study atomic level motions of molecules, collections of molecules, liquids, surfaces, interfaces of materials, and chemical reactions. To make chemical dynamics simulations globally accessible to a broad range of users, a cyber infrastructure was recently developed that provides an online portal to VENUS, a popular chemical dynamics simulation program package, to allow people to submit simulation jobs that will be executed on the web server machine. In this paper, we report new developments of the cyber infrastructure for the improvement of its quality of service, by dispatching the submitted simulation jobs from the web server machine onto a cluster of workstations for execution, and by adding an animation tool, which is optimized for animating the simulation results. The separation of the server machine from the simulation-running machines improves the service quality by increasing the capacity to serve more requests simultaneously with even reduced web response time, and allows the execution of large scale, time-consuming simulation jobs on the powerful workstation cluster. With the addition of an animation tool, the cyber infrastructure automatically converts, upon the selection of the user, some simulation results into an animation file that can be viewed on usual web browsers without requiring installation of any special software on the user's computer. Since animation is essential for understanding the results of chemical dynamics simulations, this animation capability provides a better way of understanding the details of the chemical dynamics. By combining computing resources at locations under different administrative controls, this cyber infrastructure constitutes a grid environment providing physically and administratively distributed functionalities through a single easy-to-use online portal.

  13. Department of Energy research in utilization of high-performance computers

    International Nuclear Information System (INIS)

    Buzbee, B.L.; Worlton, W.J.; Michael, G.; Rodrigue, G.

    1980-08-01

    Department of Energy (DOE) and other Government research laboratories depend on high-performance computer systems to accomplish their programmatic goals. As the most powerful computer systems become available, they are acquired by these laboratories so that advances can be made in their disciplines. These advances are often the result of added sophistication to numerical models, the execution of which is made possible by high-performance computer systems. However, high-performance computer systems have become increasingly complex, and consequently it has become increasingly difficult to realize their potential performance. The result is a need for research on issues related to the utilization of these systems. This report gives a brief description of high-performance computers, and then addresses the use of and future needs for high-performance computers within DOE, the growing complexity of applications within DOE, and areas of high-performance computer systems warranting research. 1 figure

  14. Performance management of high performance computing for medical image processing in Amazon Web Services

    Science.gov (United States)

    Bao, Shunxing; Damon, Stephen M.; Landman, Bennett A.; Gokhale, Aniruddha

    2016-03-01

    Adopting high performance cloud computing for medical image processing is a popular trend given the pressing needs of large studies. Amazon Web Services (AWS) provide reliable, on-demand, and inexpensive cloud computing services. Our research objective is to implement an affordable, scalable and easy-to-use AWS framework for the Java Image Science Toolkit (JIST). JIST is a plugin for Medical-Image Processing, Analysis, and Visualization (MIPAV) that provides a graphical pipeline implementation allowing users to quickly test and develop pipelines. JIST is DRMAA-compliant allowing it to run on portable batch system grids. However, as new processing methods are implemented and developed, memory may often be a bottleneck for not only lab computers, but also possibly some local grids. Integrating JIST with the AWS cloud alleviates these possible restrictions and does not require users to have deep knowledge of programming in Java. Workflow definition/management and cloud configurations are two key challenges in this research. Using a simple unified control panel, users have the ability to set the numbers of nodes and select from a variety of pre-configured AWS EC2 nodes with different numbers of processors and memory storage. Intuitively, we configured Amazon S3 storage to be mounted by pay-for-use Amazon EC2 instances. Hence, S3 storage is recognized as a shared cloud resource. The Amazon EC2 instances provide pre-installs of all necessary packages to run JIST. This work presents an implementation that facilitates the integration of JIST with AWS. We describe the theoretical cost/benefit formulae to decide between local serial execution versus cloud computing and apply this analysis to an empirical diffusion tensor imaging pipeline.

  15. Performance Management of High Performance Computing for Medical Image Processing in Amazon Web Services.

    Science.gov (United States)

    Bao, Shunxing; Damon, Stephen M; Landman, Bennett A; Gokhale, Aniruddha

    2016-02-27

    Adopting high performance cloud computing for medical image processing is a popular trend given the pressing needs of large studies. Amazon Web Services (AWS) provide reliable, on-demand, and inexpensive cloud computing services. Our research objective is to implement an affordable, scalable and easy-to-use AWS framework for the Java Image Science Toolkit (JIST). JIST is a plugin for Medical-Image Processing, Analysis, and Visualization (MIPAV) that provides a graphical pipeline implementation allowing users to quickly test and develop pipelines. JIST is DRMAA-compliant allowing it to run on portable batch system grids. However, as new processing methods are implemented and developed, memory may often be a bottleneck for not only lab computers, but also possibly some local grids. Integrating JIST with the AWS cloud alleviates these possible restrictions and does not require users to have deep knowledge of programming in Java. Workflow definition/management and cloud configurations are two key challenges in this research. Using a simple unified control panel, users have the ability to set the numbers of nodes and select from a variety of pre-configured AWS EC2 nodes with different numbers of processors and memory storage. Intuitively, we configured Amazon S3 storage to be mounted by pay-for-use Amazon EC2 instances. Hence, S3 storage is recognized as a shared cloud resource. The Amazon EC2 instances provide pre-installs of all necessary packages to run JIST. This work presents an implementation that facilitates the integration of JIST with AWS. We describe the theoretical cost/benefit formulae to decide between local serial execution versus cloud computing and apply this analysis to an empirical diffusion tensor imaging pipeline.

  16. The ongoing investigation of high performance parallel computing in HEP

    CERN Document Server

    Peach, Kenneth J; Böck, R K; Dobinson, Robert W; Hansroul, M; Norton, Alan Robert; Willers, Ian Malcolm; Baud, J P; Carminati, F; Gagliardi, F; McIntosh, E; Metcalf, M; Robertson, L; CERN. Geneva. Detector Research and Development Committee

    1993-01-01

    Past and current exploitation of parallel computing in High Energy Physics is summarized and a list of R & D projects in this area is presented. The applicability of new parallel hardware and software to physics problems is investigated, in the light of the requirements for computing power of LHC experiments and the current trends in the computer industry. Four main themes are discussed (possibilities for a finer grain of parallelism; fine-grain communication mechanism; usable parallel programming environment; different programming models and architectures, using standard commercial products). Parallel computing technology is potentially of interest for offline and vital for real time applications in LHC. A substantial investment in applications development and evaluation of state of the art hardware and software products is needed. A solid development environment is required at an early stage, before mainline LHC program development begins.

  17. Distributed metadata in a high performance computing environment

    Science.gov (United States)

    Bent, John M.; Faibish, Sorin; Zhang, Zhenhua; Liu, Xuezhao; Tang, Haiying

    2017-07-11

    A computer-executable method, system, and computer program product for managing metadata in a distributed storage system, wherein the distributed storage system includes one or more burst buffers enabled to operate with a distributed key-value store, the computer-executable method, system, and computer program product comprising receiving a request for metadata associated with a block of data stored in a first burst buffer of the one or more burst buffers in the distributed storage system, wherein the metadata is associated with a key-value, determining which of the one or more burst buffers stores the requested metadata, and upon determination that a first burst buffer of the one or more burst buffers stores the requested metadata, locating the key-value in a portion of the distributed key-value store accessible from the first burst buffer.
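
    The lookup pattern described in this record can be sketched in a few lines of Python: block identifiers are hashed to decide which burst buffer owns the associated metadata, and the key-value pair is then retrieved from that buffer's portion of the distributed store. All class and method names below are hypothetical illustrations, not the patented implementation.

        import hashlib

        class BurstBuffer:
            """Hypothetical stand-in for one burst buffer's local key-value store."""
            def __init__(self, name):
                self.name = name
                self.kv = {}   # local portion of the distributed key-value store

            def put(self, key, value):
                self.kv[key] = value

            def get(self, key):
                return self.kv.get(key)

        class DistributedMetadataStore:
            def __init__(self, buffers):
                self.buffers = buffers

            def _owner(self, block_id):
                # Determine which burst buffer stores metadata for this block.
                h = int(hashlib.sha1(block_id.encode()).hexdigest(), 16)
                return self.buffers[h % len(self.buffers)]

            def write_metadata(self, block_id, metadata):
                self._owner(block_id).put(block_id, metadata)

            def read_metadata(self, block_id):
                # Locate the key-value in the portion of the store accessible
                # from the owning burst buffer.
                return self._owner(block_id).get(block_id)

        store = DistributedMetadataStore([BurstBuffer(f"bb{i}") for i in range(4)])
        store.write_metadata("block-42", {"size": 4096, "offset": 0, "node": "io3"})
        print(store.read_metadata("block-42"))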

  18. Software Applications on the Peregrine System | High-Performance Computing

    Science.gov (United States)

    Software applications available on the Peregrine system include the General Algebraic Modeling System (GAMS), a high-level modeling system (statistics and analysis); the Gurobi Optimizer, a solver for mathematical programming (statistics and analysis); LAMMPS (chemistry); and the R Statistical Computing Environment (statistics and analysis).

  19. Benchmark Numerical Toolkits for High Performance Computing, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Computational codes in physics and engineering often use implicit solution algorithms that require linear algebra tools such as Ax=b solvers, eigenvalue,...

  20. Multicore Challenges and Benefits for High Performance Scientific Computing

    Directory of Open Access Journals (Sweden)

    Ida M.B. Nielsen

    2008-01-01

    Until recently, performance gains in processors were achieved largely by improvements in clock speeds and instruction level parallelism. Thus, applications could obtain performance increases with relatively minor changes by upgrading to the latest generation of computing hardware. Currently, however, processor performance improvements are realized by using multicore technology and hardware support for multiple threads within each core, and taking full advantage of this technology to improve the performance of applications requires exposure of extreme levels of software parallelism. We will here discuss the architecture of parallel computers constructed from many multicore chips as well as techniques for managing the complexity of programming such computers, including the hybrid message-passing/multi-threading programming model. We will illustrate these ideas with a hybrid distributed memory matrix multiply and a quantum chemistry algorithm for energy computation using Møller–Plesset perturbation theory.
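
    The hybrid message-passing/multi-threading model mentioned above can be illustrated with a short distributed matrix multiply. The sketch below, which is only an illustration of the programming model and not the algorithm benchmarked in the article, uses mpi4py for the message-passing layer and relies on the multi-threaded BLAS behind NumPy's matrix product for the within-node parallelism.

        # Run with, e.g.: mpirun -np 4 python hybrid_matmul.py
        from mpi4py import MPI
        import numpy as np

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        n = 1024                      # matrix dimension, assumed divisible by size
        rows = n // size

        # Root creates A and B; row blocks of A are scattered, B is broadcast.
        A = np.random.rand(n, n) if rank == 0 else None
        B = np.random.rand(n, n) if rank == 0 else np.empty((n, n))
        A_local = np.empty((rows, n))
        comm.Scatter(A, A_local, root=0)
        comm.Bcast(B, root=0)

        # Each rank computes its block of C; NumPy's dot uses the node's threads.
        C_local = A_local @ B

        C = np.empty((n, n)) if rank == 0 else None
        comm.Gather(C_local, C, root=0)
        if rank == 0:
            print("norm of C:", np.linalg.norm(C))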

  1. Scalability of DL_POLY on High Performance Computing Platform

    CSIR Research Space (South Africa)

    Mabakane, Mabule S

    2017-12-01

    Full Text Available stream_source_info Mabakanea_19979_2017.pdf.txt stream_content_type text/plain stream_size 33716 Content-Encoding UTF-8 stream_name Mabakanea_19979_2017.pdf.txt Content-Type text/plain; charset=UTF-8 SACJ 29(3) December... when using many processors within the compute nodes of the supercomputer. The type of the processors of compute nodes and their memory also play an important role in the overall performance of the parallel application running on a supercomputer. DL...

  2. Running Batch Jobs on Peregrine | High-Performance Computing | NREL

    Science.gov (United States)

    ... and run your application. Users typically create or edit job scripts using a text editor such as vi. Peregrine has several types of compute nodes, which differ in the amount of memory and number of processor cores; a resource feature can be used to request a different node type. The majority of the nodes have 24 ...
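
    As an illustration of the batch workflow described in this record, the short Python sketch below writes a simple PBS job script and submits it with qsub. The walltime, node request, allocation handle, and application command are placeholders; the actual resource features and queue names would come from the system documentation.

        import subprocess
        from pathlib import Path

        # Hypothetical PBS job script; directives and account name are placeholders.
        job_script = """#!/bin/bash
        #PBS -N example_job
        #PBS -l nodes=1:ppn=24
        #PBS -l walltime=01:00:00
        #PBS -A my_allocation
        cd $PBS_O_WORKDIR
        ./my_application input.dat > output.log
        """

        path = Path("example_job.pbs")
        path.write_text(job_script)

        # Submit the script to the batch scheduler and print the job ID.
        result = subprocess.run(["qsub", str(path)], capture_output=True, text=True)
        print(result.stdout.strip() or result.stderr.strip())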

  3. Running Interactive Jobs on Peregrine | High-Performance Computing | NREL

    Science.gov (United States)

    An interactive job gives a shell prompt on a compute node, which allows users to execute commands and scripts as they would on the login nodes, so that the work is performed on the compute nodes rather than on the login nodes. This page provides instructions and examples of how to run commands, start GUIs, etc., and the commands will execute on that node instead of on the login node. The -V option ...

  4. Simulating elastic light scattering using high performance computing methods

    NARCIS (Netherlands)

    Hoekstra, A.G.; Sloot, P.M.A.; Verbraeck, A.; Kerckhoffs, E.J.H.

    1993-01-01

    The Coupled Dipole method, as originally formulated byPurcell and Pennypacker, is a very powerful method tosimulate the Elastic Light Scattering from arbitraryparticles. This method, which is a particle simulationmodel for Computational Electromagnetics, has one majordrawback: if the size of the

  5. High performance computer code for molecular dynamics simulations

    International Nuclear Information System (INIS)

    Levay, I.; Toekesi, K.

    2007-01-01

    Complete text of publication follows. Molecular Dynamics (MD) simulation is a widely used technique for modeling complicated physical phenomena. Since 2005 we have been developing an MD simulation code for PC computers. The code is written in the C++ object-oriented programming language. The aim of our work is twofold: a) to develop a fast computer code for the study of the random walk of guest atoms in a Be crystal, and b) to provide three-dimensional (3D) visualization of the particle motion. We mimic both the motion of the guest atoms in the crystal (diffusion-type motion) and the motion of atoms in the crystal lattice (crystal deformation). Nowadays it is common to use graphics devices for computationally intensive problems; there are several ways to exploit this extreme processing performance, and never before has programming these devices been so easy. The Compute Unified Device Architecture (CUDA) introduced by the nVidia Corporation in 2007 is very useful for every processor-hungry application. A unified-architecture GPU includes 96-128 or more stream processors, so the raw calculation performance is 576 GFLOPS, roughly ten times faster than the fastest dual-core CPU [Fig. 1]. Our improved MD simulation software uses this new technology, which speeds up the code by a factor of 10 in the critical calculation segment. Although the GPU is a very powerful tool, it has a strongly parallel structure, which means that we have to create an algorithm that works on several processors without deadlock. Our code currently uses 256 threads and the shared and constant on-chip memory instead of global memory, which is about 100 times slower. It is possible to implement the entire algorithm on the GPU, so we do not need to download and upload the data in every iteration. For maximal throughput, every thread runs with the same instructions
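
    To make the GPU parallelization strategy above concrete, the sketch below shows a naive pairwise-force kernel written in Python with Numba's CUDA support, launched with 256 threads per block as in the abstract. It is a simplified stand-in for the authors' C++/CUDA code; a production kernel would also stage positions through shared memory and use neighbour lists.

        import numpy as np
        from numba import cuda

        @cuda.jit
        def lj_forces(pos, forces, n, eps, sigma):
            """Naive O(N^2) Lennard-Jones force kernel: one thread per particle."""
            i = cuda.grid(1)
            if i >= n:
                return
            fx = fy = fz = 0.0
            for j in range(n):
                if j == i:
                    continue
                dx = pos[i, 0] - pos[j, 0]
                dy = pos[i, 1] - pos[j, 1]
                dz = pos[i, 2] - pos[j, 2]
                r2 = dx * dx + dy * dy + dz * dz
                sr6 = (sigma * sigma / r2) ** 3
                coef = 24.0 * eps * (2.0 * sr6 * sr6 - sr6) / r2
                fx += coef * dx
                fy += coef * dy
                fz += coef * dz
            forces[i, 0] = fx
            forces[i, 1] = fy
            forces[i, 2] = fz

        n = 4096
        pos = cuda.to_device(np.random.rand(n, 3).astype(np.float32) * 10.0)
        forces = cuda.device_array((n, 3), dtype=np.float32)
        threads = 256                               # threads per block
        blocks = (n + threads - 1) // threads
        lj_forces[blocks, threads](pos, forces, n, np.float32(1.0), np.float32(1.0))
        print(forces.copy_to_host()[0])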

  6. High performance stream computing for particle beam transport simulations

    International Nuclear Information System (INIS)

    Appleby, R; Bailey, D; Higham, J; Salt, M

    2008-01-01

    Understanding modern particle accelerators requires simulating charged particle transport through the machine elements. These simulations can be very time consuming due to the large number of particles and the need to consider many turns of a circular machine. Stream computing offers an attractive way to dramatically improve the performance of such simulations by calculating the simultaneous transport of many particles using dedicated hardware. Modern Graphics Processing Units (GPUs) are powerful and affordable stream computing devices. The results of simulations of particle transport through the booster-to-storage-ring transfer line of the DIAMOND synchrotron light source using an NVidia GeForce 7900 GPU are compared to the standard transport code MAD. It is found that particle transport calculations are suitable for stream processing and large performance increases are possible. The accuracy and potential speed gains are compared and the prospects for future work in the area are discussed
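
    The data-parallel character of particle tracking that makes it suitable for stream processing can be seen in a tiny NumPy sketch: in linear optics every particle's phase-space vector is transformed by the same transfer matrix, so many particles are transported in a single vectorized (or GPU) operation. The matrices and beam parameters below are illustrative only, not those of the DIAMOND transfer line.

        import numpy as np

        def drift(length):
            """2x2 horizontal transfer matrix of a drift space (linear optics)."""
            return np.array([[1.0, length],
                             [0.0, 1.0]])

        def thin_quad(focal_length):
            """2x2 transfer matrix of a thin quadrupole lens."""
            return np.array([[1.0, 0.0],
                             [-1.0 / focal_length, 1.0]])

        # One million particles, each described by (x, x') in one transverse plane.
        rng = np.random.default_rng(0)
        beam = rng.normal(scale=[1e-3, 1e-4], size=(1_000_000, 2))

        # A toy beamline: drift, focusing quadrupole, drift.
        line = [drift(2.0), thin_quad(1.5), drift(2.0)]

        # The same map is applied to every particle at once (data parallelism);
        # on a GPU the identical operation runs across many stream processors.
        for element in line:
            beam = beam @ element.T

        print("rms x after transport:", beam[:, 0].std())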

  7. A high level language for a high performance computer

    Science.gov (United States)

    Perrott, R. H.

    1978-01-01

    The proposed computational aerodynamic facility will join the ranks of the supercomputers due to its architecture and increased execution speed. At present, the languages used to program these supercomputers have been modifications of programming languages which were designed many years ago for sequential machines. A new programming language should be developed based on the techniques which have proved valuable for sequential programming languages and incorporating the algorithmic techniques required for these supercomputers. The design objectives for such a language are outlined.

  8. WinHPC System Configuration | High-Performance Computing | NREL

    Science.gov (United States)

    The system consists of a head node, a login node (WinHPC02) and worker/compute nodes. The head node acts as the file, DNS, and license server. The login node, WinHPC02, is where users log in to access the cluster. Node 03 has dual Intel Xeon E5530 processors ... 2008 R2 HPC Edition. ...

  9. Architecture and Programming Models for High Performance Intensive Computation

    Science.gov (United States)

    2016-06-29

    commands from the data processing center to the sensors is needed. It has been noted that the ubiquity of mobile communication devices offers the...commands from a Processing Facility by way of mobile Relay Stations. The activity of each component of this model other than the Merge module can be...evaluation of the initial system implementation. Gao also was in charge of the development of Fresh Breeze architecture backend on new many-core computers

  10. Using High Performance Computing to Support Water Resource Planning

    Energy Technology Data Exchange (ETDEWEB)

    Groves, David G. [RAND Corporation, Santa Monica, CA (United States); Lembert, Robert J. [RAND Corporation, Santa Monica, CA (United States); May, Deborah W. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Leek, James R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Syme, James [RAND Corporation, Santa Monica, CA (United States)

    2015-10-22

    In recent years, decision support modeling has embraced deliberation-with-analysis, an iterative process in which decisionmakers come together with experts to evaluate a complex problem and alternative solutions in a scientifically rigorous and transparent manner. Simulation modeling supports decisionmaking throughout this process; visualizations enable decisionmakers to assess how proposed strategies stand up over time in uncertain conditions. But running these simulation models on standard computers can be slow. This, in turn, can slow the entire decisionmaking process, interrupting valuable interaction between decisionmakers and analytics.

  11. Insightful Workflow For Grid Computing

    Energy Technology Data Exchange (ETDEWEB)

    Dr. Charles Earl

    2008-10-09

    We developed a workflow adaptation and scheduling system for Grid workflows. The system currently interfaces with and uses the Karajan workflow system. We developed machine learning agents that provide the planner/scheduler with the information needed to make decisions about when and how to replan. The Kubrick system restructures workflows at runtime, making it unique among workflow scheduling systems. The existing Kubrick system provides a platform on which to integrate additional quality-of-service constraints and in which to explore the use of an ensemble of scheduling and planning algorithms. This will be the principal thrust of our Phase II work.

  12. Computing Flows Using Chimera and Unstructured Grids

    Science.gov (United States)

    Liou, Meng-Sing; Zheng, Yao

    2006-01-01

    DRAGONFLOW is a computer program that solves the Navier-Stokes equations of flows in complexly shaped three-dimensional regions discretized by use of a direct replacement of arbitrary grid overlapping by nonstructured (DRAGON) grid. A DRAGON grid (see figure) is a combination of a chimera grid (a composite of structured subgrids) and a collection of unstructured subgrids. DRAGONFLOW incorporates modified versions of two prior Navier-Stokes-equation-solving programs: OVERFLOW, which is designed to solve on chimera grids; and USM3D, which is used to solve on unstructured grids. A master module controls the invocation of individual modules in the libraries. At each time step of a simulated flow, DRAGONFLOW is invoked on the chimera portion of the DRAGON grid in alternation with USM3D, which is invoked on the unstructured subgrids of the DRAGON grid. The USM3D and OVERFLOW modules then immediately exchange their solutions and other data. As a result, USM3D and OVERFLOW are coupled seamlessly.

  13. FAULT TOLERANCE IN MOBILE GRID COMPUTING

    OpenAIRE

    Aghila Rajagopal; M.A. Maluk Mohamed

    2014-01-01

    This paper proposes a novel model for a Surrogate Object based paradigm in a mobile grid environment for achieving Fault Tolerance. Basically, the Mobile Grid Computing Model focuses on the Service Composition and Resource Sharing Process. In order to increase the performance of the system, Fault Recovery plays a vital role. In our proposed system, a Surrogate Object based Checkpoint Recovery Model is introduced for the Recovery point. This Checkpoint Recovery model depends on the Surrogate Object and the Fau...

  14. High performance parallel computing of flows in complex geometries: II. Applications

    International Nuclear Information System (INIS)

    Gourdain, N; Gicquel, L; Staffelbach, G; Vermorel, O; Duchaine, F; Boussuge, J-F; Poinsot, T

    2009-01-01

    Present regulations in terms of pollutant emissions, noise and economical constraints require new approaches and designs in the fields of energy supply and transportation. It is now well established that the next breakthrough will come from a better understanding of unsteady flow effects and from considering the entire system rather than only isolated components. However, these aspects are still not well captured by numerical approaches, nor well understood, whatever the design stage considered. The main challenge lies in the computational requirements implied by such complex systems when they are simulated on supercomputers. This paper shows how these challenges can be addressed by using parallel computing platforms for distinct elements of a more complex system, as encountered in aeronautical applications. Based on numerical simulations performed with modern aerodynamic and reactive flow solvers, this work underlines the interest of high-performance computing for solving flows in complex industrial configurations such as aircraft, combustion chambers and turbomachines. Performance indicators related to parallel computing efficiency are presented, showing that establishing fair criteria is a difficult task for complex industrial applications. Examples of numerical simulations performed in industrial systems are also described, with particular attention to the computational time and the potential design improvements obtained with high-fidelity and multi-physics computing methods. These simulations use either unsteady Reynolds-averaged Navier-Stokes methods or large eddy simulation and deal with turbulent unsteady flows, such as coupled flow phenomena (thermo-acoustic instabilities, buffet, etc.). Some of the difficulties with grid generation and data analysis encountered in these complex industrial applications are also presented.

  15. Mixed-Language High-Performance Computing for Plasma Simulations

    Directory of Open Access Journals (Sweden)

    Quanming Lu

    2003-01-01

    Java is receiving increasing attention as the most popular platform for distributed computing. However, programmers are still reluctant to embrace Java as a tool for writing scientific and engineering applications due to its still noticeable performance drawbacks compared with other programming languages such as Fortran or C. In this paper, we present a hybrid Java/Fortran implementation of a parallel particle-in-cell (PIC) algorithm for plasma simulations. In our approach, the time-consuming components of this application are designed and implemented as Fortran subroutines, while less calculation-intensive components usually involved in building the user interface are written in Java. The two types of software modules have been glued together using the Java Native Interface (JNI). Our mixed-language PIC code was tested and its performance compared with pure Java and Fortran versions of the same algorithm on a Sun E6500 SMP system and a Linux cluster of Pentium III machines.
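
    The same mixed-language idea (compute-heavy kernels in a compiled language driven from a higher-level one) can be sketched in Python with ctypes instead of JNI. This is only an analogy, not the authors' Java/Fortran implementation: it assumes a hypothetical shared library libpush.so exposing a C-compatible particle-push routine.

        import ctypes
        import numpy as np

        # Hypothetical compiled kernel, e.g. built from Fortran/C with:
        #   gfortran -shared -fPIC push.f90 -o libpush.so
        lib = ctypes.CDLL("./libpush.so")
        lib.push_particles.argtypes = [
            ctypes.POINTER(ctypes.c_double),  # positions
            ctypes.POINTER(ctypes.c_double),  # velocities
            ctypes.c_int,                     # number of particles
            ctypes.c_double,                  # time step
        ]

        def push(pos, vel, dt):
            """Thin Python wrapper: user-facing logic stays in Python while the
            number crunching runs in the compiled routine, mirroring the
            Java/Fortran split via JNI described in the record."""
            n = pos.shape[0]
            lib.push_particles(
                pos.ctypes.data_as(ctypes.POINTER(ctypes.c_double)),
                vel.ctypes.data_as(ctypes.POINTER(ctypes.c_double)),
                ctypes.c_int(n),
                ctypes.c_double(dt),
            )

        pos = np.zeros((100_000, 3))
        vel = np.random.rand(100_000, 3)
        push(pos, vel, dt=1e-3)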

  16. Electromagnetic Modeling of Human Body Using High Performance Computing

    Science.gov (United States)

    Ng, Cho-Kuen; Beall, Mark; Ge, Lixin; Kim, Sanghoek; Klaas, Ottmar; Poon, Ada

    Realistic simulation of electromagnetic wave propagation in the actual human body can expedite the investigation of energy harvesting by implanted devices using wireless power coupled from external sources. The parallel electromagnetics code suite ACE3P developed at SLAC National Accelerator Laboratory is based on the finite element method for high fidelity accelerator simulation, which can be enhanced to model electromagnetic wave propagation in the human body. Starting with a CAD model of a human phantom that is characterized by a number of tissues, a finite element mesh representing the complex geometries of the individual tissues is built for simulation. Employing an optimal power source with a specific pattern of field distribution, the propagation and focusing of electromagnetic waves in the phantom have been demonstrated. Substantial speedup of the simulation is achieved by using multiple compute cores on supercomputers.

  17. Big Data and High-Performance Computing in Global Seismology

    Science.gov (United States)

    Bozdag, Ebru; Lefebvre, Matthieu; Lei, Wenjie; Peter, Daniel; Smith, James; Komatitsch, Dimitri; Tromp, Jeroen

    2014-05-01

    Much of our knowledge of Earth's interior is based on seismic observations and measurements. Adjoint methods provide an efficient way of incorporating 3D full wave propagation in iterative seismic inversions to enhance tomographic images and thus our understanding of processes taking place inside the Earth. Our aim is to take adjoint tomography, which has been successfully applied to regional and continental scale problems, further to image the entire planet. This is one of the extreme imaging challenges in seismology, mainly due to the intense computational requirements and vast amount of high-quality seismic data that can potentially be assimilated. We have started low-resolution inversions (T > 30 s and T > 60 s for body and surface waves, respectively) with a limited data set (253 carefully selected earthquakes and seismic data from permanent and temporary networks) on Oak Ridge National Laboratory's Cray XK7 "Titan" system. Recent improvements in our 3D global wave propagation solvers, such as a GPU version of the SPECFEM3D_GLOBE package, will enable us to perform higher-resolution (T > 9 s) and longer-duration (~180 m) simulations to take advantage of high-frequency body waves and major-arc surface waves, thereby improving the imbalanced ray coverage that results from the uneven global distribution of sources and receivers. Our ultimate goal is to use all earthquakes in the global CMT catalogue within the magnitude range of our interest and data from all available seismic networks. To take full advantage of computational resources, we need a solid framework to manage big data sets during numerical simulations, pre-processing (i.e., data requests and quality checks, processing data, window selection, etc.) and post-processing (i.e., pre-conditioning and smoothing kernels, etc.). We address the bottlenecks in our global seismic workflow, which mainly come from heavy I/O traffic during simulations and the pre- and post-processing stages, by defining new data

  18. Discovery Mondays: 'The Grid: a universal computer'

    CERN Multimedia

    2006-01-01

    How can one store and analyse the 15 million billion pieces of data that the LHC will produce each year with a computer that isn't the size of a skyscraper? The IT experts have found the answer: the Grid, which will harness the power of tens of thousands of computers in the world by putting them together on one network and making them work like a single computer, achieving a power that has not yet been matched. The Grid, inspired by the Web, already exists - in fact, several of them exist in the field of science. The European EGEE project, led by CERN, contributes not only to the study of particle physics but to medical research as well, notably in the study of malaria and avian flu. The next Discovery Monday invites you to explore this futuristic computing technology. The 'Grid Masters' of CERN have prepared lively animations to help you understand how the Grid works. Children can practice saving the planet on the Grid video game. You will also discover other applications such as UNOSAT, a United Nations...

  19. FY 1992 Blue Book: Grand Challenges: High Performance Computing and Communications

    Data.gov (United States)

    Networking and Information Technology Research and Development, Executive Office of the President — High performance computing and computer communications networks are becoming increasingly important to scientific advancement, economic competition, and national...

  20. FY 1993 Blue Book: Grand Challenges 1993: High Performance Computing and Communications

    Data.gov (United States)

    Networking and Information Technology Research and Development, Executive Office of the President — High performance computing and computer communications networks are becoming increasingly important to scientific advancement, economic competition, and national...

  1. The contribution of high-performance computing and modelling for industrial development

    CSIR Research Space (South Africa)

    Sithole, Happy

    2017-10-01

    High-Performance Computing and Modelling for Industrial Development, Dr Happy Sithole and Dr Onno Ubbink. Strategic context: high-performance computing (HPC) combined with machine learning and artificial intelligence present opportunities to non...

  2. Toward server-side, high performance climate change data analytics in the Earth System Grid Federation (ESGF) eco-system

    Science.gov (United States)

    Fiore, Sandro; Williams, Dean; Aloisio, Giovanni

    2016-04-01

    In many scientific domains such as climate, data is often n-dimensional and requires tools that support specialized data types and primitives to be properly stored, accessed, analysed and visualized. Moreover, new challenges arise in large-scale scenarios and eco-systems where petabytes (PB) of data can be available and data can be distributed and/or replicated (e.g., the Earth System Grid Federation (ESGF) serving the Coupled Model Intercomparison Project, Phase 5 (CMIP5) experiment, providing access to 2.5PB of data for the Intergovernmental Panel on Climate Change (IPCC) Fifth Assessment Report (AR5). Most of the tools currently available for scientific data analysis in the climate domain fail at large scale since they: (1) are desktop based and need the data locally; (2) are sequential, so do not benefit from available multicore/parallel machines; (3) do not provide declarative languages to express scientific data analysis tasks; (4) are domain-specific, which ties their adoption to a specific domain; and (5) do not provide a workflow support, to enable the definition of complex "experiments". The Ophidia project aims at facing most of the challenges highlighted above by providing a big data analytics framework for eScience. Ophidia provides declarative, server-side, and parallel data analysis, jointly with an internal storage model able to efficiently deal with multidimensional data and a hierarchical data organization to manage large data volumes ("datacubes"). The project relies on a strong background of high performance database management and OLAP systems to manage large scientific data sets. It also provides a native workflow management support, to define processing chains and workflows with tens to hundreds of data analytics operators to build real scientific use cases. With regard to interoperability aspects, the talk will present the contribution provided both to the RDA Working Group on Array Databases, and the Earth System Grid Federation (ESGF

  3. Virtual Machine Lifecycle Management in Grid and Cloud Computing

    OpenAIRE

    Schwarzkopf, Roland

    2015-01-01

    Virtualization is the foundation for two important technologies: Virtualized Grid and Cloud Computing. Virtualized Grid Computing is an extension of the Grid Computing concept introduced to satisfy the security and isolation requirements of commercial Grid users. Applications are confined in virtual machines to isolate them from each other and the data they process from other users. Apart from these important requirements, Virtual...

  4. Grid computing techniques and applications

    CERN Document Server

    Wilkinson, Barry

    2009-01-01

    "… the most outstanding aspect of this book is its excellent structure: it is as though we have been given a map to help us move around this technology from the base to the summit … I highly recommend this book …" (Jose Lloret, Computing Reviews, March 2010)

  5. High performance parallel computers for science: New developments at the Fermilab advanced computer program

    International Nuclear Information System (INIS)

    Nash, T.; Areti, H.; Atac, R.

    1988-08-01

    Fermilab's Advanced Computer Program (ACP) has been developing highly cost effective, yet practical, parallel computers for high energy physics since 1984. The ACP's latest developments are proceeding in two directions. A Second Generation ACP Multiprocessor System for experiments will include $3500 RISC processors each with performance over 15 VAX MIPS. To support such high performance, the new system allows parallel I/O, parallel interprocess communication, and parallel host processes. The ACP Multi-Array Processor has been developed for theoretical physics. Each $4000 node is a FORTRAN or C programmable pipelined 20 MFlops (peak), 10 MByte single board computer. These are plugged into a 16 port crossbar switch crate which handles both inter and intra crate communication. The crates are connected in a hypercube. Site oriented applications like lattice gauge theory are supported by system software called CANOPY, which makes the hardware virtually transparent to users. A 256 node, 5 GFlop, system is under construction. 10 refs., 7 figs

  6. CUDA/GPU Technology : Parallel Programming For High Performance Scientific Computing

    OpenAIRE

    YUHENDRA; KUZE, Hiroaki; JOSAPHAT, Tetuko Sri Sumantyo

    2009-01-01

    Graphics processing units (GPUs), originally designed for computer video cards, have emerged as the most powerful chip in a high-performance workstation. In terms of high-performance computation capability, graphics processing units (GPUs) deliver much more powerful performance than conventional CPUs by means of parallel processing. In 2007, the birth of the Compute Unified Device Architecture (CUDA) and CUDA-enabled GPUs by NVIDIA Corporation brought a revolution in the general purpose GPU a...

  7. High-Performance Constant Power Generation in Grid-Connected PV Systems

    DEFF Research Database (Denmark)

    Sangwongwanich, Ariya; Yang, Yongheng; Blaabjerg, Frede

    2016-01-01

    An advanced power control strategy that limits the maximum feed-in power of PV systems has been proposed, which can ensure a fast and smooth transition between maximum power point tracking and Constant Power Generation (CPG). Regardless of the solar irradiance levels, high-performance and stable operation are always achieved by the proposed control strategy. It can regulate the PV output power according to any set-point, and force the PV systems to operate at the left side of the maximum power point without stability problems. Experimental results have verified the effectiveness of the proposed CPG...
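
    A schematic sketch of the set-point selection behind such a constant power generation scheme is given below in Python. It is not the authors' controller: it simply switches between MPPT operation and a clipped power reference whenever the available maximum power exceeds the feed-in limit, which is the transition the abstract describes, and the limit value is a hypothetical example.

        def power_reference(p_mpp_estimate, p_limit):
            """Return the PV power set-point in watts.

            p_mpp_estimate : currently available maximum power point power (W)
            p_limit        : maximum allowed feed-in power (W)
            """
            if p_mpp_estimate <= p_limit:
                # Below the limit: normal maximum power point tracking.
                return p_mpp_estimate
            # Above the limit: constant power generation, clipped at the limit
            # (operating point moved to the left of the MPP on the P-V curve).
            return p_limit

        # Example: a 3 kW array limited to 80% feed-in (2.4 kW).
        for p_available in (1500.0, 2400.0, 2900.0):
            print(p_available, "->", power_reference(p_available, 2400.0))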

  8. Synchrotron Imaging Computations on the Grid without the Computing Element

    International Nuclear Information System (INIS)

    Curri, A; Pugliese, R; Borghes, R; Kourousias, G

    2011-01-01

    Besides the heavy use of the Grid in the Synchrotron Radiation Facility (SRF) Elettra, additional special requirements from the beamlines had to be satisfied through a novel solution that we present in this work. In the traditional Grid Computing paradigm the computations are performed on the Worker Nodes of the grid element known as the Computing Element. A Grid middleware extension that our team has been working on is that of the Instrument Element. In general it is used to Grid-enable instrumentation, and it can be seen as a neighbouring concept to that of traditional Control Systems. As a further extension we demonstrate the Instrument Element as the steering mechanism for a series of computations. In our deployment it interfaces a Control System that manages a series of computationally demanding Scientific Imaging tasks in an online manner. The instrument control in Elettra is done through a suitable Distributed Control System, a common approach in the SRF community. The applications that we present are for a beamline working in medical imaging. The solution resulted in a substantial improvement of a Computed Tomography workflow. The near-real-time requirements could not have been easily satisfied by our Grid's middleware (gLite) due to the various latencies that often occur during the job submission and queuing phases. Moreover the required deployment of a set of TANGO devices could not have been done in a standard gLite WN. Besides the avoidance of certain core Grid components, the Grid Security infrastructure has been utilised in the final solution.

  9. Financial Derivatives Market for Grid Computing

    CERN Document Server

    Aubert, David; Lindset, Snorre; Huuse, Henning

    2007-01-01

    This Master thesis studies the feasibility and properties of a financial derivatives market on Grid computing, a service for sharing computing resources over a network such as the Internet. For the European Organization for Nuclear Research (CERN) to perform research with the world's largest and most complex machine, the Large Hadron Collider (LHC), Grid computing was developed to handle the information created. In accordance with the mandate of the CERN Technology Transfer (TT) group, this thesis is a part of CERN's dissemination of the Grid technology. The thesis gives a brief overview of the use of the Grid technology and where it is heading. IT trend analysts and large-scale IT vendors see this technology as key in transforming the world of IT. They predict that in a matter of years, IT will be bought as a service, instead of a good. Commoditization of IT, delivered as a service, is a paradigm shift that will have a broad impact on all parts of the IT market, as well as on society as a whole. Political, e...

  10. Development of a Computational Steering Framework for High Performance Computing Environments on Blue Gene/P Systems

    KAUST Repository

    Danani, Bob K.

    2012-07-01

    Computational steering has revolutionized the traditional workflow in high performance computing (HPC) applications. The standard workflow that consists of preparation of an application’s input, running of a simulation, and visualization of simulation results in a post-processing step is now transformed into a real-time interactive workflow that significantly reduces development and testing time. Computational steering provides the capability to direct or re-direct the progress of a simulation application at run-time. It allows modification of application-defined control parameters at run-time using various user-steering applications. In this project, we propose a computational steering framework for HPC environments that provides an innovative solution and easy-to-use platform, which allows users to connect and interact with running application(s) in real-time. This framework uses RealityGrid as the underlying steering library and adds several enhancements to the library to enable steering support for Blue Gene systems. Included in the scope of this project is the development of a scalable and efficient steering relay server that supports many-to-many connectivity between multiple steered applications and multiple steering clients. Steered applications can range from intermediate simulation and physical modeling applications to complex computational fluid dynamics (CFD) applications or advanced visualization applications. The Blue Gene supercomputer presents special challenges for remote access because the compute nodes reside on private networks. This thesis presents an implemented solution and demonstrates it on representative applications. Thorough implementation details and application enablement steps are also presented in this thesis to encourage direct usage of this framework.

  11. Department of Energy Mathematical, Information, and Computational Sciences Division: High Performance Computing and Communications Program

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-11-01

    This document is intended to serve two purposes. Its first purpose is that of a program status report of the considerable progress that the Department of Energy (DOE) has made since 1993, the time of the last such report (DOE/ER-0536, The DOE Program in HPCC), toward achieving the goals of the High Performance Computing and Communications (HPCC) Program. The second purpose is that of a summary report of the many research programs administered by the Mathematical, Information, and Computational Sciences (MICS) Division of the Office of Energy Research under the auspices of the HPCC Program and to provide, wherever relevant, easy access to pertinent information about MICS-Division activities via universal resource locators (URLs) on the World Wide Web (WWW).

  12. Department of Energy: MICS (Mathematical Information, and Computational Sciences Division). High performance computing and communications program

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-06-01

    This document is intended to serve two purposes. Its first purpose is that of a program status report of the considerable progress that the Department of Energy (DOE) has made since 1993, the time of the last such report (DOE/ER-0536, "The DOE Program in HPCC"), toward achieving the goals of the High Performance Computing and Communications (HPCC) Program. The second purpose is that of a summary report of the many research programs administered by the Mathematical, Information, and Computational Sciences (MICS) Division of the Office of Energy Research under the auspices of the HPCC Program and to provide, wherever relevant, easy access to pertinent information about MICS-Division activities via universal resource locators (URLs) on the World Wide Web (WWW). The information pointed to by the URL is updated frequently, and the interested reader is urged to access the WWW for the latest information.

  13. Computer Simulation of the UMER Gridded Gun

    CERN Document Server

    Haber, Irving; Friedman, Alex; Grote, D P; Kishek, Rami A; Reiser, Martin; Vay, Jean-Luc; Zou, Yun

    2005-01-01

    The electron source in the University of Maryland Electron Ring (UMER) injector employs a grid 0.15 mm from the cathode to control the current waveform. Under nominal operating conditions, the grid voltage during the current pulse is sufficiently positive relative to the cathode potential to form a virtual cathode downstream of the grid. Three-dimensional computer simulations have been performed that use the mesh refinement capability of the WARP particle-in-cell code to examine a small region near the beam center in order to illustrate some of the complexity that can result from such a gridded structure. These simulations have been found to reproduce the hollowed velocity space that is observed experimentally. The simulations also predict a complicated time-dependent response to the waveform applied to the grid during the current turn-on. This complex temporal behavior appears to result directly from the dynamics of the virtual cathode formation and may therefore be representative of the expected behavior in...

  14. Scientific Grand Challenges: Forefront Questions in Nuclear Science and the Role of High Performance Computing

    International Nuclear Information System (INIS)

    Khaleel, Mohammad A.

    2009-01-01

    This report is an account of the deliberations and conclusions of the workshop on 'Forefront Questions in Nuclear Science and the Role of High Performance Computing' held January 26-28, 2009, co-sponsored by the U.S. Department of Energy (DOE) Office of Nuclear Physics (ONP) and the DOE Office of Advanced Scientific Computing (ASCR). Representatives from the national and international nuclear physics communities, as well as from the high performance computing community, participated. The purpose of this workshop was to (1) identify forefront scientific challenges in nuclear physics and then determine which-if any-of these could be aided by high performance computing at the extreme scale; (2) establish how and why new high performance computing capabilities could address issues at the frontiers of nuclear science; (3) provide nuclear physicists the opportunity to influence the development of high performance computing; and (4) provide the nuclear physics community with plans for development of future high performance computing capability by DOE ASCR.

  15. Scientific Grand Challenges: Forefront Questions in Nuclear Science and the Role of High Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Khaleel, Mohammad A.

    2009-10-01

    This report is an account of the deliberations and conclusions of the workshop on "Forefront Questions in Nuclear Science and the Role of High Performance Computing" held January 26-28, 2009, co-sponsored by the U.S. Department of Energy (DOE) Office of Nuclear Physics (ONP) and the DOE Office of Advanced Scientific Computing (ASCR). Representatives from the national and international nuclear physics communities, as well as from the high performance computing community, participated. The purpose of this workshop was to 1) identify forefront scientific challenges in nuclear physics and then determine which-if any-of these could be aided by high performance computing at the extreme scale; 2) establish how and why new high performance computing capabilities could address issues at the frontiers of nuclear science; 3) provide nuclear physicists the opportunity to influence the development of high performance computing; and 4) provide the nuclear physics community with plans for development of future high performance computing capability by DOE ASCR.

  16. High Performance Hydrogen/Bromine Redox Flow Battery for Grid-Scale Energy Storage

    Energy Technology Data Exchange (ETDEWEB)

    Cho, KT; Ridgway, P; Weber, AZ; Haussener, S; Battaglia, V; Srinivasan, V

    2012-01-01

    The electrochemical behavior of a promising hydrogen/bromine redox flow battery is investigated for grid-scale energy-storage applications with some of the best redox-flow-battery performance results to date, including a peak power of 1.4 W/cm(2) and a 91% voltaic efficiency at 0.4 W/cm(2) constant-power operation. The kinetics of bromine on various materials is discussed, with both rotating-disk-electrode and cell studies demonstrating that a carbon porous electrode for the bromine reaction can deliver platinum-comparable performance as long as sufficient surface area is realized. The effect of flow-cell designs and operating temperature is examined, and ohmic and mass-transfer losses are decreased by utilizing a flow-through electrode design and increasing cell temperature. Charge/discharge and discharge-rate tests also reveal that this system has highly reversible behavior and good rate capability. (C) 2012 The Electrochemical Society. [DOI: 10.1149/2.018211jes] All rights reserved.

  17. Parameters that affect parallel processing for computational electromagnetic simulation codes on high performance computing clusters

    Science.gov (United States)

    Moon, Hongsik

    changing computer hardware platforms in order to provide fast, accurate and efficient solutions to large, complex electromagnetic problems. The research in this dissertation proves that the performance of parallel code is intimately related to the configuration of the computer hardware and can be maximized for different hardware platforms. To benchmark and optimize the performance of parallel CEM software, a variety of large, complex projects are created and executed on a variety of computer platforms. The computer platforms used in this research are detailed in this dissertation. The projects run as benchmarks are also described in detail and results are presented. The parameters that affect parallel CEM software on High Performance Computing Clusters (HPCC) are investigated. This research demonstrates methods to maximize the performance of parallel CEM software code.

  18. A performance model for the communication in fast multipole methods on high-performance computing platforms

    KAUST Repository

    Ibeid, Huda; Yokota, Rio; Keyes, David E.

    2016-01-01

    model and the actual communication time on four high-performance computing (HPC) systems, when latency, bandwidth, network topology, and multicore penalties are all taken into account. To our knowledge, this is the first formal characterization

  19. Spatial Processing of Urban Acoustic Wave Fields from High-Performance Computations

    National Research Council Canada - National Science Library

    Ketcham, Stephen A; Wilson, D. K; Cudney, Harley H; Parker, Michael W

    2007-01-01

    .... The objective of this work is to develop spatial processing techniques for acoustic wave propagation data from three-dimensional high-performance computations to quantify scattering due to urban...

  20. FY 1996 Blue Book: High Performance Computing and Communications: Foundations for America's Information Future

    Data.gov (United States)

    Networking and Information Technology Research and Development, Executive Office of the President — The Federal High Performance Computing and Communications HPCC Program will celebrate its fifth anniversary in October 1996 with an impressive array of...

  1. Optical high-performance computing: introduction to the JOSA A and Applied Optics feature.

    Science.gov (United States)

    Caulfield, H John; Dolev, Shlomi; Green, William M J

    2009-08-01

    The feature issues in both Applied Optics and the Journal of the Optical Society of America A focus on topics of immediate relevance to the community working in the area of optical high-performance computing.

  2. FY 1997 Blue Book: High Performance Computing and Communications: Advancing the Frontiers of Information Technology

    Data.gov (United States)

    Networking and Information Technology Research and Development, Executive Office of the President — The Federal High Performance Computing and Communications HPCC Program will celebrate its fifth anniversary in October 1996 with an impressive array of...

  3. Can We Build a Truly High Performance Computer Which is Flexible and Transparent?

    KAUST Repository

    Rojas, Jhonathan Prieto; Sevilla, Galo T.; Hussain, Muhammad Mustafa

    2013-01-01

    cost advantage. In that context, low-cost mono-crystalline bulk silicon (100) based high performance transistors are considered as the heart of today's computers. One limitation is silicon's rigidity and brittleness. Here we show a generic batch process

  4. Comprehensive Simulation Lifecycle Management for High Performance Computing Modeling and Simulation, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — There are significant logistical barriers to entry-level high performance computing (HPC) modeling and simulation (M&S). IllinoisRocstar sets up the infrastructure for...

  5. Analysis and modeling of social influence in high performance computing workloads

    KAUST Repository

    Zheng, Shuai; Shae, Zon Yin; Zhang, Xiangliang; Jamjoom, Hani T.; Fong, Liana

    2011-01-01

    Social influence among users (e.g., collaboration on a project) creates bursty behavior in the underlying high performance computing (HPC) workloads. Using representative HPC and cluster workload logs, this paper identifies, analyzes, and quantifies

  6. Export Controls: Implementation of the 1998 Legislative Mandate for High Performance Computers

    National Research Council Canada - National Science Library

    1999-01-01

    We found that most of the 938 proposed exports of high performance computers to civilian end users in countries of concern from February 3, 1998, when procedures implementing the 1998 authorization...

  7. Bringing Federated Identity to Grid Computing

    Energy Technology Data Exchange (ETDEWEB)

    Teheran, Jeny [Fermilab

    2016-03-04

    The Fermi National Accelerator Laboratory (FNAL) is facing the challenge of providing scientific data access and grid submission to scientific collaborations that span the globe but are hosted at FNAL. Users in these collaborations are currently required to register as an FNAL user and obtain FNAL credentials to access grid resources to perform their scientific computations. These requirements burden researchers with managing additional authentication credentials, and put additional load on FNAL for managing user identities. Our design integrates the existing InCommon federated identity infrastructure, CILogon Basic CA, and MyProxy with the FNAL grid submission system to provide secure access for users from diverse experiments and collaborations without requiring each user to have authentication credentials from FNAL. The design automates the handling of certificates so users do not need to manage them manually. Although the initial implementation is for FNAL's grid submission system, the design and the core of the implementation are general and could be applied to other distributed computing systems.

  8. Grid Computing BOINC Redesign Mindmap with incentive system (gamification)

    OpenAIRE

    Kitchen, Kris

    2016-01-01

    Grid Computing BOINC Redesign Mindmap with incentive system (gamification). This is a PDF viewable version of https://figshare.com/articles/Grid_Computing_BOINC_Redesign_Mindmap_with_incentive_system_gamification_/1265350

  9. VLab: A Science Gateway for Distributed First Principles Calculations in Heterogeneous High Performance Computing Systems

    Science.gov (United States)

    da Silveira, Pedro Rodrigo Castro

    2014-01-01

    This thesis describes the development and deployment of a cyberinfrastructure for distributed high-throughput computations of materials properties at high pressures and/or temperatures--the Virtual Laboratory for Earth and Planetary Materials--VLab. VLab was developed to leverage the aggregated computational power of grid systems to solve…

  10. Resilient and Robust High Performance Computing Platforms for Scientific Computing Integrity

    Energy Technology Data Exchange (ETDEWEB)

    Jin, Yier [Univ. of Central Florida, Orlando, FL (United States)

    2017-07-14

    As technology advances, computer systems are subject to increasingly sophisticated cyber-attacks that compromise both their security and integrity. High performance computing platforms used in commercial and scientific applications involving sensitive, or even classified data, are frequently targeted by powerful adversaries. This situation is made worse by a lack of fundamental security solutions that both perform efficiently and are effective at preventing threats. Current security solutions fail to address the threat landscape and ensure the integrity of sensitive data. As challenges rise, both private and public sectors will require robust technologies to protect its computing infrastructure. The research outcomes from this project try to address all these challenges. For example, we present LAZARUS, a novel technique to harden kernel Address Space Layout Randomization (KASLR) against paging-based side-channel attacks. In particular, our scheme allows for fine-grained protection of the virtual memory mappings that implement the randomization. We demonstrate the effectiveness of our approach by hardening a recent Linux kernel with LAZARUS, mitigating all of the previously presented side-channel attacks on KASLR. Our extensive evaluation shows that LAZARUS incurs only 0.943% overhead for standard benchmarks, and is therefore highly practical. We also introduced HA2lloc, a hardware-assisted allocator that is capable of leveraging an extended memory management unit to detect memory errors in the heap. We also perform testing using HA2lloc in a simulation environment and find that the approach is capable of preventing common memory vulnerabilities.

  11. High Performance Computing Facility Operational Assessment, FY 2010 Oak Ridge Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Bland, Arthur S Buddy [ORNL; Hack, James J [ORNL; Baker, Ann E [ORNL; Barker, Ashley D [ORNL; Boudwin, Kathlyn J. [ORNL; Kendall, Ricky A [ORNL; Messer, Bronson [ORNL; Rogers, James H [ORNL; Shipman, Galen M [ORNL; White, Julia C [ORNL

    2010-08-01

    Oak Ridge National Laboratory's (ORNL's) Cray XT5 supercomputer, Jaguar, kicked off the era of petascale scientific computing in 2008 with applications that sustained more than a thousand trillion floating point calculations per second - or 1 petaflop. Jaguar continues to grow even more powerful as it helps researchers broaden the boundaries of knowledge in virtually every domain of computational science, including weather and climate, nuclear energy, geosciences, combustion, bioenergy, fusion, and materials science. Their insights promise to broaden our knowledge in areas that are vitally important to the Department of Energy (DOE) and the nation as a whole, particularly energy assurance and climate change. The science of the 21st century, however, will demand further revolutions in computing, supercomputers capable of a million trillion calculations a second - 1 exaflop - and beyond. These systems will allow investigators to continue attacking global challenges through modeling and simulation and to unravel longstanding scientific questions. Creating such systems will also require new approaches to daunting challenges. High-performance systems of the future will need to be codesigned for scientific and engineering applications with best-in-class communications networks and data-management infrastructures and teams of skilled researchers able to take full advantage of these new resources. The Oak Ridge Leadership Computing Facility (OLCF) provides the nation's most powerful open resource for capability computing, with a sustainable path that will maintain and extend national leadership for DOE's Office of Science (SC). The OLCF has engaged a world-class team to support petascale science and to take a dramatic step forward, fielding new capabilities for high-end science. This report highlights the successful delivery and operation of a petascale system and shows how the OLCF fosters application development teams, developing cutting-edge tools

  12. Grid Computing in High Energy Physics

    International Nuclear Information System (INIS)

    Avery, Paul

    2004-01-01

    Over the next two decades, major high energy physics (HEP) experiments, particularly at the Large Hadron Collider, will face unprecedented challenges to achieving their scientific potential. These challenges arise primarily from the rapidly increasing size and complexity of HEP datasets that will be collected and the enormous computational, storage and networking resources that will be deployed by global collaborations in order to process, distribute and analyze them. Coupling such vast information technology resources to globally distributed collaborations of several thousand physicists requires extremely capable computing infrastructures supporting several key areas: (1) computing (providing sufficient computational and storage resources for all processing, simulation and analysis tasks undertaken by the collaborations); (2) networking (deploying high speed networks to transport data quickly between institutions around the world); (3) software (supporting simple and transparent access to data and software resources, regardless of location); (4) collaboration (providing tools that allow members full and fair access to all collaboration resources and enable distributed teams to work effectively, irrespective of location); and (5) education, training and outreach (providing resources and mechanisms for training students and for communicating important information to the public). It is believed that computing infrastructures based on Data Grids and optical networks can meet these challenges and can offer data intensive enterprises in high energy physics and elsewhere a comprehensive, scalable framework for collaboration and resource sharing. A number of Data Grid projects have been underway since 1999. Interestingly, the most exciting and far ranging of these projects are led by collaborations of high energy physicists, computer scientists and scientists from other disciplines in support of experiments with massive, near-term data needs. I review progress in this

  13. Distributed computing grid experiences in CMS

    CERN Document Server

    Andreeva, Julia; Barrass, T; Bonacorsi, D; Bunn, Julian; Capiluppi, P; Corvo, M; Darmenov, N; De Filippis, N; Donno, F; Donvito, G; Eulisse, G; Fanfani, A; Fanzago, F; Filine, A; Grandi, C; Hernández, J M; Innocente, V; Jan, A; Lacaprara, S; Legrand, I; Metson, S; Newbold, D; Newman, H; Pierro, A; Silvestris, L; Steenberg, C; Stockinger, H; Taylor, Lucas; Thomas, M; Tuura, L; Van Lingen, F; Wildish, Tony

    2005-01-01

    The CMS experiment is currently developing a computing system capable of serving, processing and archiving the large number of events that will be generated when the CMS detector starts taking data. During 2004 CMS undertook a large scale data challenge to demonstrate the ability of the CMS computing system to cope with a sustained data-taking rate equivalent to 25% of the startup rate. Its goals were: to run CMS event reconstruction at CERN for a sustained period at 25 Hz input rate; to distribute the data to several regional centers; and enable data access at those centers for analysis. Grid middleware was utilized to help complete all aspects of the challenge. To continue to provide scalable access from anywhere in the world to the data, CMS is developing a layer of software that uses Grid tools to gain access to data and resources, and that aims to provide physicists with a user friendly interface for submitting their analysis jobs. This paper describes the data challenge experience with Grid infrastructure ...

  14. High-performance computing on GPUs for resistivity logging of oil and gas wells

    Science.gov (United States)

    Glinskikh, V.; Dudaev, A.; Nechaev, O.; Surodina, I.

    2017-10-01

    We developed and implemented in software an algorithm for high-performance simulation of electrical logs from oil and gas wells using high-performance heterogeneous computing. The numerical solution of the 2D forward problem is based on the finite-element method and the Cholesky decomposition for solving a system of linear algebraic equations (SLAE). Software implementations of the algorithm were made using NVIDIA CUDA technology and computing libraries, allowing us to perform the decomposition of the SLAE and find its solution on the central processing unit (CPU) and the graphics processing unit (GPU). The calculation time is analyzed as a function of the matrix size and the number of its non-zero elements. We estimated the computing speed on the CPU and GPU, including high-performance heterogeneous CPU-GPU computing. Using the developed algorithm, we simulated resistivity data in realistic models.
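
    For illustration only (this is not the authors' code, and the matrix below is a synthetic stand-in for a finite-element system), the two numerical steps the abstract refers to, a Cholesky factorization of a symmetric positive-definite matrix followed by triangular solves, look like this on the CPU with SciPy:

```python
# Minimal sketch: solve a symmetric positive-definite system A x = b,
# as arises from a 2D finite-element discretization, with a Cholesky
# factorization on the CPU. The matrix here is synthetic.
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def make_spd_system(n, density=0.05, seed=0):
    """Build a random symmetric positive-definite matrix as a stand-in
    for a finite-element stiffness matrix (illustrative only)."""
    rng = np.random.default_rng(seed)
    m = rng.random((n, n)) * (rng.random((n, n)) < density)
    a = 0.5 * (m + m.T) + n * np.eye(n)   # diagonally dominant => SPD
    b = rng.random(n)
    return a, b

a, b = make_spd_system(2000)
c, lower = cho_factor(a)          # factorization: A = L L^T
x = cho_solve((c, lower), b)      # forward/backward triangular solves
print("residual:", np.linalg.norm(a @ x - b))
```

    On a GPU the same two phases would typically be delegated to vendor libraries (for NVIDIA hardware, the cuSOLVER family); the CPU-GPU comparison in the abstract is essentially a timing of these phases for matrices of increasing size and sparsity.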

  15. Java parallel secure stream for grid computing

    International Nuclear Information System (INIS)

    Chen, J.; Akers, W.; Chen, Y.; Watson, W.

    2001-01-01

    The emergence of high speed wide area networks makes grid computing a reality. However, grid applications that need reliable data transfer still have difficulty achieving optimal TCP performance, because the TCP window size must be tuned to improve bandwidth and reduce latency on a high speed wide area network. The authors present a pure Java package called JPARSS (Java Parallel Secure Stream) that divides data into partitions that are sent over several parallel Java streams simultaneously, and that allows Java or Web applications to achieve optimal TCP performance in a grid environment without the necessity of tuning the TCP window size. Several experimental results are provided to show that using parallel streams is more effective than tuning the TCP window size. In addition, an X.509 certificate based single sign-on mechanism and SSL based connection establishment are integrated into this package. Finally, a few applications using this package will be discussed
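
    To make the idea concrete, here is a minimal conceptual sketch in Python (not the JPARSS Java API; the host, port, partition count and framing are hypothetical) of splitting a buffer into partitions and pushing each partition over its own TCP connection concurrently:

```python
# Conceptual sketch of parallel-stream transfer (not the JPARSS API):
# split a buffer into N partitions and send each over its own TCP
# connection at the same time, so aggregate throughput is less
# sensitive to the per-connection TCP window size.
import socket
from concurrent.futures import ThreadPoolExecutor

HOST, PORT = "data.example.org", 5000   # hypothetical receiver
N_STREAMS = 4

def send_partition(index, chunk):
    with socket.create_connection((HOST, PORT)) as s:
        # Prefix each partition with its index and length so the
        # receiver can reassemble the partitions in order.
        header = index.to_bytes(4, "big") + len(chunk).to_bytes(8, "big")
        s.sendall(header + chunk)

def parallel_send(data: bytes, n=N_STREAMS):
    size = (len(data) + n - 1) // n
    parts = [data[i * size:(i + 1) * size] for i in range(n)]
    with ThreadPoolExecutor(max_workers=n) as pool:
        list(pool.map(send_partition, range(n), parts))

# parallel_send(b"x" * 64_000_000)  # requires a receiver on HOST:PORT
```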

  16. Adaptively detecting changes in Autonomic Grid Computing

    KAUST Repository

    Zhang, Xiangliang

    2010-10-01

    Detecting changes is a common issue in many application fields due to the non-stationary distribution of the application data, e.g., sensor network signals, web logs and grid running logs. Toward Autonomic Grid Computing, adaptively detecting changes in a grid system can help to flag anomalies, clean noise, and report new patterns. In this paper, we propose an approach of self-adaptive change detection based on the Page-Hinkley statistical test. It handles non-stationary distributions without assuming a particular data distribution or requiring empirical parameter settings. We validate the approach on EGEE streaming jobs and report its better performance in achieving higher accuracy compared to other change detection methods. Meanwhile, this change detection process can help to discover device faults that were not reported in the system logs. © 2010 IEEE.
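
    For reference, the underlying Page-Hinkley test is compact. The sketch below uses the standard cumulative form for detecting an upward shift in the stream mean, with hand-picked tolerance and threshold values; the paper's contribution is precisely to avoid such manual settings.

```python
# Standard Page-Hinkley test for an increase in the stream mean.
# delta (tolerance) and lam (detection threshold) are illustrative
# values chosen by hand, not the paper's self-adaptive settings.
import random

class PageHinkley:
    def __init__(self, delta=0.05, lam=50.0):
        self.delta, self.lam = delta, lam
        self.mean, self.n = 0.0, 0
        self.cum, self.cum_min = 0.0, 0.0

    def update(self, x):
        self.n += 1
        self.mean += (x - self.mean) / self.n       # running mean
        self.cum += x - self.mean - self.delta      # cumulative deviation
        self.cum_min = min(self.cum_min, self.cum)
        return (self.cum - self.cum_min) > self.lam # True => change detected

# Example: a stream whose mean jumps from 0 to 3 at t = 500.
ph = PageHinkley()
for t in range(1000):
    x = random.gauss(0.0, 1.0) + (3.0 if t >= 500 else 0.0)
    if ph.update(x):
        print("change detected at t =", t)
        break
```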

  17. High Performance Computing Facility Operational Assessment 2015: Oak Ridge Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Barker, Ashley D. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Bernholdt, David E. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Bland, Arthur S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Gary, Jeff D. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Hack, James J. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; McNally, Stephen T. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Rogers, James H. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Smith, Brian E. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Straatsma, T. P. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Sukumar, Sreenivas Rangan [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Thach, Kevin G. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Tichenor, Suzy [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Vazhkudai, Sudharshan S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Wells, Jack C. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility

    2016-03-01

    Oak Ridge National Laboratory’s (ORNL’s) Leadership Computing Facility (OLCF) continues to surpass its operational target goals: supporting users; delivering fast, reliable systems; creating innovative solutions for high-performance computing (HPC) needs; and managing risks, safety, and security aspects associated with operating one of the most powerful computers in the world. The results can be seen in the cutting-edge science delivered by users and the praise from the research community. Calendar year (CY) 2015 was filled with outstanding operational results and accomplishments: a very high rating from users on overall satisfaction that ties the highest-ever mark set in CY 2014; the greatest number of core-hours delivered to research projects; the largest percentage of capability usage since the OLCF began tracking the metric in 2009; and success in delivering on the allocation of 60, 30, and 10% of core hours offered for the INCITE (Innovative and Novel Computational Impact on Theory and Experiment), ALCC (Advanced Scientific Computing Research Leadership Computing Challenge), and Director’s Discretionary programs, respectively. These accomplishments, coupled with the extremely high utilization rate, represent the fulfillment of the promise of Titan: maximum use by maximum-size simulations. The impact of all of these successes and more is reflected in the accomplishments of OLCF users, with publications this year in notable journals Nature, Nature Materials, Nature Chemistry, Nature Physics, Nature Climate Change, ACS Nano, Journal of the American Chemical Society, and Physical Review Letters, as well as many others. The achievements included in the 2015 OLCF Operational Assessment Report reflect first-ever or largest simulations in their communities; for example Titan enabled engineers in Los Angeles and the surrounding region to design and begin building improved critical infrastructure by enabling the highest-resolution Cybershake map for Southern

  18. New data processing technologies at LHC: From Grid to Cloud Computing and beyond

    International Nuclear Information System (INIS)

    De Salvo, A.

    2011-01-01

    For a few years now, the LHC experiments at CERN have been successfully using Grid computing technologies for their distributed data processing activities on a global scale. Recently, the experience gained with the current systems has allowed the design of the future Computing Models, involving new technologies such as Cloud Computing, virtualization and high performance distributed database access. In this paper we describe the new computational technologies of the LHC experiments at CERN, comparing them with the current models in terms of features and performance.

  19. High performance computing system in the framework of the Higgs boson studies

    CERN Document Server

    Belyaev, Nikita; The ATLAS collaboration; Velikhov, Vasily; Konoplich, Rostislav

    2017-01-01

    Higgs boson physics is one of the most important and promising fields of study in modern high energy physics. It is important to note that Grid computing resources are becoming severely limited due to the increasing amount of statistics required for physics analyses and the unprecedented LHC performance. One possibility for addressing the shortfall of computing resources is the use of institutes' computer clusters, commercial computing resources and supercomputers. Under these conditions, precision measurements of the Higgs boson properties also require effective instruments to simulate kinematic distributions of signal events. In this talk we give a brief description of the modern distribution reconstruction method called Morphing and perform a few efficiency tests to demonstrate its potential. These studies have been performed on the WLCG and the Kurchatov Institute's Data Processing Center, including its Tier-1 Grid site and supercomputer. We also analyze the CPU efficienc...

  20. IBM announces global Grid computing solutions for banking, financial markets

    CERN Multimedia

    2003-01-01

    "IBM has announced a series of Grid projects around the world as part of its Grid computing program. They include IBM new Grid-based product offerings with business intelligence software provider SAS and other partners that address the computer-intensive needs of the banking and financial markets industry (1 page)."

  1. High performance computing in science and engineering Garching/Munich 2016

    Energy Technology Data Exchange (ETDEWEB)

    Wagner, Siegfried; Bode, Arndt; Bruechle, Helmut; Brehm, Matthias (eds.)

    2016-11-01

    Computer simulations are the well-established third pillar of natural sciences along with theory and experimentation. Particularly high performance computing is growing fast and constantly demands more and more powerful machines. To keep pace with this development, in spring 2015, the Leibniz Supercomputing Centre installed the high performance computing system SuperMUC Phase 2, only three years after the inauguration of its sibling SuperMUC Phase 1. Thereby, the compute capabilities were more than doubled. This book covers the time-frame June 2014 until June 2016. Readers will find many examples of outstanding research in the more than 130 projects that are covered in this book, with each one of these projects using at least 4 million core-hours on SuperMUC. The largest scientific communities using SuperMUC in the last two years were computational fluid dynamics simulations, chemistry and material sciences, astrophysics, and life sciences.

  2. Mesoscale Climate Evaluation Using Grid Computing

    Science.gov (United States)

    Campos Velho, H. F.; Freitas, S. R.; Souto, R. P.; Charao, A. S.; Ferraz, S.; Roberti, D. R.; Streck, N.; Navaux, P. O.; Maillard, N.; Collischonn, W.; Diniz, G.; Radin, B.

    2012-04-01

    The CLIMARS project is focused on establishing an operational environment for seasonal climate prediction for the Rio Grande do Sul state, Brazil. The dynamical downscaling will be performed with the use of several software platforms and hardware infrastructures to carry out the investigation of the global change impact at the mesoscale. Grid computing takes advantage of geographically spread-out computer systems, connected by the internet, to enhance the available computing power. Ensemble climate prediction is a well-suited application for processing on grid computing, because the integration of each ensemble member does not depend on information from other ensemble members. Grid processing is employed to compute the 20-year climatology and the long range simulations under the ensemble methodology. BRAMS (Brazilian Regional Atmospheric Model) is a mesoscale model developed from a version of RAMS (from the Colorado State University - CSU, USA). The BRAMS model is the tool for carrying out the dynamical downscaling from the IPCC scenarios. Long range BRAMS simulations will provide data for climate (data) analysis, and supply data for the numerical integration of different models: (a) regime of extreme events for temperature and precipitation fields: statistical analysis will be applied to the BRAMS data; (b) CCATT-BRAMS (Coupled Chemistry Aerosol Tracer Transport - BRAMS) is an environmental prediction system that will be used to evaluate whether the new patterns of temperature, rain regime, and wind field have a significant impact on pollutant dispersion in the analyzed regions; (c) MGB-IPH (Portuguese acronym for the Large Basin Model (MGB), developed by the Hydraulic Research Institute (IPH) of the Federal University of Rio Grande do Sul (UFRGS), Brazil) will be employed to simulate the alteration of river flow under the new climate patterns. Important meteorological input variables for the MGB-IPH are the precipitation (most relevant

  3. High Performance Numerical Computing for High Energy Physics: A New Challenge for Big Data Science

    International Nuclear Information System (INIS)

    Pop, Florin

    2014-01-01

    Modern physics is based on both theoretical analysis and experimental validation. Complex scenarios like subatomic dimensions, high energies, and low absolute temperatures are frontiers for many theoretical models. Simulation with stable numerical methods represents an excellent instrument for high accuracy analysis, experimental validation, and visualization. High performance computing support offers the possibility of running simulations at large scale, in parallel, but the volume of data generated by these experiments creates a new challenge for Big Data Science. This paper presents existing computational methods for high energy physics (HEP) analyzed from two perspectives: numerical methods and high performance computing. The computational methods presented are Monte Carlo methods and simulations of HEP processes, Markovian Monte Carlo, unfolding methods in particle physics, kernel estimation in HEP, and Random Matrix Theory used in the analysis of particle spectra. All of these methods produce data-intensive applications, which introduce new challenges and requirements for ICT systems architecture, programming paradigms, and storage capabilities.
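
    As a flavour of the first family of methods mentioned (this is not code from the paper), a plain Monte Carlo integration shows why accuracy demands large sample counts and therefore substantial computing and storage:

```python
# Minimal Monte Carlo integration sketch (illustrative only): estimate
# I = integral_0^1 exp(-x^2) dx and watch the statistical error shrink
# roughly as 1/sqrt(N), which is what pushes HEP simulation campaigns
# toward large-scale, data-intensive computing.
import math
import random

def mc_estimate(n, seed=1):
    rng = random.Random(seed)
    s = s2 = 0.0
    for _ in range(n):
        v = math.exp(-rng.random() ** 2)
        s += v
        s2 += v * v
    mean = s / n
    var = (s2 - n * mean * mean) / (n - 1)
    return mean, math.sqrt(var / n)   # estimate and its standard error

for n in (10**3, 10**4, 10**5, 10**6):
    est, err = mc_estimate(n)
    print(f"N={n:>7}: I ~ {est:.5f} +/- {err:.5f}")
```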

  4. 14th annual Results and Review Workshop on High Performance Computing in Science and Engineering

    CERN Document Server

    Nagel, Wolfgang E; Resch, Michael M; Transactions of the High Performance Computing Center, Stuttgart (HLRS) 2011; High Performance Computing in Science and Engineering '11

    2012-01-01

    This book presents the state-of-the-art in simulation on supercomputers. Leading researchers present results achieved on systems of the High Performance Computing Center Stuttgart (HLRS) for the year 2011. The reports cover all fields of computational science and engineering, ranging from CFD to computational physics and chemistry, to computer science, with a special emphasis on industrially relevant applications. Presenting results for both vector systems and microprocessor-based systems, the book allows readers to compare the performance levels and usability of various architectures. As HLRS

  5. InfoMall: An Innovative Strategy for High-Performance Computing and Communications Applications Development.

    Science.gov (United States)

    Mills, Kim; Fox, Geoffrey

    1994-01-01

    Describes the InfoMall, a program led by the Northeast Parallel Architectures Center (NPAC) at Syracuse University (New York). The InfoMall features a partnership of approximately 24 organizations offering linked programs in High Performance Computing and Communications (HPCC) technology integration, software development, marketing, education and…

  6. Business Models of High Performance Computing Centres in Higher Education in Europe

    Science.gov (United States)

    Eurich, Markus; Calleja, Paul; Boutellier, Roman

    2013-01-01

    High performance computing (HPC) service centres are a vital part of the academic infrastructure of higher education organisations. However, despite their importance for research and the necessary high capital expenditures, business research on HPC service centres is mostly missing. From a business perspective, it is important to find an answer to…

  7. High-Performance Computing in Neuroscience for Data-Driven Discovery, Integration, and Dissemination

    International Nuclear Information System (INIS)

    Bouchard, Kristofer E.

    2016-01-01

    A lack of coherent plans to analyze, manage, and understand data threatens the various opportunities offered by new neuro-technologies. High-performance computing will allow exploratory analysis of massive datasets stored in standardized formats, hosted in open repositories, and integrated with simulations.

  8. Requirements for high performance computing for lattice QCD. Report of the ECFA working panel

    International Nuclear Information System (INIS)

    Jegerlehner, F.; Kenway, R.D.; Martinelli, G.; Michael, C.; Pene, O.; Petersson, B.; Petronzio, R.; Sachrajda, C.T.; Schilling, K.

    2000-01-01

    This report, prepared at the request of the European Committee for Future Accelerators (ECFA), contains an assessment of the High Performance Computing resources which will be required in coming years by European physicists working in Lattice Field Theory and a review of the scientific opportunities which these resources would open. (orig.)

  9. The high performance cluster computing system for BES offline data analysis

    International Nuclear Information System (INIS)

    Sun Yongzhao; Xu Dong; Zhang Shaoqiang; Yang Ting

    2004-01-01

    A high performance cluster computing system (EPCfarm), used for BES offline data analysis, is introduced. The setup and the characteristics of the hardware and software of EPCfarm are described. PBS, a queue management package, and the performance of EPCfarm are also presented. (authors)

  10. High-performance computing for structural mechanics and earthquake/tsunami engineering

    CERN Document Server

    Hori, Muneo; Ohsaki, Makoto

    2016-01-01

    Huge earthquakes and tsunamis have caused serious damage to important structures such as civil infrastructure elements, buildings and power plants around the globe. To quantitatively evaluate such damage processes and to design effective prevention and mitigation measures, the latest high-performance computational mechanics technologies, which include terascale to petascale computers, can offer powerful tools. The phenomena covered in this book include seismic wave propagation in the crust and soil, seismic response of infrastructure elements such as tunnels considering soil-structure interactions, seismic response of high-rise buildings, seismic response of nuclear power plants, tsunami run-up over coastal towns and tsunami inundation considering fluid-structure interactions. The book provides all necessary information for addressing these phenomena, ranging from the fundamentals of high-performance computing for finite element methods, key algorithms of accurate dynamic structural analysis, fluid flows ...

  11. 8th International Workshop on Parallel Tools for High Performance Computing

    CERN Document Server

    Gracia, José; Knüpfer, Andreas; Resch, Michael; Nagel, Wolfgang

    2015-01-01

    Numerical simulation and modelling using High Performance Computing has evolved into an established technique in academic and industrial research. At the same time, the High Performance Computing infrastructure is becoming ever more complex. For instance, most of the current top systems around the world use thousands of nodes in which classical CPUs are combined with accelerator cards in order to enhance their compute power and energy efficiency. This complexity can only be mastered with adequate development and optimization tools. Key topics addressed by these tools include parallelization on heterogeneous systems, performance optimization for CPUs and accelerators, debugging of increasingly complex scientific applications, and optimization of energy usage in the spirit of green IT. This book represents the proceedings of the 8th International Parallel Tools Workshop, held October 1-2, 2014 in Stuttgart, Germany – which is a forum to discuss the latest advancements in the parallel tools.

  12. High performance computing and communications: Advancing the frontiers of information technology

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-12-31

    This report, which supplements the President's Fiscal Year 1997 Budget, describes the interagency High Performance Computing and Communications (HPCC) Program. The HPCC Program will celebrate its fifth anniversary in October 1996 with an impressive array of accomplishments to its credit. Over its five-year history, the HPCC Program has focused on developing high performance computing and communications technologies that can be applied to computation-intensive applications. Major highlights for FY 1996: (1) High performance computing systems enable practical solutions to complex problems with accuracies not possible five years ago; (2) HPCC-funded research in very large scale networking techniques has been instrumental in the evolution of the Internet, which continues exponential growth in size, speed, and availability of information; (3) The combination of hardware capability measured in gigaflop/s, networking technology measured in gigabit/s, and new computational science techniques for modeling phenomena has demonstrated that very large scale accurate scientific calculations can be executed across heterogeneous parallel processing systems located thousands of miles apart; (4) Federal investments in HPCC software R and D support researchers who pioneered the development of parallel languages and compilers, high performance mathematical, engineering, and scientific libraries, and software tools--technologies that allow scientists to use powerful parallel systems to focus on Federal agency mission applications; and (5) HPCC support for virtual environments has enabled the development of immersive technologies, where researchers can explore and manipulate multi-dimensional scientific and engineering problems. Educational programs fostered by the HPCC Program have brought into classrooms new science and engineering curricula designed to teach computational science. This document contains a small sample of the significant HPCC Program accomplishments in FY 1996.

  13. From testbed to reality grid computing steps up a gear

    CERN Multimedia

    2004-01-01

    "UK plans for Grid computing changed gear this week. The pioneering European DataGrid (EDG) project came to a successful conclusion at the end of March, and on 1 April a new project, known as Enabling Grids for E-Science in Europe (EGEE), begins" (1 page)

  14. Topic 14+16: High-performance and scientific applications and extreme-scale computing (Introduction)

    KAUST Repository

    Downes, Turlough P.

    2013-01-01

    As our understanding of the world around us increases it becomes more challenging to make use of what we already know, and to increase our understanding still further. Computational modeling and simulation have become critical tools in addressing this challenge. The requirements of high-resolution, accurate modeling have outstripped the ability of desktop computers and even small clusters to provide the necessary compute power. Many applications in the scientific and engineering domains now need very large amounts of compute time, while other applications, particularly in the life sciences, frequently have large data I/O requirements. There is thus a growing need for a range of high performance applications which can utilize parallel compute systems effectively, which have efficient data handling strategies and which have the capacity to utilise current and future systems. The High Performance and Scientific Applications topic aims to highlight recent progress in the use of advanced computing and algorithms to address the varied, complex and increasing challenges of modern research throughout both the "hard" and "soft" sciences. This necessitates being able to use large numbers of compute nodes, many of which are equipped with accelerators, and to deal with difficult I/O requirements. © 2013 Springer-Verlag.

  15. Thinking processes used by high-performing students in a computer programming task

    Directory of Open Access Journals (Sweden)

    Marietjie Havenga

    2011-07-01

    Full Text Available Computer programmers must be able to understand programming source code and write programs that execute complex tasks to solve real-world problems. This article is a transdisciplinary study at the intersection of computer programming, education and psychology. It outlines the role of mental processes in the process of programming and indicates how successful thinking processes can support computer science students in writing correct and well-defined programs. A mixed methods approach was used to better understand the thinking activities and programming processes of participating students. Data collection involved both computer programs and students’ reflective thinking processes recorded in their journals. This enabled analysis of psychological dimensions of participants’ thinking processes and their problem-solving activities as they considered a programming problem. Findings indicate that the cognitive, reflective and psychological processes used by high-performing programmers contributed to their success in solving a complex programming problem. Based on the thinking processes of high performers, we propose a model of integrated thinking processes, which can support computer programming students. Keywords: Computer programming, education, mixed methods research, thinking processes.  Disciplines: Computer programming, education, psychology

  16. 10th International Workshop on Parallel Tools for High Performance Computing

    CERN Document Server

    Gracia, José; Hilbrich, Tobias; Knüpfer, Andreas; Resch, Michael; Nagel, Wolfgang

    2017-01-01

    This book presents the proceedings of the 10th International Parallel Tools Workshop, held October 4-5, 2016 in Stuttgart, Germany – a forum to discuss the latest advances in parallel tools. High-performance computing plays an increasingly important role for numerical simulation and modelling in academic and industrial research. At the same time, using large-scale parallel systems efficiently is becoming more difficult. A number of tools addressing parallel program development and analysis have emerged from the high-performance computing community over the last decade, and what may have started as a collection of small helper scripts has now matured into production-grade frameworks. Powerful user interfaces and an extensive body of documentation allow easy usage by non-specialists.

  17. International Conference on Modern Mathematical Methods and High Performance Computing in Science and Technology

    CERN Document Server

    Srivastava, HM; Venturino, Ezio; Resch, Michael; Gupta, Vijay

    2016-01-01

    The book discusses important results in modern mathematical models and high performance computing, such as applied operations research, simulation of operations, statistical modeling and applications, invisibility regions and regular meta-materials, unmanned vehicles, modern radar techniques/SAR imaging, satellite remote sensing, coding, and robotic systems. Furthermore, it is valuable as a reference work and as a basis for further study and research. All contributing authors are respected academicians, scientists and researchers from around the globe. All the papers were presented at the international conference on Modern Mathematical Methods and High Performance Computing in Science & Technology (M3HPCST 2015), held at Raj Kumar Goel Institute of Technology, Ghaziabad, India, from 27–29 December 2015, and peer-reviewed by international experts. The conference provided an exceptional platform for leading researchers, academicians, developers, engineers and technocrats from a broad range of disciplines ...

  18. A simple grid implementation with Berkeley Open Infrastructure for Network Computing using BLAST as a model

    Directory of Open Access Journals (Sweden)

    Watthanai Pinthong

    2016-07-01

    Full Text Available Development of high-throughput technologies, such as Next-generation sequencing, allows thousands of experiments to be performed simultaneously while reducing resource requirements. Consequently, a massive amount of experiment data is now rapidly generated. Nevertheless, the data are not readily usable or meaningful until they are further analysed and interpreted. Due to the size of the data, a high performance computer (HPC) is required for the analysis and interpretation. However, the HPC is expensive and difficult to access. Other means were developed to allow researchers to acquire the power of HPC without a need to purchase and maintain one, such as cloud computing services and grid computing systems. In this study, we implemented grid computing in a computer training center environment using Berkeley Open Infrastructure for Network Computing (BOINC) as a job distributor and data manager, combining all desktop computers to virtualize the HPC. Fifty desktop computers were used for setting up a grid system during the off-hours. In order to test the performance of the grid system, we adapted the Basic Local Alignment Search Tool (BLAST) to the BOINC system. Sequencing results from the Illumina platform were aligned to the human genome database by BLAST on the grid system. The result and processing time were compared to those from a single desktop computer and an HPC. The estimated durations of BLAST analysis for 4 million sequence reads on a desktop PC, HPC and the grid system were 568, 24 and 5 days, respectively. Thus, the grid implementation of BLAST by BOINC is an efficient alternative to the HPC for sequence alignment. The grid implementation by BOINC also helped tap unused computing resources during the off-hours and could be easily modified for other available bioinformatics software.
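
    The grid port described here is embarrassingly parallel: the query set is split into chunks, each chunk becomes a BOINC-style work unit, and the per-host BLAST results are merged afterwards. The sketch below illustrates only the splitting step; the file names, chunk size and output layout are hypothetical, and neither the BOINC nor the BLAST APIs are used.

```python
# Illustrative splitting of a FASTA query file into work units for a
# BOINC-style grid (hypothetical file names; not the BOINC API). Each
# output chunk would be wrapped in a work unit and dispatched to a
# desktop host, which runs BLAST locally on its chunk.
from pathlib import Path

def split_fasta(path, reads_per_chunk=100_000, out_dir="workunits"):
    Path(out_dir).mkdir(exist_ok=True)
    chunk, n_reads, n_chunks = [], 0, 0

    def flush():
        nonlocal chunk, n_reads, n_chunks
        if chunk:
            Path(out_dir, f"chunk_{n_chunks:05d}.fasta").write_text("".join(chunk))
            chunk, n_reads, n_chunks = [], 0, n_chunks + 1

    with open(path) as fh:
        for line in fh:
            if line.startswith(">") and n_reads == reads_per_chunk:
                flush()                      # start a new work unit
            if line.startswith(">"):
                n_reads += 1
            chunk.append(line)
    flush()
    return n_chunks

# print(split_fasta("reads.fasta"), "work units written")  # hypothetical input
```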

  19. Computation for LHC experiments: a worldwide computing grid

    International Nuclear Information System (INIS)

    Fairouz, Malek

    2010-01-01

    In normal operating conditions the LHC detectors are expected to record about 10^10 collisions each year. The processing of all the consequent experimental data is a real computing challenge in terms of equipment, software and organization: it requires sustaining data flows of a few 10^9 octets per second and a recording capacity of a few tens of 10^15 octets each year. In order to meet this challenge, a computing network involving the distribution and sharing of tasks has been set up: the W-LCG grid (Worldwide LHC Computing Grid), which is made up of 4 tiers. Tier 0 is the computer centre at CERN; it is responsible for collecting and recording the raw data from the LHC detectors and for dispatching it to the 11 Tier-1 centres. A Tier 1 is typically a national centre; it is responsible for making a copy of the raw data and for processing it in order to recover relevant data with a physical meaning, and for transferring the results to the 150 Tier-2 centres. A Tier 2 is at the level of an institute or laboratory; it is in charge of the final analysis of the data and of the production of the simulations. Tier-3 sites are at the level of the laboratories; they provide a complementary and local resource to Tier 2 in terms of data analysis. (A.C.)
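
    The tier structure described in this record can be summarised as a small data structure (roles paraphrased from the abstract above; this is not official WLCG documentation):

```python
# Compact summary of the tiered W-LCG model described in the abstract
# (tier counts as stated there; roles paraphrased, not authoritative).
WLCG_TIERS = {
    "Tier-0": {"count": 1,    "where": "CERN computer centre",
               "role": "record raw data from the detectors and ship it to the Tier-1s"},
    "Tier-1": {"count": 11,   "where": "national centres",
               "role": "keep a copy of the raw data and run the main processing passes"},
    "Tier-2": {"count": 150,  "where": "institutes and laboratories",
               "role": "final analysis and production of simulated data"},
    "Tier-3": {"count": None, "where": "individual laboratories",
               "role": "local, complementary analysis resources"},
}

for name, info in WLCG_TIERS.items():
    print(f"{name} ({info['where']}): {info['role']}")
```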

  20. Solving Problems in Various Domains by Hybrid Models of High Performance Computations

    Directory of Open Access Journals (Sweden)

    Yurii Rogozhin

    2014-03-01

    Full Text Available This work presents a hybrid model of high performance computations. The model is based on a membrane system (P system), where some membranes may contain a quantum device that is triggered by the data entering the membrane. This model is intended to take advantage of both the biomolecular and quantum paradigms and to overcome some of their inherent limitations. The proposed approach is demonstrated through two selected problems: SAT and image retrieval.

  1. Physical evaluation of prototype high-performance anti-scatter grids: potential for improved digital radiographic image quality

    International Nuclear Information System (INIS)

    Fetterly, Kenneth A; Schueler, Beth A

    2009-01-01

    Grid evaluation for a screen-film x-ray system has typically included independent measurement of the opposing contrast improvement factor and Bucky factor. Neither of these metrics, however, is appropriate when assessing grid performance in a digital imaging environment. For digital radiographic systems, the benefit of an anti-scatter grid is well characterized by the quantum signal-to-noise ratio improvement factor (K_SNR) provided by the grid. The purpose of this work was to measure K_SNR of prototype grids designed for use with digital radiographic systems. The prototype grids had 5 mm tall lead septa, fiber interspace material, line rate N = 25 and 36 cm^-1 and ratio r = 15 and 21, respectively. The primary and scatter transmission properties of the grids were measured, and K_SNR was evaluated over a phantom thickness range of 10-50 cm. To provide a comparison, the K_SNR of similarly constructed N44r15 and N80r15 grids is also reported. K_SNR of the prototype grids ranged from 1.4 for the 10 cm phantom to 2.4 for the 50 cm phantom. For the thickest phantom, the SNR improvement factor of the prototype grids was 18-83% higher than that of the N44r15 and N80r15 grids, respectively. (note)
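
    For context, the figures of merit named in this record are commonly defined in terms of the grid's primary transmission T_p and total transmission T_t; these are the standard textbook definitions, assumed here rather than quoted from the paper:

```latex
% Assumed standard definitions (not quoted from the paper):
% T_p = primary transmission of the grid, T_t = total transmission.
\[
  \text{contrast improvement factor: } K = \frac{T_p}{T_t}, \qquad
  \text{Bucky factor: } B = \frac{1}{T_t},
\]
\[
  \text{SNR improvement factor: }
  K_{\mathrm{SNR}} = \frac{T_p}{\sqrt{T_t}} = \frac{K}{\sqrt{B}} .
\]
```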

  2. High-performance computing on the Intel Xeon Phi how to fully exploit MIC architectures

    CERN Document Server

    Wang, Endong; Shen, Bo; Zhang, Guangyong; Lu, Xiaowei; Wu, Qing; Wang, Yajuan

    2014-01-01

    The aim of this book is to explain to high-performance computing (HPC) developers how to utilize the Intel® Xeon Phi™ series products efficiently. To that end, it introduces some computing grammar, programming technology and optimization methods for using many-integrated-core (MIC) platforms and also offers tips and tricks for actual use, based on the authors' first-hand optimization experience. The material is organized in three sections. The first section, "Basics of MIC", introduces the fundamentals of MIC architecture and programming, including the specific Intel MIC programming environment

  3. A Case Study on Neural Inspired Dynamic Memory Management Strategies for High Performance Computing.

    Energy Technology Data Exchange (ETDEWEB)

    Vineyard, Craig Michael [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Verzi, Stephen Joseph [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-09-01

    As high performance computing architectures pursue more computational power, there is a need for increased memory capacity and bandwidth as well. A multi-level memory (MLM) architecture addresses this need by combining multiple memory types with different characteristics as varying levels of the same architecture. How to efficiently utilize this memory infrastructure is an unknown challenge, and in this research we sought to investigate whether neural-inspired approaches can meaningfully help with memory management. In particular we explored neurogenesis-inspired resource allocation, and were able to show that a neural-inspired mixed controller policy can beneficially impact how MLM architectures utilize memory.

  4. High Performance Computing - Power Application Programming Interface Specification Version 2.0.

    Energy Technology Data Exchange (ETDEWEB)

    Laros, James H. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Grant, Ryan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Levenhagen, Michael J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Olivier, Stephen Lecler [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Pedretti, Kevin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Ward, H. Lee [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Younge, Andrew J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-03-01

    Measuring and controlling the power and energy consumption of high performance computing systems by various components in the software stack is an active research area. Implementations in lower level software layers are beginning to emerge in some production systems, which is very welcome. To be most effective, a portable interface to measurement and control features would significantly facilitate participation by all levels of the software stack. We present a proposal for a standard power Application Programming Interface (API) that endeavors to cover the entire software space, from generic hardware interfaces to the input from the computer facility manager.

  5. Grids in Europe - a computing infrastructure for science

    International Nuclear Information System (INIS)

    Kranzlmueller, D.

    2008-01-01

    Grids provide virtually unlimited computing power and access to a variety of resources to today's scientists. Moving from a research topic of computer science to a commodity tool for science and research in general, grid infrastructures are being built all around the world. This talk provides an overview of the developments of grids in Europe, the status of the so-called national grid initiatives as well as the efforts towards an integrated European grid infrastructure. The latter, summarized under the title of the European Grid Initiative (EGI), promises a permanent and reliable grid infrastructure and its services in a way similar to research networks today. The talk describes the status of these efforts, the plans for the setup of this pan-European e-Infrastructure, and the benefits for the application communities. (author)

  6. Techniques for grid manipulation and adaptation. [computational fluid dynamics

    Science.gov (United States)

    Choo, Yung K.; Eisemann, Peter R.; Lee, Ki D.

    1992-01-01

    Two approaches have been taken to provide systematic grid manipulation for improved grid quality. One is the control point form (CPF) of algebraic grid generation. It provides explicit control of the physical grid shape and grid spacing through the movement of the control points. It works well in the interactive computer graphics environment and hence can be a good candidate for integration with other emerging technologies. The other approach is grid adaptation using a numerical mapping between the physical space and a parametric space. Grid adaptation is achieved by modifying the mapping functions through the effects of grid control sources. The adaptation process can be repeated in a cyclic manner if satisfactory results are not achieved after a single application.
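
    The adaptation idea in the second approach, redistributing grid points through a mapping modified by control sources, can be illustrated with a generic one-dimensional equidistribution sketch. This is not the control point form or the paper's specific mapping, only the general principle of clustering points where a weight function is large:

```python
# Generic 1D grid-adaptation sketch (illustrative only): redistribute
# grid points so that a weight function w(x), large where resolution is
# needed, is equidistributed between neighbouring points.
import numpy as np

def adapt_grid(x_uniform, weight, n_iter=1):
    x = x_uniform.copy()
    for _ in range(n_iter):
        w = 0.5 * (weight(x[:-1]) + weight(x[1:]))       # cell-averaged weight
        s = np.concatenate(([0.0], np.cumsum(w * np.diff(x))))
        s /= s[-1]                                        # normalized "arc length"
        targets = np.linspace(0.0, 1.0, len(x))
        x = np.interp(targets, s, x)                      # invert the mapping
    return x

# Cluster points near a steep feature at x = 0.5.
weight = lambda x: 1.0 + 50.0 * np.exp(-200.0 * (x - 0.5) ** 2)
x0 = np.linspace(0.0, 1.0, 41)
x_adapted = adapt_grid(x0, weight, n_iter=3)
print(np.round(x_adapted, 3))
```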

  7. GRID : unlimited computing power on your desktop Conference MT17

    CERN Multimedia

    2001-01-01

    The Computational GRID is an analogy to the electrical power grid for computing resources. It decouples the provision of computing, data, and networking from its use, it allows large-scale pooling and sharing of resources distributed world-wide. Every computer, from a desktop to a mainframe or supercomputer, can provide computing power or data for the GRID. The final objective is to plug your computer into the wall and have direct access to huge computing resources immediately, just like plugging-in a lamp to get instant light. The GRID will facilitate world-wide scientific collaborations on an unprecedented scale. It will provide transparent access to major distributed resources of computer power, data, information, and collaborations.

  8. Cloud object store for archive storage of high performance computing data using decoupling middleware

    Science.gov (United States)

    Bent, John M.; Faibish, Sorin; Grider, Gary

    2015-06-30

    Cloud object storage is enabled for archived data, such as checkpoints and results, of high performance computing applications using a middleware process. A plurality of archived files, such as checkpoint files and results, generated by a plurality of processes in a parallel computing system are stored by obtaining the plurality of archived files from the parallel computing system; converting the plurality of archived files to objects using a log structured file system middleware process; and providing the objects for storage in a cloud object storage system. The plurality of processes may run, for example, on a plurality of compute nodes. The log structured file system middleware process may be embodied, for example, as a Parallel Log-Structured File System (PLFS). The log structured file system middleware process optionally executes on a burst buffer node.
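
    Stripped of the middleware specifics, the archiving step described here amounts to walking a set of checkpoint files and writing each one as an object in a cloud store. A miniature sketch follows; the bucket name and local paths are hypothetical, and it uses boto3 directly rather than PLFS or the patented middleware process.

```python
# Miniature sketch of the archiving step (not PLFS or the patented
# middleware): walk a directory of checkpoint files and store each one
# as an object in a cloud object store. Hypothetical bucket and paths;
# valid AWS credentials would be needed to actually run this.
from pathlib import Path
import boto3

def archive_checkpoints(local_dir="checkpoints", bucket="hpc-archive-example"):
    s3 = boto3.client("s3")
    for path in Path(local_dir).rglob("*.ckpt"):
        key = path.relative_to(local_dir).as_posix()   # object name mirrors layout
        s3.upload_file(str(path), bucket, key)
        print("archived", path, "->", f"s3://{bucket}/{key}")

# archive_checkpoints()  # requires credentials and a real bucket
```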

  9. Cloud object store for checkpoints of high performance computing applications using decoupling middleware

    Science.gov (United States)

    Bent, John M.; Faibish, Sorin; Grider, Gary

    2016-04-19

    Cloud object storage is enabled for checkpoints of high performance computing applications using a middleware process. A plurality of files, such as checkpoint files, generated by a plurality of processes in a parallel computing system are stored by obtaining said plurality of files from said parallel computing system; converting said plurality of files to objects using a log structured file system middleware process; and providing said objects for storage in a cloud object storage system. The plurality of processes may run, for example, on a plurality of compute nodes. The log structured file system middleware process may be embodied, for example, as a Parallel Log-Structured File System (PLFS). The log structured file system middleware process optionally executes on a burst buffer node.

  10. RAPPORT: running scientific high-performance computing applications on the cloud.

    Science.gov (United States)

    Cohen, Jeremy; Filippis, Ioannis; Woodbridge, Mark; Bauer, Daniela; Hong, Neil Chue; Jackson, Mike; Butcher, Sarah; Colling, David; Darlington, John; Fuchs, Brian; Harvey, Matt

    2013-01-28

    Cloud computing infrastructure is now widely used in many domains, but one area where there has been more limited adoption is research computing, in particular for running scientific high-performance computing (HPC) software. The Robust Application Porting for HPC in the Cloud (RAPPORT) project took advantage of existing links between computing researchers and application scientists in the fields of bioinformatics, high-energy physics (HEP) and digital humanities, to investigate running a set of scientific HPC applications from these domains on cloud infrastructure. In this paper, we focus on the bioinformatics and HEP domains, describing the applications and target cloud platforms. We conclude that, while there are many factors that need consideration, there is no fundamental impediment to the use of cloud infrastructure for running many types of HPC applications and, in some cases, there is potential for researchers to benefit significantly from the flexibility offered by cloud platforms.

  11. On the impact of quantum computing technology on future developments in high-performance scientific computing

    OpenAIRE

    Möller, Matthias; Vuik, Cornelis

    2017-01-01

    Quantum computing technologies have become a hot topic in academia and industry receiving much attention and financial support from all sides. Building a quantum computer that can be used practically is in itself an outstanding challenge that has become the ‘new race to the moon’. Next to researchers and vendors of future computing technologies, national authorities are showing strong interest in maturing this technology due to its known potential to break many of today’s encryption technique...

  12. Computational Fluid Dynamics (CFD) Computations With Zonal Navier-Stokes Flow Solver (ZNSFLOW) Common High Performance Computing Scalable Software Initiative (CHSSI) Software

    National Research Council Canada - National Science Library

    Edge, Harris

    1999-01-01

    ...), computational fluid dynamics (CFD) 6 project. Under the project, a proven zonal Navier-Stokes solver was rewritten for scalable parallel performance on both shared memory and distributed memory high performance computers...

  13. Can We Build a Truly High Performance Computer Which is Flexible and Transparent?

    KAUST Repository

    Rojas, Jhonathan Prieto

    2013-09-10

    State-of-the-art computers need high performance transistors, which consume ultra-low power resulting in longer battery lifetime. Billions of transistors are integrated neatly using a mature silicon fabrication process to maintain the performance per cost advantage. In that context, low-cost mono-crystalline bulk silicon (100) based high performance transistors are considered the heart of today's computers. One limitation is silicon's rigidity and brittleness. Here we show a generic batch process to convert high performance silicon electronics into a flexible and semi-transparent one while retaining its performance, process compatibility, integration density and cost. We demonstrate high-k/metal gate stack based p-type metal oxide semiconductor field effect transistors on 4 inch silicon fabric released from bulk silicon (100) wafers with a sub-threshold swing of 80 mV dec^-1 and an on/off ratio of near 10^4 within 10% device uniformity, with a minimum bending radius of 5 mm and an average transmittance of ~7% in the visible spectrum.

  14. The MicroGrid: A Scientific Tool for Modeling Computational Grids

    Directory of Open Access Journals (Sweden)

    H.J. Song

    2000-01-01

    Full Text Available The complexity and dynamic nature of the Internet (and the emerging Computational Grid) demand that middleware and applications adapt to the changes in configuration and availability of resources. However, to the best of our knowledge there are no simulation tools which support systematic exploration of dynamic Grid software (or Grid resource) behavior. We describe our vision and initial efforts to build tools to meet these needs. Our MicroGrid simulation tools enable Globus applications to be run in arbitrary virtual grid resource environments, enabling broad experimentation. We describe the design of these tools, and their validation on micro-benchmarks, the NAS parallel benchmarks, and an entire Grid application. These validation experiments show that the MicroGrid can match actual experiments within a few percent (2% to 4%).

  15. Bringing high-performance computing to the biologist's workbench: approaches, applications, and challenges

    International Nuclear Information System (INIS)

    Oehmen, C S; Cannon, W R

    2008-01-01

    Data-intensive and high-performance computing are poised to significantly impact the future of biological research, which is increasingly driven by the prevalence of high-throughput experimental methodologies for genome sequencing, transcriptomics, proteomics, and other areas. Large centers (such as NIH's National Center for Biotechnology Information, The Institute for Genomic Research, and the DOE's Joint Genome Institute) have made extensive use of multiprocessor architectures to deal with some of the challenges of processing, storing and curating exponentially growing genomic and proteomic datasets, thus enabling users to rapidly access a growing public data source, as well as use analysis tools transparently on high-performance computing resources. Applying this computational power to single-investigator analysis, however, often relies on users to provide their own computational resources, forcing them to endure the learning curve of porting, building, and running software on multiprocessor architectures. Solving the next generation of large-scale biology challenges using multiprocessor machines, from small clusters to emerging petascale machines, can most practically be realized if this learning curve can be minimized through a combination of workflow management, data management and resource allocation as well as intuitive interfaces and compatibility with existing common data formats

  16. Integration of the Chinese HPC Grid in ATLAS Distributed Computing

    Science.gov (United States)

    Filipčič, A.; ATLAS Collaboration

    2017-10-01

    Fifteen Chinese High-Performance Computing sites, many of them on the TOP500 list of most powerful supercomputers, are integrated into a common infrastructure providing coherent access to a user through an interface based on a RESTful interface called SCEAPI. These resources have been integrated into the ATLAS Grid production system using a bridge between ATLAS and SCEAPI which translates the authorization and job submission protocols between the two environments. The ARC Computing Element (ARC-CE) forms the bridge using an extended batch system interface to allow job submission to SCEAPI. The ARC-CE was setup at the Institute for High Energy Physics, Beijing, in order to be as close as possible to the SCEAPI front-end interface at the Computing Network Information Center, also in Beijing. This paper describes the technical details of the integration between ARC-CE and SCEAPI and presents results so far with two supercomputer centers, Tianhe-IA and ERA. These two centers have been the pilots for ATLAS Monte Carlo Simulation in SCEAPI and have been providing CPU power since fall 2015.

  17. Integration of the Chinese HPC Grid in ATLAS Distributed Computing

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00081160; The ATLAS collaboration

    2016-01-01

    Fifteen Chinese High Performance Computing sites, many of them on the TOP500 list of most powerful supercomputers, are integrated into a common infrastructure providing coherent access to a user through an interface based on a RESTful interface called SCEAPI. These resources have been integrated into the ATLAS Grid production system using a bridge between ATLAS and SCEAPI which translates the authorization and job submission protocols between the two environments. The ARC Computing Element (ARC CE) forms the bridge using an extended batch system interface to allow job submission to SCEAPI. The ARC CE was setup at the Institute for High Energy Physics, Beijing, in order to be as close as possible to the SCEAPI front-end interface at the Computing Network Information Center, also in Beijing. This paper describes the technical details of the integration between ARC CE and SCEAPI and presents results so far with two supercomputer centers, Tianhe-IA and ERA. These two centers have been the pilots for ATLAS Monte C...

  18. Integration of the Chinese HPC Grid in ATLAS Distributed Computing

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00081160

    2017-01-01

    Fifteen Chinese High-Performance Computing sites, many of them on the TOP500 list of most powerful supercomputers, are integrated into a common infrastructure providing coherent access to a user through an interface based on a RESTful interface called SCEAPI. These resources have been integrated into the ATLAS Grid production system using a bridge between ATLAS and SCEAPI which translates the authorization and job submission protocols between the two environments. The ARC Computing Element (ARC-CE) forms the bridge using an extended batch system interface to allow job submission to SCEAPI. The ARC-CE was setup at the Institute for High Energy Physics, Beijing, in order to be as close as possible to the SCEAPI front-end interface at the Computing Network Information Center, also in Beijing. This paper describes the technical details of the integration between ARC-CE and SCEAPI and presents results so far with two supercomputer centers, Tianhe-IA and ERA. These two centers have been the pilots for ATLAS Monte C...

  19. On the impact of quantum computing technology on future developments in high-performance scientific computing

    NARCIS (Netherlands)

    Möller, M.; Vuik, C.

    2017-01-01

    Quantum computing technologies have become a hot topic in academia and industry receiving much attention and financial support from all sides. Building a quantum computer that can be used practically is in itself an outstanding challenge that has become the ‘new race to the moon’. Next to

  20. A comprehensive approach to decipher biological computation to achieve next generation high-performance exascale computing.

    Energy Technology Data Exchange (ETDEWEB)

    James, Conrad D.; Schiess, Adrian B.; Howell, Jamie; Baca, Michael J.; Partridge, L. Donald; Finnegan, Patrick Sean; Wolfley, Steven L.; Dagel, Daryl James; Spahn, Olga Blum; Harper, Jason C.; Pohl, Kenneth Roy; Mickel, Patrick R.; Lohn, Andrew; Marinella, Matthew

    2013-10-01

    The human brain (volume = 1200 cm^3) consumes 20 W and is capable of performing > 10^16 operations/s. Current supercomputer technology has reached 10^15 operations/s, yet it requires 1500 m^3 and 3 MW, giving the brain a 10^12 advantage in operations/s/W/cm^3. Thus, to reach exascale computation, two achievements are required: 1) improved understanding of computation in biological tissue, and 2) a paradigm shift towards neuromorphic computing where hardware circuits mimic properties of neural tissue. To address 1), we will interrogate corticostriatal networks in mouse brain tissue slices, specifically with regard to their frequency filtering capabilities as a function of input stimulus. To address 2), we will instantiate biological computing characteristics such as multi-bit storage into hardware devices with future computational and memory applications. Resistive memory devices will be modeled, designed, and fabricated in the MESA facility in consultation with our internal and external collaborators.
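
    The quoted 10^12 advantage is consistent with the numbers in the abstract; a rough check, treating 1500 m^3 as 1.5 × 10^9 cm^3:

```latex
% Rough consistency check of the figures quoted in the abstract
% (1500 m^3 = 1.5 x 10^9 cm^3).
\[
  \text{brain: } \frac{10^{16}\,\text{ops/s}}{20\,\text{W}\times 1.2\times 10^{3}\,\text{cm}^3}
  \approx 4\times 10^{11}\ \text{ops/s per W\,cm}^3, \qquad
  \text{supercomputer: } \frac{10^{15}}{3\times 10^{6}\,\text{W}\times 1.5\times 10^{9}\,\text{cm}^3}
  \approx 0.2,
\]
\[
  \text{ratio } \approx \frac{4\times 10^{11}}{0.2} \approx 2\times 10^{12}.
\]
```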

  1. DOE High Performance Computing Operational Review (HPCOR): Enabling Data-Driven Scientific Discovery at HPC Facilities

    Energy Technology Data Exchange (ETDEWEB)

    Gerber, Richard; Allcock, William; Beggio, Chris; Campbell, Stuart; Cherry, Andrew; Cholia, Shreyas; Dart, Eli; England, Clay; Fahey, Tim; Foertter, Fernanda; Goldstone, Robin; Hick, Jason; Karelitz, David; Kelly, Kaki; Monroe, Laura; Prabhat,; Skinner, David; White, Julia

    2014-10-17

    U.S. Department of Energy (DOE) High Performance Computing (HPC) facilities are on the verge of a paradigm shift in the way they deliver systems and services to science and engineering teams. Research projects are producing a wide variety of data at unprecedented scale and level of complexity, with community-specific services that are part of the data collection and analysis workflow. On June 18-19, 2014 representatives from six DOE HPC centers met in Oakland, CA at the DOE High Performance Operational Review (HPCOR) to discuss how they can best provide facilities and services to enable large-scale data-driven scientific discovery at the DOE national laboratories. The report contains findings from that review.

  2. High performance statistical computing with parallel R: applications to biology and climate modelling

    International Nuclear Information System (INIS)

    Samatova, Nagiza F; Branstetter, Marcia; Ganguly, Auroop R; Hettich, Robert; Khan, Shiraj; Kora, Guruprasad; Li, Jiangtian; Ma, Xiaosong; Pan, Chongle; Shoshani, Arie; Yoginath, Srikanth

    2006-01-01

    Ultrascale computing and high-throughput experimental technologies have enabled the production of scientific data about complex natural phenomena. With this opportunity comes a new problem: the massive quantities of data so produced. Answers to fundamental questions about the nature of those phenomena remain largely hidden in the produced data. The goal of this work is to provide a scalable high performance statistical data analysis framework to help scientists perform interactive analyses of these raw data to extract knowledge. Towards this goal we have been developing an open source parallel statistical analysis package, called Parallel R, that lets scientists employ a wide range of statistical analysis routines on high performance shared and distributed memory architectures without having to deal with the intricacies of parallelizing these routines

  3. Removal of apparent singularity in grid computations

    International Nuclear Information System (INIS)

    Jakubovics, J.P.

    1993-01-01

    A self-consistency test for magnetic domain wall models was suggested by Aharoni. The test consists of evaluating the ratio S = ε_wall/ε′_wall, where ε_wall is the wall energy and ε′_wall is the integral of a certain function of the direction cosines of the magnetization, α, β, γ, over the volume occupied by the domain wall. If the computed configuration is a good approximation to one corresponding to an energy minimum, the ratio is close to 1. The integrand of ε′_wall contains terms that are inversely proportional to γ. Since γ passes through zero at the centre of the domain wall, these terms have a singularity at these points. The integral is finite and its evaluation does not usually present any problems when the direction cosines are known in terms of continuous functions. In many cases, significantly better results for magnetization configurations of domain walls can be obtained by computations using finite element methods. The direction cosines are then only known at a set of discrete points, and integration over the domain wall is replaced by summation over these points. Evaluation of ε′_wall becomes inaccurate if the terms in the summation are taken to be the values of the integrand at the grid points, because of the large contribution of points close to where γ changes sign. The self-consistency test has recently been generalised to a larger number of cases. The purpose of this paper is to suggest a method of improving the accuracy of the evaluation of integrals in such cases. Since the self-consistency test has so far only been applied to two-dimensional magnetization configurations, the problem and its solution will be presented for that specific case. Generalisation to three or more dimensions is straightforward

  4. Soft Computing Techniques for the Protein Folding Problem on High Performance Computing Architectures.

    Science.gov (United States)

    Llanes, Antonio; Muñoz, Andrés; Bueno-Crespo, Andrés; García-Valverde, Teresa; Sánchez, Antonia; Arcas-Túnez, Francisco; Pérez-Sánchez, Horacio; Cecilia, José M

    2016-01-01

    The protein-folding problem has been studied extensively during the last fifty years. Understanding the dynamics of a protein's global shape and its influence on biological function can help us discover new and more effective drugs for diseases of pharmacological relevance. Different computational approaches have been developed to predict the three-dimensional arrangement of a protein's atoms from its sequence. However, the computational complexity of this problem makes it necessary to search for new models, novel algorithmic strategies and hardware platforms that provide solutions in a reasonable time frame. In this review we present past and current trends in protein folding simulation from both perspectives: hardware and software. Of particular interest to us are the use of inexact solutions to this computationally hard problem as well as the hardware platforms that have been used for running this kind of Soft Computing technique.

  5. Grid Computing Making the Global Infrastructure a Reality

    CERN Document Server

    Fox, Geoffrey C; Hey, Anthony J G

    2003-01-01

    Grid computing is the application of the resources of many networked computers to a single problem at the same time. Grid computing appears to be a promising trend for three reasons: (1) its ability to make more cost-effective use of a given amount of computer resources; (2) as a way to solve problems that cannot be approached without an enormous amount of computing power; and (3) because it suggests that the resources of many computers can be cooperatively and perhaps synergistically harnessed and managed as a collaboration toward a common objective. A number of corporations, professional groups, university consortia, and other groups have developed or are developing frameworks and software for managing grid computing projects. The European Community (EU) is sponsoring a project for a grid for high-energy physics, earth observation, and biology applications. In the United States, the National Technology Grid is prototyping a computational grid for infrastructure and an access grid for people. Sun Microsystems offers Gri...

  6. HIGH-PERFORMANCE COMPUTING FOR THE STUDY OF EARTH AND ENVIRONMENTAL SCIENCE MATERIALS USING SYNCHROTRON X-RAY COMPUTED MICROTOMOGRAPHY

    International Nuclear Information System (INIS)

    FENG, H.; JONES, K.W.; MCGUIGAN, M.; SMITH, G.J.; SPILETIC, J.

    2001-01-01

    Synchrotron x-ray computed microtomography (CMT) is a non-destructive method for examination of rock, soil, and other types of samples studied in the earth and environmental sciences. The high x-ray intensities of the synchrotron source make possible the acquisition of tomographic volumes at a high rate that requires the application of high-performance computing techniques for data reconstruction to produce the three-dimensional volumes, for their visualization, and for data analysis. These problems are exacerbated by the need to share information between collaborators at widely separated locations over both local and wide-area networks. A summary of the CMT technique and examples of applications are given here together with a discussion of the applications of high-performance computing methods to improve the experimental techniques and analysis of the data

  7. HIGH-PERFORMANCE COMPUTING FOR THE STUDY OF EARTH AND ENVIRONMENTAL SCIENCE MATERIALS USING SYNCHROTRON X-RAY COMPUTED MICROTOMOGRAPHY.

    Energy Technology Data Exchange (ETDEWEB)

    FENG,H.; JONES,K.W.; MCGUIGAN,M.; SMITH,G.J.; SPILETIC,J.

    2001-10-12

    Synchrotron x-ray computed microtomography (CMT) is a non-destructive method for examination of rock, soil, and other types of samples studied in the earth and environmental sciences. The high x-ray intensities of the synchrotron source make possible the acquisition of tomographic volumes at a high rate that requires the application of high-performance computing techniques for data reconstruction to produce the three-dimensional volumes, for their visualization, and for data analysis. These problems are exacerbated by the need to share information between collaborators at widely separated locations over both local and wide-area networks. A summary of the CMT technique and examples of applications are given here together with a discussion of the applications of high-performance computing methods to improve the experimental techniques and analysis of the data.

  8. High performance simulation for the Silva project using the tera computer

    Energy Technology Data Exchange (ETDEWEB)

    Bergeaud, V.; La Hargue, J.P.; Mougery, F. [CS Communication and Systemes, 92 - Clamart (France); Boulet, M.; Scheurer, B. [CEA Bruyeres-le-Chatel, 91 - Bruyeres-le-Chatel (France); Le Fur, J.F.; Comte, M.; Benisti, D.; Lamare, J. de; Petit, A. [CEA Saclay, 91 - Gif sur Yvette (France)

    2003-07-01

    In the context of the SILVA Project (Atomic Vapor Laser Isotope Separation), numerical simulation of the plant-scale propagation of laser beams through uranium vapour was a great challenge. The PRODIGE code has been developed to achieve this goal. Here we focus on the task of achieving high performance simulation on the TERA computer. We describe the main issues in optimizing the parallelization of the PRODIGE code on TERA and discuss the advantages and drawbacks of the implemented diagonal parallelization scheme. As a consequence, it was found fruitful to improve the code in three respects: memory allocation, MPI communications and interconnection network bandwidth usage. We stress the value of MPI-IO in this context and the benefit obtained for production computations on TERA. Finally, we illustrate our developments and report performance measurements reflecting the good parallelization properties of PRODIGE on the TERA computer. The code is currently used for demonstrating the feasibility of laser propagation at a plant enrichment level and for preparing the 2003 Menphis experiment. We conclude by emphasizing the contribution of high performance TERA simulation to the project. (authors)
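
    The abstract highlights MPI-IO as one of the profitable optimizations. The fragment below is a minimal, hedged sketch (in Python with mpi4py rather than the language used for PRODIGE) of the collective-write pattern that lets every rank write its slice of a field into one shared file without funnelling data through a single process; the file name and array sizes are invented for the example.

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# each rank owns a contiguous slice of a global 1-D field (sizes invented)
n_local = 1_000_000
local = np.full(n_local, rank, dtype=np.float64)

# collective MPI-IO write: one shared file, no gather onto a single rank
fh = MPI.File.Open(comm, "field.bin", MPI.MODE_CREATE | MPI.MODE_WRONLY)
offset = rank * local.nbytes
fh.Write_at_all(offset, local)
fh.Close()
```

    Launched with, e.g., `mpirun -n 8 python write_field.py`, each rank writes its own byte range of the shared file concurrently.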

  9. High performance simulation for the Silva project using the tera computer

    International Nuclear Information System (INIS)

    Bergeaud, V.; La Hargue, J.P.; Mougery, F.; Boulet, M.; Scheurer, B.; Le Fur, J.F.; Comte, M.; Benisti, D.; Lamare, J. de; Petit, A.

    2003-01-01

    In the context of the SILVA Project (Atomic Vapor Laser Isotope Separation), numerical simulation of the plant-scale propagation of laser beams through uranium vapour was a great challenge. The PRODIGE code has been developed to achieve this goal. Here we focus on the task of achieving high performance simulation on the TERA computer. We describe the main issues in optimizing the parallelization of the PRODIGE code on TERA and discuss the advantages and drawbacks of the implemented diagonal parallelization scheme. As a consequence, it was found fruitful to improve the code in three respects: memory allocation, MPI communications and interconnection network bandwidth usage. We stress the value of MPI-IO in this context and the benefit obtained for production computations on TERA. Finally, we illustrate our developments and report performance measurements reflecting the good parallelization properties of PRODIGE on the TERA computer. The code is currently used for demonstrating the feasibility of laser propagation at a plant enrichment level and for preparing the 2003 Menphis experiment. We conclude by emphasizing the contribution of high performance TERA simulation to the project. (authors)

  10. DEISA2: supporting and developing a European high-performance computing ecosystem

    International Nuclear Information System (INIS)

    Lederer, H

    2008-01-01

    The DEISA Consortium has deployed and operated the Distributed European Infrastructure for Supercomputing Applications. Through the EU FP7 DEISA2 project (funded for three years as of May 2008), the consortium is continuing to support and enhance the distributed high-performance computing infrastructure and its activities and services relevant for applications enabling, operation, and technologies, as these are indispensable for the effective support of computational sciences for high-performance computing (HPC). The service-provisioning model will be extended from one that supports single projects to one supporting virtual European communities. Collaborative activities will also be carried out with new European and other international initiatives. Of strategic importance is cooperation with the PRACE project, which is preparing for the installation of a limited number of leadership-class Tier-0 supercomputers in Europe. The key role and aim of DEISA will be to deliver a turnkey operational solution for a persistent European HPC ecosystem that will integrate national Tier-1 centers and the new Tier-0 centers

  11. LHCb Distributed Data Analysis on the Computing Grid

    CERN Document Server

    Paterson, S; Parkes, C

    2006-01-01

    LHCb is one of the four Large Hadron Collider (LHC) experiments based at CERN, the European Organisation for Nuclear Research. The LHC experiments will start taking an unprecedented amount of data when they come online in 2007. Since no single institute has the compute resources to handle this data, resources must be pooled to form the Grid. Where the Internet has made it possible to share information stored on computers across the world, Grid computing aims to provide access to computing power and storage capacity on geographically distributed systems. LHCb software applications must work seamlessly on the Grid allowing users to efficiently access distributed compute resources. It is essential to the success of the LHCb experiment that physicists can access data from the detector, stored in many heterogeneous systems, to perform distributed data analysis. This thesis describes the work performed to enable distributed data analysis for the LHCb experiment on the LHC Computing Grid.

  12. High Performance Computing and Storage Requirements for Nuclear Physics: Target 2017

    Energy Technology Data Exchange (ETDEWEB)

    Gerber, Richard [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Wasserman, Harvey [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2014-04-30

    In April 2014, NERSC, ASCR, and the DOE Office of Nuclear Physics (NP) held a review to characterize high performance computing (HPC) and storage requirements for NP research through 2017. This review is the 12th in a series of reviews held by NERSC and Office of Science program offices that began in 2009. It is the second for NP, and the final in the second round of reviews that covered the six Office of Science program offices. This report is the result of that review

  13. Interactive Data Exploration for High-Performance Fluid Flow Computations through Porous Media

    KAUST Repository

    Perovic, Nevena

    2014-09-01

    © 2014 IEEE. The advent of huge data in high-performance computing (HPC) applications such as fluid flow simulations usually hinders the interactive processing and exploration of simulation results. Such interactive data exploration not only allows scientists to 'play' with their data but also to visualise huge (distributed) data sets in an efficient and easy way. Therefore, we propose an HPC data exploration service based on a sliding window concept that enables researchers to access remote data (available on a supercomputer or cluster) during simulation runtime without exceeding any bandwidth limitations between the HPC back-end and the user front-end.
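
    The paper's service is not described in enough detail to reproduce here, but the sketch below illustrates the underlying sliding-window idea in Python: the front-end requests only the currently viewed window of a large memory-mapped field, and the window is strided down until the transfer fits a byte budget. The file name, shapes and budget are assumptions made for the example.

```python
import numpy as np

def read_window(field, start, shape, max_bytes=1 << 20):
    """Return the requested window of `field`, coarsened by striding until it
    fits within `max_bytes` (a crude stand-in for a bandwidth limit)."""
    window = field[tuple(slice(s, s + n) for s, n in zip(start, shape))]
    stride = 1
    while window[::stride, ::stride].nbytes > max_bytes:
        stride += 1
    return np.ascontiguousarray(window[::stride, ::stride]), stride

# a large field kept on disk (memory-mapped, ~256 MB), never loaded in full
field = np.lib.format.open_memmap("field.npy", mode="w+",
                                  dtype=np.float32, shape=(8192, 8192))
view, stride = read_window(field, start=(2048, 2048), shape=(2048, 2048))
print(view.shape, "stride", stride)
```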

  14. Commercialization issues and funding opportunities for high-performance optoelectronic computing modules

    Science.gov (United States)

    Hessenbruch, John M.; Guilfoyle, Peter S.

    1997-01-01

    Low power, optoelectronic integrated circuits are being developed for high speed switching and data processing applications. These high performance optoelectronic computing modules consist of three primary components: vertical cavity surface emitting lasers, diffractive optical interconnect elements, and detector/amplifier/laser driver arrays. Following the design and fabrication of an HPOC module prototype, selected commercial funding sources will be evaluated to support a product development stage. These include the formation of a strategic alliance with one or more microprocessor or telecommunications vendors, and/or equity investment from one or more venture capital firms.

  15. Failure detection in high-performance clusters and computers using chaotic map computations

    Science.gov (United States)

    Rao, Nageswara S.

    2015-09-01

    A programmable media includes a processing unit capable of independent operation in a machine that is capable of executing 10^18 floating point operations per second. The processing unit is in communication with a memory element and an interconnect that couples computing nodes. The programmable media includes a logical unit configured to execute arithmetic functions, comparative functions, and/or logical functions. The processing unit is configured to detect computing component failures, memory element failures and/or interconnect failures by executing programming threads that generate one or more chaotic map trajectories. The central processing unit or graphical processing unit is configured to detect a computing component failure, memory element failure and/or an interconnect failure through an automated comparison of signal trajectories generated by the chaotic maps.
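
    A schematic illustration of the idea, not the patented implementation: every compute element iterates the same chaotic map from the same seed, and because chaotic trajectories amplify any perturbation exponentially, even a tiny arithmetic fault on one element soon shows up when trajectories are compared. The logistic map, thresholds and fault injection below are invented for the demonstration.

```python
import numpy as np

def logistic_trajectory(x0, steps, fault_at=None, fault_eps=1e-12):
    """Iterate x -> 3.9 x (1 - x); optionally inject a tiny fault at one step."""
    x, traj = x0, []
    for k in range(steps):
        x = 3.9 * x * (1.0 - x)
        if k == fault_at:
            x += fault_eps          # simulated bit-level arithmetic fault
        traj.append(x)
    return np.array(traj)

reference = logistic_trajectory(0.123456, 200)
nodes = {"node-0": logistic_trajectory(0.123456, 200),
         "node-1": logistic_trajectory(0.123456, 200, fault_at=50)}  # faulty

for name, traj in nodes.items():
    diverged = np.abs(traj - reference) > 1e-6
    step = int(np.argmax(diverged)) if diverged.any() else None
    print(name, "first detected divergence at step", step)
```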

  16. High-performance simulation-based algorithms for an alpine ski racer’s trajectory optimization in heterogeneous computer systems

    Directory of Open Access Journals (Sweden)

    Dębski Roman

    2014-09-01

    Full Text Available Effective, simulation-based trajectory optimization algorithms adapted to heterogeneous computers are studied with reference to a problem taken from alpine ski racing (the presented solution is probably the most general one published so far). The key idea behind these algorithms is to use a grid-based discretization scheme to transform the continuous optimization problem into a search problem over a specially constructed finite graph, and then to apply dynamic programming to find an approximation of the global solution. In the analyzed example this is the minimum-time ski line, represented as a piecewise-linear function (a method for eliminating unfeasible solutions is proposed). Serial and parallel versions of the basic optimization algorithm are presented in detail (pseudo-code, time and memory complexity). Possible extensions of the basic algorithm are also described. The implementation of these algorithms is based on OpenCL. The included experimental results show that contemporary heterogeneous computers can be treated as μ-HPC platforms: they offer high performance (the best speedup was equal to 128) while remaining energy and cost efficient (which is crucial in embedded systems, e.g., trajectory planners of autonomous robots). The presented algorithms can be applied to many trajectory optimization problems, including those having a black-box-represented performance measure
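
    To make the graph construction concrete, the toy Python sketch below discretizes lateral position at each gate into graph nodes and runs a forward dynamic-programming pass to pick an approximately minimum-time piecewise-linear line. The constant-speed cost model, gate geometry and feasibility band are invented stand-ins for the simulation-based edge costs the paper actually uses.

```python
import numpy as np

n_layers, n_nodes = 10, 41
lateral = np.linspace(-20.0, 20.0, n_nodes)      # candidate lateral positions [m]
gates = np.linspace(-10.0, 10.0, n_layers)       # gate centre at each layer [m]
layer_spacing = 25.0                             # downhill distance between gates [m]
speed = 18.0                                     # crude constant speed [m/s]
band = 4.0                                       # max lateral distance from a gate

INF = float("inf")
cost = np.full((n_layers, n_nodes), INF)
parent = np.zeros((n_layers, n_nodes), dtype=int)
cost[0, np.abs(lateral - gates[0]) <= band] = 0.0    # any feasible start at gate 0

for L in range(1, n_layers):
    for j in np.where(np.abs(lateral - gates[L]) <= band)[0]:
        seg = np.sqrt(layer_spacing ** 2 + (lateral[j] - lateral) ** 2)
        total = cost[L - 1] + seg / speed            # time from every previous node
        parent[L, j] = int(np.argmin(total))
        cost[L, j] = total[parent[L, j]]

# backtrack the approximately minimum-time, piecewise-linear line
j = int(np.argmin(cost[-1]))
line = [lateral[j]]
for L in range(n_layers - 1, 0, -1):
    j = parent[L, j]
    line.append(lateral[j])
print(f"minimum time ~ {cost[-1].min():.2f} s, line: {list(reversed(line))}")
```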

  17. ATLAS grid compute cluster with virtualized service nodes

    International Nuclear Information System (INIS)

    Mejia, J; Stonjek, S; Kluth, S

    2010-01-01

    The ATLAS Computing Grid consists of several hundred compute clusters distributed around the world as part of the Worldwide LHC Computing Grid (WLCG). The Grid middleware and the ATLAS software which has to be installed on each site, often require a certain Linux distribution and sometimes even specific version thereof. On the other hand, mostly due to maintenance reasons, computer centres install the same operating system and version on all computers. This might lead to problems with the Grid middleware if the local version is different from the one for which it has been developed. At RZG we partly solved this conflict by using virtualization technology for the service nodes. We will present the setup used at RZG and show how it helped to solve the problems described above. In addition we will illustrate the additional advantages gained by the above setup.

  18. Challenges and opportunities of modeling plasma–surface interactions in tungsten using high-performance computing

    Energy Technology Data Exchange (ETDEWEB)

    Wirth, Brian D., E-mail: bdwirth@utk.edu [Department of Nuclear Engineering, University of Tennessee, Knoxville, TN 37996 (United States); Nuclear Science and Engineering Directorate, Oak Ridge National Laboratory, Oak Ridge, TN (United States); Hammond, K.D. [Department of Nuclear Engineering, University of Tennessee, Knoxville, TN 37996 (United States); Krasheninnikov, S.I. [University of California, San Diego, La Jolla, CA (United States); Maroudas, D. [University of Massachusetts, Amherst, Amherst, MA 01003 (United States)

    2015-08-15

    The performance of plasma facing components (PFCs) is critical for ITER and future magnetic fusion reactors. The ITER divertor will be tungsten, which is the primary candidate material for future reactors. Recent experiments involving tungsten exposure to low-energy helium plasmas reveal significant surface modification, including the growth of nanometer-scale tendrils of “fuzz” and formation of nanometer-sized bubbles in the near-surface region. The large span of spatial and temporal scales governing plasma surface interactions are among the challenges to modeling divertor performance. Fortunately, recent innovations in computational modeling, increasingly powerful high-performance computers, and improved experimental characterization tools provide a path toward self-consistent, experimentally validated models of PFC and divertor performance. Recent advances in understanding tungsten–helium interactions are reviewed, including such processes as helium clustering, which serve as nuclei for gas bubbles; and trap mutation, dislocation loop punching and bubble bursting; which together initiate surface morphological modification.

  19. High Performance Computing - Power Application Programming Interface Specification Version 1.4

    Energy Technology Data Exchange (ETDEWEB)

    Laros III, James H. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); DeBonis, David [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Grant, Ryan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Kelly, Suzanne M. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Levenhagen, Michael J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Olivier, Stephen Lecler [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Pedretti, Kevin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2016-10-01

    Measuring and controlling the power and energy consumption of high performance computing systems by various components in the software stack is an active research area [13, 3, 5, 10, 4, 21, 19, 16, 7, 17, 20, 18, 11, 1, 6, 14, 12]. Implementations in lower level software layers are beginning to emerge in some production systems, which is very welcome. To be most effective, a portable interface to measurement and control features would significantly facilitate participation by all levels of the software stack. We present a proposal for a standard power Application Programming Interface (API) that endeavors to cover the entire software space, from generic hardware interfaces to the input from the computer facility manager.

  20. Use of high performance computing to examine the effectiveness of aquifer remediation

    International Nuclear Information System (INIS)

    Tompson, A.F.B.; Ashby, S.F.; Falgout, R.D.; Smith, S.G.; Fogwell, T.W.; Loosmore, G.A.

    1994-06-01

    Large-scale simulation of fluid flow and chemical migration is being used to study the effectiveness of pump-and-treat restoration of a contaminated, saturated aquifer. A three-element approach focusing on geostatistical representations of heterogeneous aquifers, high-performance computing strategies for simulating flow, migration, and reaction processes in large three-dimensional systems, and highly-resolved simulations of flow and chemical migration in porous formations will be discussed. Results from a preliminary application of this approach to examine pumping behavior at a real, heterogeneous field site will be presented. Future activities will emphasize parallel computations in larger, dynamic, and nonlinear (two-phase) flow problems as well as improved interpretive methods for defining detailed material property distributions

  1. Polymer waveguides for electro-optical integration in data centers and high-performance computers.

    Science.gov (United States)

    Dangel, Roger; Hofrichter, Jens; Horst, Folkert; Jubin, Daniel; La Porta, Antonio; Meier, Norbert; Soganci, Ibrahim Murat; Weiss, Jonas; Offrein, Bert Jan

    2015-02-23

    To satisfy the intra- and inter-system bandwidth requirements of future data centers and high-performance computers, low-cost low-power high-throughput optical interconnects will become a key enabling technology. To tightly integrate optics with the computing hardware, particularly in the context of CMOS-compatible silicon photonics, optical printed circuit boards using polymer waveguides are considered as a formidable platform. IBM Research has already demonstrated the essential silicon photonics and interconnection building blocks. A remaining challenge is electro-optical packaging, i.e., the connection of the silicon photonics chips with the system. In this paper, we present a new single-mode polymer waveguide technology and a scalable method for building the optical interface between silicon photonics chips and single-mode polymer waveguides.

  2. Analysis and modeling of social influence in high performance computing workloads

    KAUST Repository

    Zheng, Shuai

    2011-01-01

    Social influence among users (e.g., collaboration on a project) creates bursty behavior in the underlying high performance computing (HPC) workloads. Using representative HPC and cluster workload logs, this paper identifies, analyzes, and quantifies the level of social influence across HPC users. We show the existence of a social graph that is characterized by a pattern of dominant users and followers. This pattern also follows a power-law distribution, which is consistent with those observed in mainstream social networks. Given its potential impact on HPC workloads prediction and scheduling, we propose a fast-converging, computationally-efficient online learning algorithm for identifying social groups. Extensive evaluation shows that our online algorithm can (1) quickly identify the social relationships by using a small portion of incoming jobs and (2) can efficiently track group evolution over time. © 2011 Springer-Verlag.

  3. Parallel Backprojection: A Case Study in High-Performance Reconfigurable Computing

    Directory of Open Access Journals (Sweden)

    Cordes Ben

    2009-01-01

    Full Text Available High-performance reconfigurable computing (HPRC) is a novel approach to provide large-scale computing power to modern scientific applications. Using both general-purpose processors and FPGAs allows application designers to exploit fine-grained and coarse-grained parallelism, achieving high degrees of speedup. One scientific application that benefits from this technique is backprojection, an image formation algorithm that can be used as part of a synthetic aperture radar (SAR) processing system. We present an implementation of backprojection for SAR on an HPRC system. Using simulated data taken at a variety of ranges, our implementation runs over 200 times faster than a similar software program, with an overall application speedup better than 50x. The backprojection application is easily parallelizable, achieving near-linear speedup when run on multiple nodes of a clustered HPRC system. The results presented can be applied to other systems and other algorithms with similar characteristics.
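
    As a point of reference for what the kernel computes, the sketch below is a heavily simplified serial backprojector in Python/NumPy: for each pulse it looks up, for every pixel, the range-profile sample at that pixel's distance from the antenna and accumulates it (the matched-filter phase correction a real SAR backprojector applies is omitted). Array shapes and names are assumptions. Because each pulse's contribution is independent, the outer loop is exactly the part that parallelizes across FPGA or cluster nodes.

```python
import numpy as np

def backproject(pulses, platform_pos, range_bins, grid_x, grid_y):
    """pulses: (n_pulses, n_range) range-compressed profiles
       platform_pos: (n_pulses, 3) antenna position for each pulse
       range_bins: (n_range,) range of each profile sample [m]
       grid_x, grid_y: 1-D pixel coordinates of the output image [m]"""
    image = np.zeros((grid_y.size, grid_x.size), dtype=pulses.dtype)
    dr = range_bins[1] - range_bins[0]
    for p in range(pulses.shape[0]):                  # independent per pulse
        px, py, pz = platform_pos[p]
        dist = np.sqrt((grid_x[None, :] - px) ** 2 +
                       (grid_y[:, None] - py) ** 2 + pz ** 2)
        idx = np.clip(np.round((dist - range_bins[0]) / dr).astype(int),
                      0, pulses.shape[1] - 1)
        image += pulses[p, idx]                       # gather matching samples
    return image
```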

  4. Parallel Backprojection: A Case Study in High-Performance Reconfigurable Computing

    Directory of Open Access Journals (Sweden)

    2009-03-01

    Full Text Available High-performance reconfigurable computing (HPRC) is a novel approach to provide large-scale computing power to modern scientific applications. Using both general-purpose processors and FPGAs allows application designers to exploit fine-grained and coarse-grained parallelism, achieving high degrees of speedup. One scientific application that benefits from this technique is backprojection, an image formation algorithm that can be used as part of a synthetic aperture radar (SAR) processing system. We present an implementation of backprojection for SAR on an HPRC system. Using simulated data taken at a variety of ranges, our implementation runs over 200 times faster than a similar software program, with an overall application speedup better than 50x. The backprojection application is easily parallelizable, achieving near-linear speedup when run on multiple nodes of a clustered HPRC system. The results presented can be applied to other systems and other algorithms with similar characteristics.

  5. Challenges and opportunities of modeling plasma–surface interactions in tungsten using high-performance computing

    International Nuclear Information System (INIS)

    Wirth, Brian D.; Hammond, K.D.; Krasheninnikov, S.I.; Maroudas, D.

    2015-01-01

    The performance of plasma facing components (PFCs) is critical for ITER and future magnetic fusion reactors. The ITER divertor will be tungsten, which is the primary candidate material for future reactors. Recent experiments involving tungsten exposure to low-energy helium plasmas reveal significant surface modification, including the growth of nanometer-scale tendrils of “fuzz” and formation of nanometer-sized bubbles in the near-surface region. The large span of spatial and temporal scales governing plasma surface interactions are among the challenges to modeling divertor performance. Fortunately, recent innovations in computational modeling, increasingly powerful high-performance computers, and improved experimental characterization tools provide a path toward self-consistent, experimentally validated models of PFC and divertor performance. Recent advances in understanding tungsten–helium interactions are reviewed, including such processes as helium clustering, which serve as nuclei for gas bubbles; and trap mutation, dislocation loop punching and bubble bursting; which together initiate surface morphological modification

  6. 9th International Workshop on Parallel Tools for High Performance Computing

    CERN Document Server

    Hilbrich, Tobias; Niethammer, Christoph; Gracia, José; Nagel, Wolfgang; Resch, Michael

    2016-01-01

    High Performance Computing (HPC) remains a driver that offers huge potentials and benefits for science and society. However, a profound understanding of the computational matters and specialized software is needed to arrive at effective and efficient simulations. Dedicated software tools are important parts of the HPC software landscape, and support application developers. Even though a tool is by definition not a part of an application, but rather a supplemental piece of software, it can make a fundamental difference during the development of an application. Such tools aid application developers in the context of debugging, performance analysis, and code optimization, and therefore make a major contribution to the development of robust and efficient parallel software. This book introduces a selection of the tools presented and discussed at the 9th International Parallel Tools Workshop held in Dresden, Germany, September 2-3, 2015, which offered an established forum for discussing the latest advances in paral...

  7. Acceleration of FDTD mode solver by high-performance computing techniques.

    Science.gov (United States)

    Han, Lin; Xi, Yanping; Huang, Wei-Ping

    2010-06-21

    A two-dimensional (2D) compact finite-difference time-domain (FDTD) mode solver is developed based on wave-equation formalism in combination with the matrix pencil method (MPM). The method is validated for calculation of both real guided and complex leaky modes of typical optical waveguides against a benchmark finite-difference (FD) eigenmode solver. By taking advantage of the inherently parallel nature of the FDTD algorithm, the mode solver is implemented on graphics processing units (GPUs) using the compute unified device architecture (CUDA). It is demonstrated that this high-performance computing technique leads to significant acceleration of the FDTD mode solver, with more than a 30-fold improvement in computational efficiency in comparison with the conventional FDTD mode solver running on the CPU of a standard desktop computer. The computational efficiency of the accelerated FDTD method is of the same order of magnitude as that of the standard finite-difference eigenmode solver and yet requires much less memory (e.g., less than 10%). Therefore, the new method may serve as an efficient, accurate and robust tool for mode calculation of optical waveguides even when conventional eigenvalue mode solvers are no longer applicable due to memory limitations.
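
    The paper's compact mode solver and matrix-pencil post-processing are not reproduced here, but the sketch below shows the kind of regular Yee-grid update that makes FDTD map so naturally onto GPU threads: a plain 2D TMz vacuum update in NumPy with normalized units, written only to illustrate the data-parallel stencil structure.

```python
import numpy as np

def fdtd_step(Ez, Hx, Hy, courant=0.5):
    """One normalized 2D TMz Yee-grid time step (vacuum, unit cells)."""
    # update magnetic fields from the curl of Ez
    Hx[:, :-1] -= courant * (Ez[:, 1:] - Ez[:, :-1])
    Hy[:-1, :] += courant * (Ez[1:, :] - Ez[:-1, :])
    # update Ez from the curl of H (interior points only)
    Ez[1:-1, 1:-1] += courant * ((Hy[1:-1, 1:-1] - Hy[:-2, 1:-1]) -
                                 (Hx[1:-1, 1:-1] - Hx[1:-1, :-2]))
    return Ez, Hx, Hy

# usage sketch: excite a point source and march the fields in time
n = 200
Ez = np.zeros((n, n)); Hx = np.zeros((n, n)); Hy = np.zeros((n, n))
Ez[n // 2, n // 2] = 1.0
for _ in range(100):
    Ez, Hx, Hy = fdtd_step(Ez, Hx, Hy)
print(np.abs(Ez).max())
```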

  8. Using high performance interconnects in a distributed computing and mass storage environment

    International Nuclear Information System (INIS)

    Ernst, M.

    1994-01-01

    Detector Collaborations of the HERA Experiments typically involve more than 500 physicists from a few dozen institutes. These physicists require access to large amounts of data in a fully transparent manner. Important issues include Distributed Mass Storage Management Systems in a Distributed and Heterogeneous Computing Environment. At the very center of a distributed system, including tens of CPUs and network-attached mass storage peripherals, are the communication links. Today scientists are witnessing an integration of computing and communication technology, with the 'network' becoming the computer. This contribution reports on a centrally operated computing facility for the HERA Experiments at DESY, including Symmetric Multiprocessor Machines (84 Processors), presently more than 400 GByte of magnetic disk and 40 TB of automated tape storage, tied together by a HIPPI 'network'. Focussing on the High Performance Interconnect technology, details will be provided about the HIPPI-based 'Backplane' configured around a 20 Gigabit/s Multi Media Router and the performance and efficiency of the related computer interfaces

  9. Human and Robotic Space Mission Use Cases for High-Performance Spaceflight Computing

    Science.gov (United States)

    Some, Raphael; Doyle, Richard; Bergman, Larry; Whitaker, William; Powell, Wesley; Johnson, Michael; Goforth, Montgomery; Lowry, Michael

    2013-01-01

    Spaceflight computing is a key resource in NASA space missions and a core determining factor of spacecraft capability, with ripple effects throughout the spacecraft, end-to-end system, and mission. Onboard computing can be aptly viewed as a "technology multiplier" in that advances provide direct dramatic improvements in flight functions and capabilities across the NASA mission classes, and enable new flight capabilities and mission scenarios, increasing science and exploration return. Space-qualified computing technology, however, has not advanced significantly in well over ten years and the current state of the practice fails to meet the near- to mid-term needs of NASA missions. Recognizing this gap, the NASA Game Changing Development Program (GCDP), under the auspices of the NASA Space Technology Mission Directorate, commissioned a study on space-based computing needs, looking out 15-20 years. The study resulted in a recommendation to pursue high-performance spaceflight computing (HPSC) for next-generation missions, and a decision to partner with the Air Force Research Lab (AFRL) in this development.

  10. The Centre of High-Performance Scientific Computing, Geoverbund, ABC/J - Geosciences enabled by HPSC

    Science.gov (United States)

    Kollet, Stefan; Görgen, Klaus; Vereecken, Harry; Gasper, Fabian; Hendricks-Franssen, Harrie-Jan; Keune, Jessica; Kulkarni, Ketan; Kurtz, Wolfgang; Sharples, Wendy; Shrestha, Prabhakar; Simmer, Clemens; Sulis, Mauro; Vanderborght, Jan

    2016-04-01

    The Centre of High-Performance Scientific Computing (HPSC TerrSys) was founded in 2011 to establish a centre of competence in high-performance scientific computing in terrestrial systems and the geosciences, enabling fundamental and applied geoscientific research in the Geoverbund ABC/J (geoscientific research alliance of the Universities of Aachen, Cologne, Bonn and the Research Centre Jülich, Germany). The specific goals of HPSC TerrSys are to achieve relevance at the national and international level in (i) the development and application of HPSC technologies in the geoscientific community; (ii) student education; (iii) HPSC services and support also to the wider geoscientific community; and (iv) the industry and public sectors via, e.g., useful applications and data products. A key feature of HPSC TerrSys is the Simulation Laboratory Terrestrial Systems, which is located at the Jülich Supercomputing Centre (JSC) and provides extensive capabilities with respect to porting, profiling, tuning and performance monitoring of geoscientific software in JSC's supercomputing environment. We will present a summary of success stories of HPSC applications including integrated terrestrial model development, parallel profiling and its application from watersheds to the continent; massively parallel data assimilation using physics-based models and ensemble methods; quasi-operational terrestrial water and energy monitoring; and convection-permitting climate simulations over Europe. The success stories stress the need for a formalized education of students in the application of HPSC technologies in the future.

  11. Implementing Molecular Dynamics for Hybrid High Performance Computers - 1. Short Range Forces

    International Nuclear Information System (INIS)

    Brown, W. Michael; Wang, Peng; Plimpton, Steven J.; Tharrington, Arnold N.

    2011-01-01

    The use of accelerators such as general-purpose graphics processing units (GPGPUs) has become popular in scientific computing applications due to their low cost, impressive floating-point capabilities, high memory bandwidth, and low electrical power requirements. Hybrid high performance computers, machines with more than one type of floating-point processor, are now becoming more prevalent due to these advantages. In this work, we discuss several important issues in porting a large molecular dynamics code for use on parallel hybrid machines - (1) choosing a hybrid parallel decomposition that works on central processing units (CPUs) with distributed memory and accelerator cores with shared memory, (2) minimizing the amount of code that must be ported for efficient acceleration, (3) utilizing the available processing power from both many-core CPUs and accelerators, and (4) choosing a programming model for acceleration. We present our solution to each of these issues for short-range force calculation in the molecular dynamics package LAMMPS. We describe algorithms for efficient short-range force calculation on hybrid high performance machines. We describe a new approach for dynamic load balancing of work between CPU and accelerator cores. We describe the Geryon library that allows a single code to compile with both CUDA and OpenCL for use on a variety of accelerators. Finally, we present results on a parallel test cluster containing 32 Fermi GPGPUs and 180 CPU cores.
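
    For orientation, the snippet below is a minimal all-pairs Lennard-Jones force evaluation with a cutoff, written in NumPy. It is only a sketch of the kind of short-range kernel being accelerated: production codes such as LAMMPS use neighbour lists, domain decomposition and the CPU/accelerator split described above rather than an O(N^2) loop, and the parameter values here are arbitrary.

```python
import numpy as np

def lj_forces(pos, epsilon=1.0, sigma=1.0, rcut=2.5):
    """All-pairs Lennard-Jones forces with a spherical cutoff."""
    rij = pos[:, None, :] - pos[None, :, :]          # displacement vectors
    r2 = np.sum(rij ** 2, axis=-1)
    np.fill_diagonal(r2, np.inf)                     # no self-interaction
    mask = r2 < rcut ** 2
    inv_r2 = np.where(mask, 1.0 / r2, 0.0)
    s6 = (sigma ** 2 * inv_r2) ** 3                  # (sigma/r)^6
    # F_ij = 24*eps/r^2 * (2 (sigma/r)^12 - (sigma/r)^6) * r_ij
    coef = 24.0 * epsilon * inv_r2 * (2.0 * s6 ** 2 - s6)
    return np.sum(coef[:, :, None] * rij, axis=1)

pos = np.random.default_rng(0).uniform(0.0, 5.0, size=(64, 3))
print(lj_forces(pos).shape)
```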

  12. Grid Computing Das wahre Web 2.0?

    CERN Document Server

    2008-01-01

    'Grid computing is a further development of the World Wide Web, the next generation so to speak,' said (1) Franz-Josef Pfreundt (Fraunhofer-Institut für Techno- und Wirtschaftsmathematik) as early as CeBIT 2003, pointing to NASA as the avant-garde of the Grid.

  13. Colgate one of first to build global computing grid

    CERN Multimedia

    Magno, L

    2003-01-01

    "Colgate-Palmolive Co. has become one of the first organizations in the world to build an enterprise network based on the grid computing concept. Since mid-August, the consumer products firm has been working to connect approximately 50 geographically dispersed Unix servers and storage devices in an enterprise grid network" (1 page).

  14. PNNL supercomputer to become largest computing resource on the Grid

    CERN Multimedia

    2002-01-01

    Hewlett Packard announced that the US DOE Pacific Northwest National Laboratory will connect a 9.3-teraflop HP supercomputer to the DOE Science Grid. This will be the largest supercomputer attached to a computer grid anywhere in the world (1 page).

  15. The Experiment Method for Manufacturing Grid Development on Single Computer

    Institute of Scientific and Technical Information of China (English)

    XIAO Youan; ZHOU Zude

    2006-01-01

    In this paper, an experiment method for Manufacturing Grid application system development in a single-personal-computer environment is proposed. The characteristic of the proposed method is the construction of a full prototype Manufacturing Grid application system hosted on a single personal computer using virtual machine technology. Firstly, all the Manufacturing Grid physical resource nodes are built on an abstraction layer of a single personal computer with virtual machine technology. Secondly, the virtual Manufacturing Grid resource nodes are connected through a virtual network and the application software is deployed on each Manufacturing Grid node. A prototype Manufacturing Grid application system running on a single personal computer is thus obtained, and experiments can be carried out on this foundation. Compared with known experiment methods for Manufacturing Grid application system development, the proposed method retains their advantages while being inexpensive and simple to operate, and it yields trustworthy experimental results easily. The Manufacturing Grid application system constructed with the proposed method has high scalability, stability and reliability, and it can be migrated to the real application environment rapidly.

  16. High Performance Computing Facility Operational Assessment, CY 2011 Oak Ridge Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Baker, Ann E [ORNL; Barker, Ashley D [ORNL; Bland, Arthur S Buddy [ORNL; Boudwin, Kathlyn J. [ORNL; Hack, James J [ORNL; Kendall, Ricky A [ORNL; Messer, Bronson [ORNL; Rogers, James H [ORNL; Shipman, Galen M [ORNL; Wells, Jack C [ORNL; White, Julia C [ORNL; Hudson, Douglas L [ORNL

    2012-02-01

    Oak Ridge National Laboratory's Leadership Computing Facility (OLCF) continues to deliver the most powerful resources in the U.S. for open science. At 2.33 petaflops peak performance, the Cray XT Jaguar delivered more than 1.4 billion core hours in calendar year (CY) 2011 to researchers around the world for computational simulations relevant to national and energy security; advancing the frontiers of knowledge in physical sciences and areas of biological, medical, environmental, and computer sciences; and providing world-class research facilities for the nation's science enterprise. Users reported more than 670 publications this year arising from their use of OLCF resources. Of these we report the 300 in this review that are consistent with guidance provided. Scientific achievements by OLCF users cut across all range scales from atomic to molecular to large-scale structures. At the atomic scale, researchers discovered that the anomalously long half-life of Carbon-14 can be explained by calculating, for the first time, the very complex three-body interactions between all the neutrons and protons in the nucleus. At the molecular scale, researchers combined experimental results from LBL's light source and simulations on Jaguar to discover how DNA replication continues past a damaged site so a mutation can be repaired later. Other researchers combined experimental results from ORNL's Spallation Neutron Source and simulations on Jaguar to reveal the molecular structure of ligno-cellulosic material used in bioethanol production. This year, Jaguar has been used to do billion-cell CFD calculations to develop shock wave compression turbo machinery as a means to meet DOE goals for reducing carbon sequestration costs. General Electric used Jaguar to calculate the unsteady flow through turbo machinery to learn what efficiencies the traditional steady flow assumption is hiding from designers. Even a 1% improvement in turbine design can save the nation

  17. Grid computing : enabling a vision for collaborative research

    International Nuclear Information System (INIS)

    von Laszewski, G.

    2002-01-01

    In this paper the authors provide a motivation for Grid computing based on a vision to enable a collaborative research environment. The authors' vision goes beyond the connection of hardware resources. They argue that with an infrastructure such as the Grid, new modalities for collaborative research are enabled. They provide an overview showing why Grid research is difficult, and they present a number of management-related issues that must be addressed to make Grids a reality. They list projects that provide solutions to subsets of these issues

  18. Fault tolerance in computational grids: perspectives, challenges, and issues.

    Science.gov (United States)

    Haider, Sajjad; Nazir, Babar

    2016-01-01

    Computational grids are established with the intention of providing shared access to hardware- and software-based resources, with special reference to increased computational capabilities. Fault tolerance is one of the most important issues faced by computational grids. The main contribution of this survey is the creation of an extended classification of problems that occur in computational grid environments. The proposed classification will help researchers, developers, and maintainers of grids to understand the types of issues to be anticipated. Moreover, different types of problems, such as omission, interaction, and timing-related problems, have been identified that need to be handled on various layers of the computational grid. In this survey, an analysis and examination of fault tolerance and fault detection mechanisms is also performed. Our conclusion is that a dependable and reliable grid can only be established when more emphasis is placed on fault identification. Moreover, our survey reveals that adaptive and intelligent fault identification and tolerance techniques can improve the dependability of grid working environments.

  19. High Performance Computing Facility Operational Assessment, FY 2011 Oak Ridge Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Baker, Ann E [ORNL; Bland, Arthur S Buddy [ORNL; Hack, James J [ORNL; Barker, Ashley D [ORNL; Boudwin, Kathlyn J. [ORNL; Kendall, Ricky A [ORNL; Messer, Bronson [ORNL; Rogers, James H [ORNL; Shipman, Galen M [ORNL; Wells, Jack C [ORNL; White, Julia C [ORNL

    2011-08-01

    Oak Ridge National Laboratory's Leadership Computing Facility (OLCF) continues to deliver the most powerful resources in the U.S. for open science. At 2.33 petaflops peak performance, the Cray XT Jaguar delivered more than 1.5 billion core hours in calendar year (CY) 2010 to researchers around the world for computational simulations relevant to national and energy security; advancing the frontiers of knowledge in physical sciences and areas of biological, medical, environmental, and computer sciences; and providing world-class research facilities for the nation's science enterprise. Scientific achievements by OLCF users range from collaboration with university experimentalists to produce a working supercapacitor that uses atom-thick sheets of carbon materials to finely determining the resolution requirements for simulations of coal gasifiers and their components, thus laying the foundation for development of commercial-scale gasifiers. OLCF users are pushing the boundaries with software applications sustaining more than one petaflop of performance in the quest to illuminate the fundamental nature of electronic devices. Other teams of researchers are working to resolve predictive capabilities of climate models, to refine and validate genome sequencing, and to explore the most fundamental materials in nature - quarks and gluons - and their unique properties. Details of these scientific endeavors - not possible without access to leadership-class computing resources - are detailed in Section 4 of this report and in the INCITE in Review. Effective operations of the OLCF play a key role in the scientific missions and accomplishments of its users. This Operational Assessment Report (OAR) will delineate the policies, procedures, and innovations implemented by the OLCF to continue delivering a petaflop-scale resource for cutting-edge research. The 2010 operational assessment of the OLCF yielded recommendations that have been addressed (Reference Section 1) and

  20. Application of High Performance Computing to Earthquake Hazard and Disaster Estimation in Urban Area

    Directory of Open Access Journals (Sweden)

    Muneo Hori

    2018-02-01

    Full Text Available Integrated earthquake simulation (IES) is a seamless simulation that analyzes all processes of earthquake hazard and disaster. There are two difficulties in carrying out IES, namely the requirement of large-scale computation and the requirement of numerous analysis models for the structures in an urban area; they are addressed by taking advantage of high performance computing (HPC) and by developing a system of automated model construction. HPC is a key element in developing IES, as it is needed to analyze wave propagation and amplification processes in an underground structure; a high-fidelity model of the underground structure has more than 100 billion degrees of freedom. Examples of IES for the Tokyo Metropolis are presented; the numerical computation is performed using the K computer, the supercomputer of Japan. The earthquake hazard and disaster for a given earthquake scenario are estimated by the ground motion simulation and the urban-area seismic response simulation, respectively, for a target area of 10,000 m × 10,000 m.

  1. A Framework for Debugging Geoscience Projects in a High Performance Computing Environment

    Science.gov (United States)

    Baxter, C.; Matott, L.

    2012-12-01

    High performance computing (HPC) infrastructure has become ubiquitous in today's world with the emergence of commercial cloud computing and academic supercomputing centers. Teams of geoscientists, hydrologists and engineers can take advantage of this infrastructure to undertake large research projects - for example, linking one or more site-specific environmental models with soft computing algorithms, such as heuristic global search procedures, to perform parameter estimation and predictive uncertainty analysis, and/or design least-cost remediation systems. However, the size, complexity and distributed nature of these projects can make identifying failures in the associated numerical experiments using conventional ad-hoc approaches both time-consuming and ineffective. To address these problems a multi-tiered debugging framework has been developed. The framework allows for quickly isolating and remedying a number of potential experimental failures, including: failures in the HPC scheduler; bugs in the soft computing code; bugs in the modeling code; and permissions and access control errors. The utility of the framework is demonstrated via application to a series of over 200,000 numerical experiments involving a suite of 5 heuristic global search algorithms and 15 mathematical test functions serving as cheap analogues for the simulation-based optimization of pump-and-treat subsurface remediation systems.

  2. High-performance computational fluid dynamics: a custom-code approach

    International Nuclear Information System (INIS)

    Fannon, James; Náraigh, Lennon Ó; Loiseau, Jean-Christophe; Valluri, Prashant; Bethune, Iain

    2016-01-01

    We introduce a modified and simplified version of the pre-existing fully parallelized three-dimensional Navier–Stokes flow solver known as TPLS. We demonstrate how the simplified version can be used as a pedagogical tool for the study of computational fluid dynamics (CFDs) and parallel computing. TPLS is at its heart a two-phase flow solver, and uses calls to a range of external libraries to accelerate its performance. However, in the present context we narrow the focus of the study to basic hydrodynamics and parallel computing techniques, and the code is therefore simplified and modified to simulate pressure-driven single-phase flow in a channel, using only relatively simple Fortran 90 code with MPI parallelization, but no calls to any other external libraries. The modified code is analysed in order to both validate its accuracy and investigate its scalability up to 1000 CPU cores. Simulations are performed for several benchmark cases in pressure-driven channel flow, including a turbulent simulation, wherein the turbulence is incorporated via the large-eddy simulation technique. The work may be of use to advanced undergraduate and graduate students as an introductory study in CFDs, while also providing insight for those interested in more general aspects of high-performance computing. (paper)
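
    In the same pedagogical spirit, the short NumPy script below solves the 1D reduction of the pressure-driven channel-flow benchmark, du/dt = G/ρ + ν d²u/dy², with an explicit finite-difference scheme and checks it against the steady parabolic profile. Grid size, viscosity and forcing are arbitrary choices, and this is of course far simpler than the Fortran 90/MPI solver described in the paper.

```python
import numpy as np

ny, H = 101, 1.0                     # grid points, channel height
nu, G_over_rho = 1e-2, 1.0           # kinematic viscosity, pressure-gradient term
y = np.linspace(0.0, H, ny)
dy = y[1] - y[0]
dt = 0.4 * dy ** 2 / nu              # within the explicit diffusion stability limit
u = np.zeros(ny)                     # no-slip walls: u = 0 at y = 0 and y = H

for _ in range(20000):
    lap = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dy ** 2
    u[1:-1] += dt * (G_over_rho + nu * lap)   # walls stay at u = 0

u_exact = G_over_rho / (2.0 * nu) * y * (H - y)   # steady parabolic profile
print("max relative error:", np.max(np.abs(u - u_exact)) / u_exact.max())
```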

  3. High-performance computational fluid dynamics: a custom-code approach

    Science.gov (United States)

    Fannon, James; Loiseau, Jean-Christophe; Valluri, Prashant; Bethune, Iain; Náraigh, Lennon Ó.

    2016-07-01

    We introduce a modified and simplified version of the pre-existing fully parallelized three-dimensional Navier-Stokes flow solver known as TPLS. We demonstrate how the simplified version can be used as a pedagogical tool for the study of computational fluid dynamics (CFDs) and parallel computing. TPLS is at its heart a two-phase flow solver, and uses calls to a range of external libraries to accelerate its performance. However, in the present context we narrow the focus of the study to basic hydrodynamics and parallel computing techniques, and the code is therefore simplified and modified to simulate pressure-driven single-phase flow in a channel, using only relatively simple Fortran 90 code with MPI parallelization, but no calls to any other external libraries. The modified code is analysed in order to both validate its accuracy and investigate its scalability up to 1000 CPU cores. Simulations are performed for several benchmark cases in pressure-driven channel flow, including a turbulent simulation, wherein the turbulence is incorporated via the large-eddy simulation technique. The work may be of use to advanced undergraduate and graduate students as an introductory study in CFDs, while also providing insight for those interested in more general aspects of high-performance computing.

  4. Security Implications of Typical Grid Computing Usage Scenarios

    International Nuclear Information System (INIS)

    Humphrey, Marty; Thompson, Mary R.

    2001-01-01

    A Computational Grid is a collection of heterogeneous computers and resources spread across multiple administrative domains with the intent of providing users uniform access to these resources. There are many ways to access the resources of a Computational Grid, each with unique security requirements and implications for both the resource user and the resource provider. A comprehensive set of Grid usage scenarios is presented and analyzed with regard to security requirements such as authentication, authorization, integrity, and confidentiality. The main value of these scenarios and the associated security discussions is to provide a library of situations against which an application designer can match, thereby facilitating security-aware application use and development from the initial stages of the application design and invocation. A broader goal of these scenarios is to increase the awareness of security issues in Grid Computing

  5. Security Implications of Typical Grid Computing Usage Scenarios

    Energy Technology Data Exchange (ETDEWEB)

    Humphrey, Marty; Thompson, Mary R.

    2001-06-05

    A Computational Grid is a collection of heterogeneous computers and resources spread across multiple administrative domains with the intent of providing users uniform access to these resources. There are many ways to access the resources of a Computational Grid, each with unique security requirements and implications for both the resource user and the resource provider. A comprehensive set of Grid usage scenarios is presented and analyzed with regard to security requirements such as authentication, authorization, integrity, and confidentiality. The main value of these scenarios and the associated security discussions is to provide a library of situations against which an application designer can match, thereby facilitating security-aware application use and development from the initial stages of the application design and invocation. A broader goal of these scenarios is to increase the awareness of security issues in Grid Computing.

  6. Taiwan links up to world's first LHC computing grid project

    CERN Multimedia

    2003-01-01

    "Taiwan's Academia Sinica was linked up to the Large Hadron Collider (LHC) Computing Grid Project last week to work jointly with 12 other countries to construct the world's largest and most powerful particle accelerator" (1/2 page).

  7. Performance Evaluation of a Mobile Wireless Computational Grid ...

    African Journals Online (AJOL)

    PROF. OLIVER OSUAGWA

    2015-12-01

    Dec 1, 2015 ... Abstract. This work developed and simulated a mathematical model for a mobile wireless computational Grid ... which mobile modes will process the tasks .... evaluation are analytical modelling, simulation ... MATLAB 7.10.0.

  8. Optimal usage of computing grid network in the fields of nuclear fusion computing task

    International Nuclear Information System (INIS)

    Tenev, D.

    2006-01-01

    Nowadays nuclear power is becoming a main source of energy. To make its usage more efficient, scientists have created complicated simulation models, which require powerful computers. Grid computing is the answer to the need for powerful and accessible computing resources. The article examines and estimates the optimal configuration of the grid environment for complicated nuclear fusion computing tasks. (author)

  9. BEAGLE: an application programming interface and high-performance computing library for statistical phylogenetics.

    Science.gov (United States)

    Ayres, Daniel L; Darling, Aaron; Zwickl, Derrick J; Beerli, Peter; Holder, Mark T; Lewis, Paul O; Huelsenbeck, John P; Ronquist, Fredrik; Swofford, David L; Cummings, Michael P; Rambaut, Andrew; Suchard, Marc A

    2012-01-01

    Phylogenetic inference is fundamental to our understanding of most aspects of the origin and evolution of life, and in recent years, there has been a concentration of interest in statistical approaches such as Bayesian inference and maximum likelihood estimation. Yet, for large data sets and realistic or interesting models of evolution, these approaches remain computationally demanding. High-throughput sequencing can yield data for thousands of taxa, but scaling to such problems using serial computing often necessitates the use of nonstatistical or approximate approaches. The recent emergence of graphics processing units (GPUs) provides an opportunity to leverage their excellent floating-point computational performance to accelerate statistical phylogenetic inference. A specialized library for phylogenetic calculation would allow existing software packages to make more effective use of available computer hardware, including GPUs. Adoption of a common library would also make it easier for other emerging computing architectures, such as field programmable gate arrays, to be used in the future. We present BEAGLE, an application programming interface (API) and library for high-performance statistical phylogenetic inference. The API provides a uniform interface for performing phylogenetic likelihood calculations on a variety of compute hardware platforms. The library includes a set of efficient implementations and can currently exploit hardware including GPUs using NVIDIA CUDA, central processing units (CPUs) with Streaming SIMD Extensions and related processor supplementary instruction sets, and multicore CPUs via OpenMP. To demonstrate the advantages of a common API, we have incorporated the library into several popular phylogenetic software packages. The BEAGLE library is free open source software licensed under the Lesser GPL and available from http://beagle-lib.googlecode.com. An example client program is available as public domain software.
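
    To show the kind of arithmetic such a library accelerates (this is not BEAGLE's API, just a hand-rolled NumPy sketch), the example below computes the likelihood of a single alignment site on a small fixed tree under the Jukes-Cantor model using Felsenstein's pruning algorithm; the tree, branch lengths and observed bases are invented.

```python
import numpy as np

def jc_pmatrix(t):
    """Jukes-Cantor transition probability matrix for branch length t."""
    p_same = 0.25 + 0.75 * np.exp(-4.0 * t / 3.0)
    p_diff = 0.25 - 0.25 * np.exp(-4.0 * t / 3.0)
    P = np.full((4, 4), p_diff)
    np.fill_diagonal(P, p_same)
    return P

def tip_partial(base):
    """Partial likelihood vector for an observed tip state (order A, C, G, T)."""
    v = np.zeros(4)
    v["ACGT".index(base)] = 1.0
    return v

# Tree ((A:0.1, C:0.2):0.05, G:0.3), evaluated by post-order pruning
internal = (jc_pmatrix(0.1) @ tip_partial("A")) * (jc_pmatrix(0.2) @ tip_partial("C"))
root = (jc_pmatrix(0.05) @ internal) * (jc_pmatrix(0.3) @ tip_partial("G"))
site_likelihood = 0.25 * root.sum()     # uniform stationary base frequencies
print(site_likelihood)
```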

  10. High Performance Reduced Order Models for Wind Turbines with Full-Scale Converters Applied on Grid Interconnection Studies

    DEFF Research Database (Denmark)

    Pereira, Heverton A.; F. Cupertino, Allan; Teodorescu, Remus

    2014-01-01

    Wind power has achieved technological evolution, and Grid Code (GC) requirements forced wind industry consolidation in the last three decades. However, more studies are necessary to understand how the dynamics inherent in this energy source interact with the power system. Traditional energy...... of Absolute Error (NIAE). Models are analyzed during wind speed variations and balanced voltage dip. During faults, WPPs must be able to supply reactive power to the grid, and this characteristic is analyzed. Using the proposed performance index, it is possible to conclude if a reduced order model is suitable...

  11. CMS on the GRID: Toward a fully distributed computing architecture

    International Nuclear Information System (INIS)

    Innocente, Vincenzo

    2003-01-01

    The computing systems required to collect, analyse and store the physics data at LHC would need to be distributed and global in scope. CMS is actively involved in several grid-related projects to develop and deploy a fully distributed computing architecture. We present here recent developments of tools for automating job submission and for serving data to remote analysis stations. Plans for further test and deployment of a production grid are also described

  12. The 20 Tera flop Erasmus Computing Grid (ECG).

    NARCIS (Netherlands)

    T.A. Knoch (Tobias); L.V. de Zeeuw (Luc)

    2006-01-01

    The Set-Up of the 20 Teraflop Erasmus Computing Grid: To meet the enormous computational needs of life-science research as well as clinical diagnostics and treatment, the Hogeschool Rotterdam and the Erasmus Medical Center are currently setting up one of the largest desktop computing

  13. The 20 Tera flop Erasmus Computing Grid (ECG)

    NARCIS (Netherlands)

    T.A. Knoch (Tobias); L.V. de Zeeuw (Luc)

    2009-01-01

    The Set-Up of the 20 Teraflop Erasmus Computing Grid: To meet the enormous computational needs of life-science research as well as clinical diagnostics and treatment, the Hogeschool Rotterdam and the Erasmus Medical Center are currently setting up one of the largest desktop computing

  14. Low-cost, high-performance and efficiency computational photometer design

    Science.gov (United States)

    Siewert, Sam B.; Shihadeh, Jeries; Myers, Randall; Khandhar, Jay; Ivanov, Vitaly

    2014-05-01

    Researchers at the University of Alaska Anchorage and University of Colorado Boulder have built a low-cost, high-performance and efficient drop-in-place Computational Photometer (CP) to test in field applications ranging from port security and safety monitoring to environmental compliance monitoring and surveying. The CP integrates off-the-shelf visible spectrum cameras with near to long wavelength infrared detectors and high resolution digital snapshots in a single device. The proof of concept combines three or more detectors into a single multichannel imaging system that can time-correlate read-out, capture, and image process all of the channels concurrently with high performance and energy efficiency. The dual-channel continuous read-out is combined with a third high definition digital snapshot capability and has been designed using an FPGA (Field Programmable Gate Array) to capture, decimate, down-convert, re-encode, and transform images from two standard definition CCD (Charge Coupled Device) cameras at 30 Hz. The continuous stereo vision can be time-correlated to megapixel high definition snapshots. This proof of concept has been fabricated as a four-layer PCB (Printed Circuit Board) suitable for use in education and research for low-cost, high-efficiency field monitoring applications that need multispectral and three-dimensional imaging capabilities. Initial testing is in progress and includes field testing in ports, potential test flights in unmanned aerial systems, and future planned missions to image harsh environments in the Arctic including volcanic plumes, ice formation, and Arctic marine life.

  15. Secure Enclaves: An Isolation-centric Approach for Creating Secure High Performance Computing Environments

    Energy Technology Data Exchange (ETDEWEB)

    Aderholdt, Ferrol [Tennessee Technological Univ., Cookeville, TN (United States); Caldwell, Blake A. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Hicks, Susan Elaine [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Koch, Scott M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Naughton, III, Thomas J. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Pelfrey, Daniel S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Pogge, James R [Tennessee Technological Univ., Cookeville, TN (United States); Scott, Stephen L [Tennessee Technological Univ., Cookeville, TN (United States); Shipman, Galen M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Sorrillo, Lawrence [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2017-01-01

    High performance computing environments are often used for a wide variety of workloads ranging from simulation, data transformation and analysis, and complex workflows to name just a few. These systems may process data at various security levels but in so doing are often enclaved at the highest security posture. This approach places significant restrictions on the users of the system even when processing data at a lower security level and exposes data at higher levels of confidentiality to a much broader population than otherwise necessary. The traditional approach of isolation, while effective in establishing security enclaves poses significant challenges for the use of shared infrastructure in HPC environments. This report details current state-of-the-art in virtualization, reconfigurable network enclaving via Software Defined Networking (SDN), and storage architectures and bridging techniques for creating secure enclaves in HPC environments.

  16. A C++11 implementation of arbitrary-rank tensors for high-performance computing

    Science.gov (United States)

    Aragón, Alejandro M.

    2014-11-01

    This article discusses an efficient implementation of tensors of arbitrary rank by using some of the idioms introduced by the recently published C++ ISO Standard (C++11). With the aim of providing a basic building block for high-performance computing, a single Array class template is carefully crafted, from which vectors, matrices, and even higher-order tensors can be created. An expression template facility is also built around the array class template to provide convenient mathematical syntax. As a result, by using templates, an extra high-level layer is added to the C++ language when dealing with algebraic objects and their operations, without compromising performance. The implementation is tested running on both CPU and GPU.

  17. Additive Manufacturing and High-Performance Computing: a Disruptive Latent Technology

    Science.gov (United States)

    Goodwin, Bruce

    2015-03-01

    This presentation will discuss the relationship between recent advances in Additive Manufacturing (AM) technology, High-Performance Computing (HPC) simulation and design capabilities, and related advances in Uncertainty Quantification (UQ), and then examines their impacts upon national and international security. The presentation surveys how AM accelerates the fabrication process, while HPC combined with UQ provides a fast track for the engineering design cycle. The combination of AM and HPC/UQ almost eliminates the engineering design and prototype iterative cycle, thereby dramatically reducing cost of production and time-to-market. These methods thereby present significant benefits for US national interests, both civilian and military, in an age of austerity. Finally, considering cyber security issues and the advent of the ``cloud,'' these disruptive, currently latent technologies may well enable proliferation and so challenge both nuclear and non-nuclear aspects of international security.

  18. 7th International Workshop on Parallel Tools for High Performance Computing

    CERN Document Server

    Gracia, José; Nagel, Wolfgang; Resch, Michael

    2014-01-01

    Current advances in High Performance Computing (HPC) increasingly impact efficient software development workflows. Programmers for HPC applications need to consider trends such as increased core counts, multiple levels of parallelism, reduced memory per core, and I/O system challenges in order to derive well performing and highly scalable codes. At the same time, the increasing complexity adds further sources of program defects. While novel programming paradigms and advanced system libraries provide solutions for some of these challenges, appropriate supporting tools are indispensable. Such tools aid application developers in debugging, performance analysis, or code optimization and therefore make a major contribution to the development of robust and efficient parallel software. This book introduces a selection of the tools presented and discussed at the 7th International Parallel Tools Workshop, held in Dresden, Germany, September 3-4, 2013.  

  19. Exploring Infiniband Hardware Virtualization in OpenNebula towards Efficient High-Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Pais Pitta de Lacerda Ruivo, Tiago [IIT, Chicago; Bernabeu Altayo, Gerard [Fermilab; Garzoglio, Gabriele [Fermilab; Timm, Steven [Fermilab; Kim, Hyun-Woo [Fermilab; Noh, Seo-Young [KISTI, Daejeon; Raicu, Ioan [IIT, Chicago

    2014-11-11

    It has been widely accepted that software virtualization has a big negative impact on high-performance computing (HPC) application performance. This work explores the potential use of Infiniband hardware virtualization in an OpenNebula cloud towards the efficient support of MPI-based workloads. We have implemented, deployed, and tested an Infiniband network on the FermiCloud private Infrastructure-as-a-Service (IaaS) cloud. To avoid software virtualization and thereby minimize the virtualization overhead, we employed a technique called Single Root Input/Output Virtualization (SRIOV). Our solution spanned modifications to the Linux hypervisor as well as the OpenNebula manager. We evaluated the performance of the hardware virtualization on up to 56 virtual machines connected by up to 8 DDR Infiniband network links, with micro-benchmarks (latency and bandwidth) as well as with an MPI-intensive application (the HPL Linpack benchmark).

  20. Matrix multiplication operations with data pre-conditioning in a high performance computing architecture

    Science.gov (United States)

    Eichenberger, Alexandre E; Gschwind, Michael K; Gunnels, John A

    2013-11-05

    Mechanisms for performing matrix multiplication operations with data pre-conditioning in a high performance computing architecture are provided. A vector load operation is performed to load a first vector operand of the matrix multiplication operation to a first target vector register. A load and splat operation is performed to load an element of a second vector operand and replicate the element to each of a plurality of elements of a second target vector register. A multiply add operation is performed on elements of the first target vector register and elements of the second target vector register to generate a partial product of the matrix multiplication operation. The partial product of the matrix multiplication operation is accumulated with other partial products of the matrix multiplication operation.
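
    To make the described data flow concrete, the following is a minimal NumPy sketch (a conceptual stand-in, not the patented vector instructions): one column of the first operand is vector-loaded, one element of the second operand is splatted across a vector, and a multiply-add accumulates the partial product.

    ```python
    import numpy as np

    def matmul_load_splat(A, B):
        """Conceptual model of the load / load-and-splat / multiply-add pattern.

        For each output column j and each position p along the shared dimension:
          * vector-load a column of the first operand A (the "first target vector register"),
          * load one element of the second operand B and splat it across a vector
            (the "second target vector register"),
          * multiply-add the two vectors into the accumulating partial product.
        """
        m, k = A.shape
        k2, n = B.shape
        assert k == k2, "inner dimensions must match"
        C = np.zeros((m, n))
        for j in range(n):
            for p in range(k):
                a_vec = A[:, p]                  # vector load of the first operand
                b_splat = np.full(m, B[p, j])    # load-and-splat one element of B
                C[:, j] += a_vec * b_splat       # multiply-add, accumulate partial product
        return C

    A = np.random.rand(4, 3)
    B = np.random.rand(3, 5)
    assert np.allclose(matmul_load_splat(A, B), A @ B)
    ```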

  1. Homemade Buckeye-Pi: A Learning Many-Node Platform for High-Performance Parallel Computing

    Science.gov (United States)

    Amooie, M. A.; Moortgat, J.

    2017-12-01

    We report on the "Buckeye-Pi" cluster, the supercomputer developed in The Ohio State University School of Earth Sciences from 128 inexpensive Raspberry Pi (RPi) 3 Model B single-board computers. Each RPi is equipped with fast Quad Core 1.2GHz ARMv8 64bit processor, 1GB of RAM, and 32GB microSD card for local storage. Therefore, the cluster has a total RAM of 128GB that is distributed on the individual nodes and a flash capacity of 4TB with 512 processors, while it benefits from low power consumption, easy portability, and low total cost. The cluster uses the Message Passing Interface protocol to manage the communications between each node. These features render our platform the most powerful RPi supercomputer to date and suitable for educational applications in high-performance-computing (HPC) and handling of large datasets. In particular, we use the Buckeye-Pi to implement optimized parallel codes in our in-house simulator for subsurface media flows with the goal of achieving a massively-parallelized scalable code. We present benchmarking results for the computational performance across various number of RPi nodes. We believe our project could inspire scientists and students to consider the proposed unconventional cluster architecture as a mainstream and a feasible learning platform for challenging engineering and scientific problems.

  2. Cloud CPFP: a shotgun proteomics data analysis pipeline using cloud and high performance computing.

    Science.gov (United States)

    Trudgian, David C; Mirzaei, Hamid

    2012-12-07

    We have extended the functionality of the Central Proteomics Facilities Pipeline (CPFP) to allow use of remote cloud and high performance computing (HPC) resources for shotgun proteomics data processing. CPFP has been modified to include modular local and remote scheduling for data processing jobs. The pipeline can now be run on a single PC or server, a local cluster, a remote HPC cluster, and/or the Amazon Web Services (AWS) cloud. We provide public images that allow easy deployment of CPFP in its entirety in the AWS cloud. This significantly reduces the effort necessary to use the software, and allows proteomics laboratories to pay for compute time ad hoc, rather than obtaining and maintaining expensive local server clusters. Alternatively the Amazon cloud can be used to increase the throughput of a local installation of CPFP as necessary. We demonstrate that cloud CPFP allows users to process data at higher speed than local installations but with similar cost and lower staff requirements. In addition to the computational improvements, the web interface to CPFP is simplified, and other functionalities are enhanced. The software is under active development at two leading institutions and continues to be released under an open-source license at http://cpfp.sourceforge.net.

  3. High-performance implementation of Chebyshev filter diagonalization for interior eigenvalue computations

    Energy Technology Data Exchange (ETDEWEB)

    Pieper, Andreas [Ernst-Moritz-Arndt-Universität Greifswald (Germany); Kreutzer, Moritz [Friedrich-Alexander-Universität Erlangen-Nürnberg (Germany); Alvermann, Andreas, E-mail: alvermann@physik.uni-greifswald.de [Ernst-Moritz-Arndt-Universität Greifswald (Germany); Galgon, Martin [Bergische Universität Wuppertal (Germany); Fehske, Holger [Ernst-Moritz-Arndt-Universität Greifswald (Germany); Hager, Georg [Friedrich-Alexander-Universität Erlangen-Nürnberg (Germany); Lang, Bruno [Bergische Universität Wuppertal (Germany); Wellein, Gerhard [Friedrich-Alexander-Universität Erlangen-Nürnberg (Germany)

    2016-11-15

    We study Chebyshev filter diagonalization as a tool for the computation of many interior eigenvalues of very large sparse symmetric matrices. In this technique the subspace projection onto the target space of wanted eigenvectors is approximated with filter polynomials obtained from Chebyshev expansions of window functions. After the discussion of the conceptual foundations of Chebyshev filter diagonalization we analyze the impact of the choice of the damping kernel, search space size, and filter polynomial degree on the computational accuracy and effort, before we describe the necessary steps towards a parallel high-performance implementation. Because Chebyshev filter diagonalization avoids the need for matrix inversion it can deal with matrices and problem sizes that are presently not accessible with rational function methods based on direct or iterative linear solvers. To demonstrate the potential of Chebyshev filter diagonalization for large-scale problems of this kind we include as an example the computation of the 10^2 innermost eigenpairs of a topological insulator matrix with dimension 10^9 derived from quantum physics applications.
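
    A compact NumPy sketch of the core kernel is given below: a Jackson-damped Chebyshev filter for a spectral window [a, b], applied through the three-term recurrence. It assumes the operator has been pre-scaled so its spectrum lies in [-1, 1], and it follows standard kernel-polynomial-method conventions rather than the authors' exact implementation.

    ```python
    import numpy as np

    def chebyshev_filter(apply_H, v, a, b, degree):
        """Apply a Jackson-damped Chebyshev filter for the window [a, b] to v.

        apply_H(x) must apply a symmetric operator whose spectrum has been
        pre-scaled to lie in [-1, 1].  The filter approximates the indicator
        function of [a, b] and is evaluated with the recurrence
        T_{k+1}(H) v = 2 H T_k(H) v - T_{k-1}(H) v.
        """
        theta_a, theta_b = np.arccos(a), np.arccos(b)    # note theta_a > theta_b
        k = np.arange(degree + 1)
        coeff = np.empty(degree + 1)
        coeff[0] = (theta_a - theta_b) / np.pi
        coeff[1:] = 2.0 * (np.sin(k[1:] * theta_a) - np.sin(k[1:] * theta_b)) / (k[1:] * np.pi)

        # Jackson damping factors suppress Gibbs oscillations at the window edges.
        N = degree + 1
        g = ((N - k + 1) * np.cos(np.pi * k / (N + 1))
             + np.sin(np.pi * k / (N + 1)) / np.tan(np.pi / (N + 1))) / (N + 1)

        t_prev, t_curr = v, apply_H(v)
        y = g[0] * coeff[0] * t_prev + g[1] * coeff[1] * t_curr
        for j in range(2, degree + 1):
            t_next = 2.0 * apply_H(t_curr) - t_prev
            y = y + g[j] * coeff[j] * t_next
            t_prev, t_curr = t_curr, t_next
        return y

    # Diagonal toy operator with a known spectrum in [-1, 1]:
    lam = np.linspace(-1.0, 1.0, 400)
    v = np.random.default_rng(0).standard_normal(400)
    y = chebyshev_filter(lambda x: lam * x, v, -0.05, 0.05, degree=300)
    # Components of y outside the target window are strongly suppressed.
    print(np.abs(y)[np.abs(lam) < 0.05].mean() / np.abs(y)[np.abs(lam) > 0.2].mean())
    ```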

  4. Integrating GRID tools to build a computing resource broker: activities of DataGrid WP1

    International Nuclear Information System (INIS)

    Anglano, C.; Barale, S.; Gaido, L.; Guarise, A.; Lusso, S.; Werbrouck, A.

    2001-01-01

    Resources on a computational Grid are geographically distributed, heterogeneous in nature, owned by different individuals or organizations with their own scheduling policies, have different access cost models with dynamically varying loads and availability conditions. This makes traditional approaches to workload management, load balancing and scheduling inappropriate. The first work package (WP1) of the EU-funded DataGrid project is addressing the issue of optimizing the distribution of jobs onto Grid resources based on a knowledge of the status and characteristics of these resources that is necessarily out-of-date (collected in a finite amount of time at a very loosely coupled site). The authors describe the DataGrid approach in integrating existing software components (from Condor, Globus, etc.) to build a Grid Resource Broker, and the early efforts to define a workable scheduling strategy

  5. Current Capabilities at SNL for the Integration of Small Modular Reactors onto Smart Microgrids Using Sandia's Smart Microgrid Technology High Performance Computing and Advanced Manufacturing.

    Energy Technology Data Exchange (ETDEWEB)

    Rodriguez, Salvador B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-05-01

    Smart grids are a crucial component for enabling the nation’s future energy needs, as part of a modernization effort led by the Department of Energy. Smart grids and smart microgrids are being considered in niche applications, and as part of a comprehensive energy strategy to help manage the nation’s growing energy demands, for critical infrastructures, military installations, small rural communities, and large populations with limited water supplies. As part of a far-reaching strategic initiative, Sandia National Laboratories (SNL) presents herein a unique, three-pronged approach to integrate small modular reactors (SMRs) into microgrids, with the goal of providing economically-competitive, reliable, and secure energy to meet the nation’s needs. SNL’s triad methodology involves an innovative blend of smart microgrid technology, high performance computing (HPC), and advanced manufacturing (AM). In this report, Sandia’s current capabilities in those areas are summarized, as well as paths forward that will enable DOE to achieve its energy goals. In the area of smart grid/microgrid technology, Sandia’s current computational capabilities can model the entire grid, including temporal aspects and cyber security issues. Our tools include system development, integration, testing and evaluation, monitoring, and sustainment.

  6. Towards Portable Large-Scale Image Processing with High-Performance Computing.

    Science.gov (United States)

    Huo, Yuankai; Blaber, Justin; Damon, Stephen M; Boyd, Brian D; Bao, Shunxing; Parvathaneni, Prasanna; Noguera, Camilo Bermudez; Chaganti, Shikha; Nath, Vishwesh; Greer, Jasmine M; Lyu, Ilwoo; French, William R; Newton, Allen T; Rogers, Baxter P; Landman, Bennett A

    2018-05-03

    High-throughput, large-scale medical image computing demands tight integration of high-performance computing (HPC) infrastructure for data storage, job distribution, and image processing. The Vanderbilt University Institute for Imaging Science (VUIIS) Center for Computational Imaging (CCI) has constructed a large-scale image storage and processing infrastructure that is composed of (1) a large-scale image database using the eXtensible Neuroimaging Archive Toolkit (XNAT), (2) a content-aware job scheduling platform using the Distributed Automation for XNAT pipeline automation tool (DAX), and (3) a wide variety of encapsulated image processing pipelines called "spiders." The VUIIS CCI medical image data storage and processing infrastructure has housed and processed nearly a half-million medical image volumes with the Vanderbilt Advanced Computing Center for Research and Education (ACCRE), the HPC facility at Vanderbilt University. The initial deployment was natively deployed (i.e., direct installations on a bare-metal server) within the ACCRE hardware and software environments, which led to issues of portability and sustainability. First, it could be laborious to deploy the entire VUIIS CCI medical image data storage and processing infrastructure to another HPC center with varying hardware infrastructure, library availability, and software permission policies. Second, the spiders were not developed in an isolated manner, which has led to software dependency issues during system upgrades or remote software installation. To address such issues, herein, we describe recent innovations using containerization techniques with XNAT/DAX which are used to isolate the VUIIS CCI medical image data storage and processing infrastructure from the underlying hardware and software environments. The newly presented XNAT/DAX solution has the following new features: (1) multi-level portability from system level to the application level, (2) flexible and dynamic software

  7. Improving the Eco-Efficiency of High Performance Computing Clusters Using EECluster

    Directory of Open Access Journals (Sweden)

    Alberto Cocaña-Fernández

    2016-03-01

    Full Text Available As data and supercomputing centres increase their performance to improve service quality and target more ambitious challenges every day, their carbon footprint also continues to grow, and has already reached the magnitude of the aviation industry. Also, high power consumption is becoming a remarkable bottleneck for the expansion of these infrastructures in economic terms due to the unavailability of sufficient energy sources. A substantial part of the problem is caused by the current energy consumption of High Performance Computing (HPC) clusters. To alleviate this situation, we present in this work EECluster, a tool that integrates with multiple open-source Resource Management Systems to significantly reduce the carbon footprint of clusters by improving their energy efficiency. EECluster implements a dynamic power management mechanism based on Computational Intelligence techniques by learning a set of rules through multi-criteria evolutionary algorithms. This approach enables cluster operators to find the optimal balance between a reduction in cluster energy consumption, service quality, and the number of reconfigurations. Experimental studies using both synthetic and actual workloads from a real-world cluster support the adoption of this tool to reduce the carbon footprint of HPC clusters.

  8. Peer-to-peer computing for secure high performance data copying

    International Nuclear Information System (INIS)

    Hanushevsky, A.; Trunov, A.; Cottrell, L.

    2001-01-01

    The BaBar Copy Program (bbcp) is an excellent representative of peer-to-peer (P2P) computing. It is also a pioneering application of its type in the P2P arena. Built upon the foundation of its predecessor, Secure Fast Copy (sfcp), bbcp incorporates significant improvements in performance and usability. As with sfcp, bbcp uses ssh for authentication, providing an elegant and simple working model--if you can ssh to a location, you can copy files to or from that location. To fully support this notion, bbcp transparently supports 3rd party copy operations. The program also incorporates several mechanisms to deal with firewall security, the bane of P2P computing. To achieve high performance in a wide area network, bbcp allows a user to independently specify the number of parallel network streams, TCP window size, and the file I/O blocking factor. Using these parameters, data is pipelined from source to target to provide a uniform traffic pattern that maximizes router efficiency. For improved recoverability, bbcp also keeps track of copy operations so that an operation can be restarted from the point of failure at a later time, minimizing the amount of network traffic in the event of a copy failure. Here, the authors present the bbcp architecture, its various features, and the reasons for their inclusion

  9. Accelerated Synchrotron X-ray Diffraction Data Analysis on a Heterogeneous High Performance Computing System

    Energy Technology Data Exchange (ETDEWEB)

    Qin, J; Bauer, M A, E-mail: qin.jinhui@gmail.com, E-mail: bauer@uwo.ca [Computer Science Department, University of Western Ontario, London, ON N6A 5B7 (Canada)

    2010-11-01

    The analysis of synchrotron X-ray Diffraction (XRD) data has been used by scientists and engineers to understand and predict properties of materials. However, the large volume of XRD image data and the intensive computations involved in the data analysis make it hard for researchers to quickly reach any conclusions about the images from an experiment when using conventional XRD data analysis software. Synchrotron time is valuable and delays in XRD data analysis can impact decisions about subsequent experiments or about materials that they are investigating. In order to improve the data analysis performance, ideally to achieve near real time data analysis during an XRD experiment, we designed and implemented software for accelerated XRD data analysis. The software has been developed for a heterogeneous high performance computing (HPC) system, comprised of IBM PowerXCell 8i processors and Intel quad-core Xeon processors. This paper describes the software and reports on the improved performance. The results indicate that it is possible for XRD data to be analyzed at the rate it is being produced.

  10. Accelerated Synchrotron X-ray Diffraction Data Analysis on a Heterogeneous High Performance Computing System

    International Nuclear Information System (INIS)

    Qin, J; Bauer, M A

    2010-01-01

    The analysis of synchrotron X-ray Diffraction (XRD) data has been used by scientists and engineers to understand and predict properties of materials. However, the large volume of XRD image data and the intensive computations involved in the data analysis make it hard for researchers to quickly reach any conclusions about the images from an experiment when using conventional XRD data analysis software. Synchrotron time is valuable and delays in XRD data analysis can impact decisions about subsequent experiments or about materials that they are investigating. In order to improve the data analysis performance, ideally to achieve near real time data analysis during an XRD experiment, we designed and implemented software for accelerated XRD data analysis. The software has been developed for a heterogeneous high performance computing (HPC) system, comprised of IBM PowerXCell 8i processors and Intel quad-core Xeon processors. This paper describes the software and reports on the improved performance. The results indicate that it is possible for XRD data to be analyzed at the rate it is being produced.

  11. Linear Subpixel Learning Algorithm for Land Cover Classification from WELD using High Performance Computing

    Science.gov (United States)

    Ganguly, S.; Kumar, U.; Nemani, R. R.; Kalia, S.; Michaelis, A.

    2017-12-01

    In this work, we use a Fully Constrained Least Squares Subpixel Learning Algorithm to unmix global WELD (Web Enabled Landsat Data) to obtain fractions or abundances of substrate (S), vegetation (V) and dark objects (D) classes. Because of the sheer scale of the data and compute needs, we leveraged the NASA Earth Exchange (NEX) high performance computing architecture to optimize and scale our algorithm for large-scale processing. Subsequently, the S-V-D abundance maps were characterized into 4 classes, namely forest, farmland, water and urban areas (with NPP-VIIRS - national polar orbiting partnership visible infrared imaging radiometer suite nighttime lights data) over California, USA using a Random Forest classifier. Validation of these land cover maps with NLCD (National Land Cover Database) 2011 products and NAFD (North American Forest Dynamics) static forest cover maps showed that an overall classification accuracy of over 91% was achieved, which is a 6% improvement in unmixing-based classification relative to per-pixel based classification. As such, abundance maps continue to offer a useful alternative to high-spatial-resolution data derived classification maps for forest inventory analysis, multi-class mapping for eco-climatic models and applications, fast multi-temporal trend analysis and for societal and policy-relevant applications needed at the watershed scale.
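
    A hedged sketch of the per-pixel unmixing step is shown below, using the common augmented non-negative least squares formulation of fully constrained least squares (non-negativity plus sum-to-one); the endmember spectra and band values are made up for illustration and are not the WELD endmembers.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    def fcls_unmix(pixels, endmembers, weight=1e3):
        """Per-pixel fully constrained least squares (FCLS) unmixing.

        pixels:     (n_pixels, n_bands) array of observed reflectances
        endmembers: (n_classes, n_bands) array, e.g. substrate / vegetation / dark
        Non-negativity is enforced exactly by NNLS; the sum-to-one constraint is
        enforced approximately by appending a heavily weighted row of ones.
        """
        n_classes = endmembers.shape[0]
        E = np.vstack([endmembers.T, weight * np.ones((1, n_classes))])
        fractions = np.empty((pixels.shape[0], n_classes))
        for i, p in enumerate(pixels):
            y = np.append(p, weight)
            fractions[i], _ = nnls(E, y)
        return fractions

    # Toy example: three hypothetical endmembers observed in four bands.
    endmembers = np.array([[0.30, 0.35, 0.40, 0.45],   # substrate (S)
                           [0.05, 0.08, 0.40, 0.30],   # vegetation (V)
                           [0.02, 0.02, 0.03, 0.03]])  # dark objects (D)
    true_fractions = np.array([[0.5, 0.3, 0.2]])
    pixel = true_fractions @ endmembers
    print(fcls_unmix(pixel, endmembers).round(3))      # ~ [[0.5, 0.3, 0.2]]
    ```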

  12. Development of high performance scientific components for interoperability of computing packages

    Energy Technology Data Exchange (ETDEWEB)

    Gulabani, Teena Pratap [Iowa State Univ., Ames, IA (United States)

    2008-01-01

    Three major high performance quantum chemistry computational packages, NWChem, GAMESS and MPQC, have been developed by different research efforts following different design patterns. The goal is to achieve interoperability among these packages by overcoming the challenges caused by the different communication patterns and software design of each of these packages. A chemistry algorithm is hard and time-consuming to develop; integration of large quantum chemistry packages will allow resource sharing and thus avoid reinvention of the wheel. Creating connections between these incompatible packages is the major motivation of the proposed work. This interoperability is achieved by bringing the benefits of Component Based Software Engineering through a plug-and-play component framework called the Common Component Architecture (CCA). In this thesis, I present a strategy and process used for interfacing two widely used and important computational chemistry methodologies: Quantum Mechanics and Molecular Mechanics. To show the feasibility of the proposed approach, the Tuning and Analysis Utility (TAU) has been coupled with the NWChem code and its CCA components. Results show that the overhead is negligible when compared to the ease and potential of organizing and coping with large-scale software applications.

  13. Peer-to-Peer Computing for Secure High Performance Data Copying

    International Nuclear Information System (INIS)

    2002-01-01

    The BaBar Copy Program (bbcp) is an excellent representative of peer-to-peer (P2P) computing. It is also a pioneering application of its type in the P2P arena. Built upon the foundation of its predecessor, Secure Fast Copy (sfcp), bbcp incorporates significant improvements in performance and usability. As with sfcp, bbcp uses ssh for authentication, providing an elegant and simple working model -- if you can ssh to a location, you can copy files to or from that location. To fully support this notion, bbcp transparently supports 3rd party copy operations. The program also incorporates several mechanisms to deal with firewall security, the bane of P2P computing. To achieve high performance in a wide area network, bbcp allows a user to independently specify the number of parallel network streams, TCP window size, and the file I/O blocking factor. Using these parameters, data is pipelined from source to target to provide a uniform traffic pattern that maximizes router efficiency. For improved recoverability, bbcp also keeps track of copy operations so that an operation can be restarted from the point of failure at a later time, minimizing the amount of network traffic in the event of a copy failure. Here, we present the bbcp architecture, its various features, and the reasons for their inclusion

  14. Scalable domain decomposition solvers for stochastic PDEs in high performance computing

    International Nuclear Information System (INIS)

    Desai, Ajit; Pettit, Chris; Poirel, Dominique; Sarkar, Abhijit

    2017-01-01

    Stochastic spectral finite element models of practical engineering systems may involve solutions of linear systems or linearized systems for non-linear problems with billions of unknowns. For stochastic modeling, it is therefore essential to design robust, parallel and scalable algorithms that can efficiently utilize high-performance computing to tackle such large-scale systems. Domain decomposition based iterative solvers can handle such systems. And though these algorithms exhibit excellent scalabilities, significant algorithmic and implementational challenges exist to extend them to solve extreme-scale stochastic systems using emerging computing platforms. Intrusive polynomial chaos expansion based domain decomposition algorithms are extended here to concurrently handle high resolution in both spatial and stochastic domains using an in-house implementation. Sparse iterative solvers with efficient preconditioners are employed to solve the resulting global and subdomain level local systems through multi-level iterative solvers. We also use parallel sparse matrix–vector operations to reduce the floating-point operations and memory requirements. Numerical and parallel scalabilities of these algorithms are presented for the diffusion equation having spatially varying diffusion coefficient modeled by a non-Gaussian stochastic process. Scalability of the solvers with respect to the number of random variables is also investigated.
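
    As an illustration of the kind of building block such solvers rest on, the sketch below assembles a sparse diffusion operator with a spatially varying coefficient (one sampled realization of the stochastic field) and solves it with a Jacobi-preconditioned conjugate gradient in SciPy; it is only a small-scale stand-in, not the intrusive polynomial chaos domain decomposition solver described in the paper.

    ```python
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import cg, LinearOperator

    # One sampled realization of a spatially varying diffusion coefficient kappa(x)
    # on (0, 1), discretized with second-order finite differences.
    n = 2000
    h = 1.0 / (n + 1)
    x_mid = (np.arange(n + 1) + 0.5) * h                 # interface midpoints
    kappa = 1.0 + 0.5 * np.sin(2.0 * np.pi * x_mid)      # illustrative, always > 0

    diag = (kappa[:-1] + kappa[1:]) / h**2
    off = -kappa[1:-1] / h**2
    A = sp.diags([off, diag, off], offsets=[-1, 0, 1], format="csr")
    b = np.ones(n)

    # Jacobi (diagonal) preconditioner wrapped as a LinearOperator.
    inv_diag = 1.0 / A.diagonal()
    M = LinearOperator(A.shape, matvec=lambda r: inv_diag * r)

    u, info = cg(A, b, M=M, atol=1e-10)
    print("CG info:", info, " residual norm:", np.linalg.norm(A @ u - b))
    ```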

  15. High performance optical encryption based on computational ghost imaging with QR code and compressive sensing technique

    Science.gov (United States)

    Zhao, Shengmei; Wang, Le; Liang, Wenqiang; Cheng, Weiwen; Gong, Longyan

    2015-10-01

    In this paper, we propose a high performance optical encryption (OE) scheme based on computational ghost imaging (GI) with a QR code and a compressive sensing (CS) technique, named the QR-CGI-OE scheme. N random phase screens, generated by Alice, serve as the secret key and are shared with the authorized user, Bob. The information is first encoded by Alice with a QR code, and the QR-coded image is then encrypted with the aid of a computational ghost imaging optical system. Here, measurement results from the GI optical system's bucket detector are the encrypted information and are transmitted to Bob. With the key, Bob decrypts the encrypted information to obtain the QR-coded image with GI and CS techniques, and further recovers the information by QR decoding. The experimental and numerically simulated results show that authorized users can completely recover the original image, whereas eavesdroppers cannot acquire any information about the image even when the eavesdropping ratio (ER) is up to 60% at the given number of measurements. For the proposed scheme, the number of bits sent from Alice to Bob is reduced considerably and the robustness is enhanced significantly. Meanwhile, the number of measurements in the GI system is reduced and the quality of the reconstructed QR-coded image is improved.
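
    The measurement model behind the scheme can be sketched in a few lines: each "bucket" value is the inner product of one random pattern (the key) with the encoded object. The toy below uses a random binary image in place of the QR code and a plain correlation reconstruction in place of the compressive sensing recovery, so it only illustrates the GI encryption/decryption idea, not the authors' full scheme.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Stand-in for the QR-coded plaintext image (binary, 32 x 32).
    size = 32
    obj = (rng.random((size, size)) > 0.5).astype(float)

    # The shared secret key: M random illumination patterns.
    n_patterns = 5000
    patterns = rng.random((n_patterns, size, size))

    # "Encryption": each bucket-detector reading is the total intensity, i.e. the
    # inner product of one pattern with the object.  Only these scalars are sent.
    bucket = patterns.reshape(n_patterns, -1) @ obj.ravel()

    # "Decryption" by conventional GI correlation (a CS solver would need far
    # fewer measurements): <(B - <B>) I(x, y)> averaged over the patterns.
    recon = np.tensordot(bucket - bucket.mean(), patterns, axes=(0, 0)) / n_patterns

    # Reconstruction quality improves with the number of measurements.
    print("correlation with original:",
          round(float(np.corrcoef(recon.ravel(), obj.ravel())[0, 1]), 3))
    ```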

  16. Towards the development of run times leveraging virtualization for high performance computing

    International Nuclear Information System (INIS)

    Diakhate, F.

    2010-12-01

    In recent years, there has been a growing interest in using virtualization to improve the efficiency of data centers. This success is rooted in virtualization's excellent fault tolerance and isolation properties, in the overall flexibility it brings, and in its ability to exploit multi-core architectures efficiently. These characteristics also make virtualization an ideal candidate to tackle issues found in new compute cluster architectures. However, in spite of recent improvements in virtualization technology, overheads in the execution of parallel applications remain, which prevent its use in the field of high performance computing. In this thesis, we propose a virtual device dedicated to message passing between virtual machines, so as to improve the performance of parallel applications executed in a cluster of virtual machines. We also introduce a set of techniques facilitating the deployment of virtualized parallel applications. These functionalities have been implemented as part of a runtime system which allows to benefit from virtualization's properties in a way that is as transparent as possible to the user while minimizing performance overheads. (author)

  17. Applying Machine Learning and High Performance Computing to Water Quality Assessment and Prediction

    Directory of Open Access Journals (Sweden)

    Ruijian Zhang

    2017-12-01

    Full Text Available Water quality assessment and prediction is an increasingly important issue. Traditional approaches either take a long time or can only perform assessments. In this research, by applying a machine learning algorithm to a long time period of water attribute data, we can generate a decision tree that predicts the next day's water quality in an easy and efficient way. The idea is to combine traditional approaches and computer algorithms. Using machine learning algorithms, the assessment of water quality becomes far more efficient, and by generating the decision tree, the prediction becomes quite accurate. The drawback of machine learning modeling is that the execution takes quite a long time, especially when we employ a more accurate but more time-consuming clustering algorithm. Therefore, we applied a high performance computing (HPC) system to deal with this problem. Up to now, the pilot experiments have achieved very promising preliminary results. The visualized water quality assessments and predictions obtained from this project will be published on an interactive website so that the public and environmental managers can use the information for their decision making.
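
    A minimal scikit-learn sketch of the decision tree idea follows; the water attributes, thresholds, and labeling rule are synthetic placeholders, not the data or model of the study.

    ```python
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier, export_text

    rng = np.random.default_rng(7)

    # Synthetic stand-in for a long daily record of water attributes:
    # dissolved oxygen (mg/L), pH, turbidity (NTU), ammonia (mg/L).
    n = 5000
    X = np.column_stack([
        rng.normal(7.5, 1.5, n),      # dissolved oxygen
        rng.normal(7.2, 0.5, n),      # pH
        rng.lognormal(1.0, 0.8, n),   # turbidity
        rng.lognormal(-2.0, 0.7, n),  # ammonia
    ])
    # Illustrative labeling rule standing in for historical quality grades.
    y = ((X[:, 0] > 6.0) & (X[:, 2] < 10.0) & (X[:, 3] < 0.3)).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)

    print("held-out accuracy:", round(tree.score(X_te, y_te), 3))
    print(export_text(tree, feature_names=["DO", "pH", "turbidity", "NH3"]))
    ```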

  18. Using High Performance Computing to Examine the Processes of Neurogenesis Underlying Pattern Separation/Completion of Episodic Information.

    Energy Technology Data Exchange (ETDEWEB)

    Aimone, James Bradley [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Betty, Rita [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-03-01

    Sandia researchers developed novel methods and metrics for studying the computational function of neurogenesis, generating substantial impact for the neuroscience and neural computing communities. This work could benefit applications in machine learning and other analysis activities.

  19. Confabulation Based Real-time Anomaly Detection for Wide-area Surveillance Using Heterogeneous High Performance Computing Architecture

    Science.gov (United States)

    2015-06-01

    Report documentation excerpt (contract number FA8750-12-1-0251): the anomaly detection system targets heterogeneous processors including graphic processor units (GPUs) and Intel Xeon Phi processors. Experimental results showed significant speedups, which can enable

  20. The Convergence of High Performance Computing and Large Scale Data Analytics

    Science.gov (United States)

    Duffy, D.; Bowen, M. K.; Thompson, J. H.; Yang, C. P.; Hu, F.; Wills, B.

    2015-12-01

    As the combinations of remote sensing observations and model outputs have grown, scientists are increasingly burdened with both the necessity and complexity of large-scale data analysis. Scientists are increasingly applying traditional high performance computing (HPC) solutions to solve their "Big Data" problems. While this approach has the benefit of limiting data movement, the HPC system is not optimized to run analytics, which can create problems that permeate throughout the HPC environment. To solve these issues and to alleviate some of the strain on the HPC environment, the NASA Center for Climate Simulation (NCCS) has created the Advanced Data Analytics Platform (ADAPT), which combines both HPC and cloud technologies to create an agile system designed for analytics. Large, commonly used data sets are stored in this system in a write once/read many file system, such as Landsat, MODIS, MERRA, and NGA. High performance virtual machines are deployed and scaled according to the individual scientist's requirements specifically for data analysis. On the software side, the NCCS and GMU are working with emerging commercial technologies and applying them to structured, binary scientific data in order to expose the data in new ways. Native NetCDF data is being stored within a Hadoop Distributed File System (HDFS) enabling storage-proximal processing through MapReduce while continuing to provide accessibility of the data to traditional applications. Once the data is stored within HDFS, an additional indexing scheme is built on top of the data and placed into a relational database. This spatiotemporal index enables extremely fast mappings of queries to data locations to dramatically speed up analytics. These are some of the first steps toward a single unified platform that optimizes for both HPC and large-scale data analysis, and this presentation will elucidate the resulting and necessary exascale architectures required for future systems.

  1. Workflow Support for Advanced Grid-Enabled Computing

    OpenAIRE

    Xu, Fenglian; Eres, M.H.; Tao, Feng; Cox, Simon J.

    2004-01-01

    The Geodise project brings computer scientists' and engineers' skills together to build up a service-oriented computing environment for engineers to perform complicated computations in a distributed system. The workflow tool is a front-end GUI that provides a full life cycle of workflow functions for Grid-enabled computing. The full life cycle of workflow functions has been enhanced based on our initial research and development. The life cycle starts with the composition of a workflow, followed by an ins...

  2. Grid Computing Application for Brain Magnetic Resonance Image Processing

    International Nuclear Information System (INIS)

    Valdivia, F; Crépeault, B; Duchesne, S

    2012-01-01

    This work emphasizes the use of grid computing and web technology for automatic post-processing of brain magnetic resonance images (MRI) in the context of neuropsychiatric (Alzheimer's disease) research. Post-acquisition image processing is achieved through the interconnection of several individual processes into pipelines. Each process has input and output data ports, options and execution parameters, and performs single tasks such as: a) extracting individual image attributes (e.g. dimensions, orientation, center of mass), b) performing image transformations (e.g. scaling, rotation, skewing, intensity standardization, linear and non-linear registration), c) performing image statistical analyses, and d) producing the necessary quality control images and/or files for user review. The pipelines are built to perform specific sequences of tasks on the alphanumeric data and MRIs contained in our database. The web application is coded in PHP and allows the creation of scripts to create, store and execute pipelines and their instances either on our local cluster or on high-performance computing platforms. To run an instance on an external cluster, the web application opens a communication tunnel through which it copies the necessary files, submits the execution commands and collects the results. We present results from system tests for the processing of a set of 821 brain MRIs from the Alzheimer's Disease Neuroimaging Initiative study via a nonlinear registration pipeline composed of 10 processes. Our results show successful execution on both local and external clusters, and a 4-fold increase in performance when using the external cluster. However, the latter's performance does not scale linearly as queue waiting times and execution overhead increase with the number of tasks to be executed.

  3. Strengthening LLNL Missions through Laboratory Directed Research and Development in High Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Willis, D. K. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-12-01

    High performance computing (HPC) has been a defining strength of Lawrence Livermore National Laboratory (LLNL) since its founding. Livermore scientists have designed and used some of the world’s most powerful computers to drive breakthroughs in nearly every mission area. Today, the Laboratory is recognized as a world leader in the application of HPC to complex science, technology, and engineering challenges. Most importantly, HPC has been integral to the National Nuclear Security Administration’s (NNSA’s) Stockpile Stewardship Program—designed to ensure the safety, security, and reliability of our nuclear deterrent without nuclear testing. A critical factor behind Lawrence Livermore’s preeminence in HPC is the ongoing investments made by the Laboratory Directed Research and Development (LDRD) Program in cutting-edge concepts to enable efficient utilization of these powerful machines. Congress established the LDRD Program in 1991 to maintain the technical vitality of the Department of Energy (DOE) national laboratories. Since then, LDRD has been, and continues to be, an essential tool for exploring anticipated needs that lie beyond the planning horizon of our programs and for attracting the next generation of talented visionaries. Through LDRD, Livermore researchers can examine future challenges, propose and explore innovative solutions, and deliver creative approaches to support our missions. The present scientific and technical strengths of the Laboratory are, in large part, a product of past LDRD investments in HPC. Here, we provide seven examples of LDRD projects from the past decade that have played a critical role in building LLNL’s HPC, computer science, mathematics, and data science research capabilities, and describe how they have impacted LLNL’s mission.

  4. A performance model for the communication in fast multipole methods on high-performance computing platforms

    KAUST Repository

    Ibeid, Huda

    2016-03-04

    Exascale systems are predicted to have approximately 1 billion cores, assuming gigahertz cores. Limitations on affordable network topologies for distributed memory systems of such massive scale bring new challenges to the currently dominant parallel programming model. Currently, there are many efforts to evaluate the hardware and software bottlenecks of exascale designs. It is therefore of interest to model application performance and to understand what changes need to be made to ensure extrapolated scalability. The fast multipole method (FMM) was originally developed for accelerating N-body problems in astrophysics and molecular dynamics but has recently been extended to a wider range of problems. Its high arithmetic intensity combined with its linear complexity and asynchronous communication patterns make it a promising algorithm for exascale systems. In this paper, we discuss the challenges for FMM on current parallel computers and future exascale architectures, with a focus on internode communication. We focus on the communication part only; the efficiency of the computational kernels is beyond the scope of the present study. We develop a performance model that considers the communication patterns of the FMM and observe a good match between our model and the actual communication time on four high-performance computing (HPC) systems, when latency, bandwidth, network topology, and multicore penalties are all taken into account. To our knowledge, this is the first formal characterization of internode communication in FMM that validates the model against actual measurements of communication time. The ultimate communication model is predictive in an absolute sense; however, on complex systems, this objective is often out of reach or of a difficulty out of proportion to its benefit when there exists a simpler model that is inexpensive and sufficient to guide coding decisions leading to improved scaling. The current model provides such guidance.
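
    For intuition, a crude latency-bandwidth (alpha-beta) style estimate of such a communication pattern might look like the sketch below; the constants, the 26-neighbor local exchange, and the log2(P) tree depth are illustrative assumptions, not the validated model of the paper.

    ```python
    import math

    def fmm_comm_time(num_procs, message_bytes, latency=1.0e-6,
                      inv_bandwidth=1.0e-10, near_neighbors=26):
        """Crude alpha-beta (latency-bandwidth) communication estimate.

        Two contributions are modeled: a local exchange of multipole data with
        near neighbors, and a tree-wise aggregation whose depth grows like
        log2(P).  Every message costs alpha + beta * bytes.
        """
        per_message = latency + inv_bandwidth * message_bytes
        local = near_neighbors * per_message
        tree = math.ceil(math.log2(num_procs)) * per_message
        return local + tree

    for p in (64, 1024, 16384, 1_000_000):
        print(f"P = {p:>9d}  estimated comm time = {fmm_comm_time(p, 8 * 1024):.3e} s")
    ```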

  5. Direct numerical simulation of reactor two-phase flows enabled by high-performance computing

    Energy Technology Data Exchange (ETDEWEB)

    Fang, Jun; Cambareri, Joseph J.; Brown, Cameron S.; Feng, Jinyong; Gouws, Andre; Li, Mengnan; Bolotnov, Igor A.

    2018-04-01

    Nuclear reactor two-phase flows remain a great engineering challenge, where the high-resolution two-phase flow database which can inform practical model development is still sparse due to the extreme reactor operation conditions and measurement difficulties. Owing to the rapid growth of computing power, the direct numerical simulation (DNS) is enjoying a renewed interest in investigating the related flow problems. A combination between DNS and an interface tracking method can provide a unique opportunity to study two-phase flows based on first principles calculations. More importantly, state-of-the-art high-performance computing (HPC) facilities are helping unlock this great potential. This paper reviews the recent research progress of two-phase flow DNS related to reactor applications. The progress in large-scale bubbly flow DNS has been focused not only on the sheer size of those simulations in terms of resolved Reynolds number, but also on the associated advanced modeling and analysis techniques. Specifically, the current areas of active research include modeling of sub-cooled boiling, bubble coalescence, as well as the advanced post-processing toolkit for bubbly flow simulations in reactor geometries. A novel bubble tracking method has been developed to track the evolution of bubbles in two-phase bubbly flow. Also, spectral analysis of DNS database in different geometries has been performed to investigate the modulation of the energy spectrum slope due to bubble-induced turbulence. In addition, the single-and two-phase analysis results are presented for turbulent flows within the pressurized water reactor (PWR) core geometries. The related simulations are possible to carry out only with the world leading HPC platforms. These simulations are allowing more complex turbulence model development and validation for use in 3D multiphase computational fluid dynamics (M-CFD) codes.

  6. Tackling some of the most intricate geophysical challenges via high-performance computing

    Science.gov (United States)

    Khosronejad, A.

    2016-12-01

    Recently, the world has been witnessing significant enhancements in the computing power of supercomputers. Computer clusters in conjunction with advanced mathematical algorithms have set the stage for developing and applying powerful numerical tools to tackle some of the most intricate geophysical challenges that today's engineers face. One such challenge is to understand how turbulent flows, in real-world settings, interact with (a) rigid and/or mobile complex bed bathymetry of waterways and sea-beds in the coastal areas; (b) objects with complex geometry that are fully or partially immersed; and (c) the free surface of waterways and water surface waves in the coastal area. This understanding is especially important because the turbulent flows in real-world environments are often bounded by geometrically complex boundaries, which dynamically deform and give rise to multi-scale and multi-physics transport phenomena, and are characterized by multi-lateral interactions among various phases (e.g. air/water/sediment phases). Herein, I present some of the multi-scale and multi-physics geophysical fluid mechanics processes that I have attempted to study using an in-house high-performance computational model, the so-called VFS-Geophysics. More specifically, I will present the simulation results of turbulence/sediment/solute/turbine interactions in real-world settings. Parts of the simulations I present are performed to gain scientific insights into processes such as sand wave formation (A. Khosronejad and F. Sotiropoulos (2014), Numerical simulation of sand waves in a turbulent open channel flow, Journal of Fluid Mechanics, 753:150-216), while others are carried out to predict the effects of climate change and large flood events on societal infrastructures (A. Khosronejad et al. (2016), Large eddy simulation of turbulence and solute transport in a forested headwater stream, Journal of Geophysical Research, doi: 10.1002/2014JF003423).

  7. GLOA: A New Job Scheduling Algorithm for Grid Computing

    Directory of Open Access Journals (Sweden)

    Zahra Pooranian

    2013-03-01

    Full Text Available The purpose of grid computing is to produce a virtual supercomputer by using free resources available through widespread networks such as the Internet. This resource distribution, changes in resource availability, and an unreliable communication infrastructure pose a major challenge for efficient resource allocation. Because of the geographical spread of resources and their distributed management, grid scheduling is considered to be an NP-complete problem. It has been shown that evolutionary algorithms offer good performance for grid scheduling. This article uses a new evolutionary (distributed) algorithm inspired by the effect of leaders in social groups, the group leaders' optimization algorithm (GLOA), to solve the problem of scheduling independent tasks in a grid computing system. Simulation results comparing GLOA with several other evolutionary algorithms show that GLOA produces shorter makespans.
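
    The objective being minimized here is the makespan of a task-to-node assignment. The toy sketch below evaluates makespan and runs a simple leader-guided population search in the spirit of GLOA; it is not the published algorithm, and the task lengths and node speeds are invented.

    ```python
    import random

    def makespan(assignment, task_lengths, node_speeds):
        """Completion time of the most loaded node for a task-to-node assignment."""
        loads = [0.0] * len(node_speeds)
        for task, node in enumerate(assignment):
            loads[node] += task_lengths[task] / node_speeds[node]
        return max(loads)

    def leader_guided_search(task_lengths, node_speeds, pop=30, iters=300, seed=1):
        """Toy population search: members drift toward the best-so-far 'leader'."""
        rng = random.Random(seed)
        n_tasks, n_nodes = len(task_lengths), len(node_speeds)
        score = lambda a: makespan(a, task_lengths, node_speeds)
        population = [[rng.randrange(n_nodes) for _ in range(n_tasks)] for _ in range(pop)]
        leader = min(population, key=score)[:]
        for _ in range(iters):
            for member in population:
                for t in range(n_tasks):
                    r = rng.random()
                    if r < 0.3:
                        member[t] = leader[t]                # imitate the leader
                    elif r < 0.4:
                        member[t] = rng.randrange(n_nodes)   # random exploration
            best = min(population, key=score)
            if score(best) < score(leader):
                leader = best[:]
        return leader, score(leader)

    rng = random.Random(0)
    tasks = [rng.uniform(1.0, 100.0) for _ in range(200)]
    speeds = [1.0, 1.5, 2.0, 3.0, 4.0]
    _, span = leader_guided_search(tasks, speeds)
    print("best makespan found:", round(span, 2))
    ```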

  8. Intrusion Prevention and Detection in Grid Computing - The ALICE Case

    CERN Document Server

    INSPIRE-00416173; Kebschull, Udo

    2015-01-01

    Grids allow users flexible on-demand usage of computing resources through remote communication networks. A remarkable example of a Grid in High Energy Physics (HEP) research is used in the ALICE experiment at the European Organization for Nuclear Research (CERN). Physicists can submit jobs used to process the huge amount of particle collision data produced by the Large Hadron Collider (LHC). Grids face complex security challenges. They are interesting targets for attackers seeking huge computational resources. Since users can execute arbitrary code in the worker nodes on the Grid sites, special care should be taken in this environment. Automatic tools to harden and monitor this scenario are required. Currently, there is no integrated solution for such a requirement. This paper describes a new security framework to allow execution of job payloads in a sandboxed context. It also allows process behavior monitoring to detect intrusions, even when new attack methods or zero day vulnerabilities are exploited, by a Machin...

  9. Soil Erosion Estimation Using Grid-based Computation

    Directory of Open Access Journals (Sweden)

    Josef Vlasák

    2005-06-01

    Full Text Available Soil erosion estimation is an important part of a land consolidation process. The universal soil loss equation (USLE) was presented by Wischmeier and Smith. USLE computation uses several factors, namely R – rainfall factor, K – soil erodibility, L – slope length factor, S – slope gradient factor, C – cropping management factor, and P – erosion control management factor. The L and S factors are usually combined into one LS factor – the topographic factor. The single factors are determined from several sources, such as a DTM (Digital Terrain Model), the BPEJ soil type map, aerial and satellite images, etc. A conventional approach to the USLE computation, which is widely used in the Czech Republic, is based on the selection of characteristic profiles for which all above-mentioned factors must be determined. The result (G – annual soil loss) of such a computation is then applied to a whole area (slope) of interest. Another approach to the USLE computation uses grids as the main data structure. A prerequisite for a grid-based USLE computation is that each of the above-mentioned factors exists as a separate grid layer. The crucial step in this computation is the selection of an appropriate grid resolution (grid cell size). A large cell size can cause an undesirable degradation of precision. Too small a cell size can noticeably slow down the whole computation. Provided that the cell size is derived from the source's precision, the appropriate cell size for the Czech Republic varies from 30 m to 50 m. In some cases, especially when new surveying was done, grid computations can be performed with higher accuracy, i.e. with a smaller grid cell size. In such cases, we have proposed a new method using a two-step computation. The first step uses a bigger cell size and is designed to identify higher erosion spots. The second step then uses a smaller cell size but performs the computation only for the area identified in the previous step. This decomposition allows a
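
    The grid-based computation itself reduces to a cell-wise product of co-registered factor rasters, G = R · K · LS · C · P, as in the minimal NumPy sketch below; all factor values are illustrative placeholders.

    ```python
    import numpy as np

    def usle_soil_loss(R, K, LS, C, P):
        """Grid-based USLE: G = R * K * LS * C * P, evaluated cell by cell.

        All arguments are 2-D rasters on the same grid (same cell size, extent,
        and alignment); the annual soil loss G is their element-wise product.
        """
        return R * K * LS * C * P

    # Illustrative 4 x 4 rasters (e.g. 30 m cells).  Real layers would come from
    # rainfall records (R), soil maps such as BPEJ (K), a DTM-derived LS grid,
    # and land-use data (C, P).
    shape = (4, 4)
    R  = np.full(shape, 40.0)                      # rainfall erosivity factor
    K  = np.full(shape, 0.30)                      # soil erodibility factor
    LS = np.tile([0.5, 0.8, 1.2, 1.5], (4, 1))     # topographic factor
    C  = np.full(shape, 0.20)                      # cropping management factor
    P  = np.ones(shape)                            # erosion control practice factor

    G = usle_soil_loss(R, K, LS, C, P)
    print(G)                                       # annual soil loss per grid cell
    ```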

  10. Operational mesoscale atmospheric dispersion prediction using high performance parallel computing cluster for emergency response

    International Nuclear Information System (INIS)

    Srinivas, C.V.; Venkatesan, R.; Muralidharan, N.V.; Das, Someshwar; Dass, Hari; Eswara Kumar, P.

    2005-08-01

    An operational atmospheric dispersion prediction system is implemented on a cluster supercomputer for 'Online Emergency Response' at the Kalpakkam nuclear site. The numerical system constitutes a parallel version of the nested-grid mesoscale meteorological model MM5 coupled to the random walk particle dispersion model FLEXPART. The system provides a 48-hour forecast of the local weather and of the radioactive plume dispersion due to hypothetical airborne releases within a range of 100 km around the site. The parallel code was implemented on different cluster configurations, such as distributed and shared memory systems. Results of MM5 runtime performance for a 1-day prediction are reported for all the machines available for testing. A reduction of 5 times in runtime is achieved using 9 dual-Xeon nodes (18 physical/36 logical processors) compared to a single-node sequential run. Based on the above runtime results, a 9-node dual-Xeon cluster computing facility was commissioned at IGCAR for model operation. The runtime of a triple-nested-domain MM5 is about 4 h for a 24-h forecast. The system has been operated continuously for a few months and the results were posted on the IMSc home page. Initial and periodic boundary condition data for MM5 are provided by NCMRWF, New Delhi. An alternative source is found to be NCEP, USA. These two sources provide the input data to the operational models at different spatial and temporal resolutions and using different assimilation methods. A comparative study of the forecast results obtained from these two data sources is presented for present operational use. A slight improvement is noticed in rainfall, winds, geopotential heights and the vertical atmospheric structure when using NCEP data, probably because of its higher spatial and temporal resolution. (author)

  11. Technologies and tools for high-performance distributed computing. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Karonis, Nicholas T.

    2000-05-01

    In this project we studied the practical use of the MPI message-passing interface in advanced distributed computing environments. We built on the existing software infrastructure provided by the Globus Toolkit™, the MPICH portable implementation of MPI, and the MPICH-G integration of MPICH with Globus. As a result of this project we have replaced MPICH-G with its successor MPICH-G2, which is also an integration of MPICH with Globus. MPICH-G2 delivers significant improvements in message passing performance when compared to its predecessor MPICH-G, and it is based on superior software design principles, resulting in a software base in which the functional extensions and improvements we made were much easier to implement. Using Globus services we replaced the default implementation of MPI's collective operations in MPICH-G2 with more efficient multilevel topology-aware collective operations which, in turn, led to the development of a new timing methodology for broadcasts [8]. MPICH-G2 was extended to include client/server functionality from the MPI-2 standard [23] to facilitate remote visualization applications and, through the use of MPI idioms, MPICH-G2 provided application-level control of quality-of-service parameters as well as application-level discovery of underlying Grid-topology information. Finally, MPICH-G2 was successfully used in a number of applications, including an award-winning, record-setting computation in numerical relativity. In the sections that follow we describe in detail the accomplishments of this project, we present experimental results quantifying the performance improvements, and we conclude with a discussion of our applications experiences. This project resulted in a significant increase in the utility of MPICH-G2.
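    The collective operations mentioned above are invoked through the standard MPI interface; as a minimal illustration (using the mpi4py Python bindings rather than MPICH-G2's C interface, and with a hypothetical payload), a broadcast looks like this:

        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()

        # Rank 0 owns the data; the collective distributes it to every process.
        # In MPICH-G2 the same broadcast call is mapped onto multilevel,
        # topology-aware communication spanning clusters and wide-area links.
        payload = {"step": 0, "params": [0.1, 0.2, 0.3]} if rank == 0 else None
        payload = comm.bcast(payload, root=0)

        print(f"rank {rank} received {payload}")

    Run, for example, with "mpiexec -n 4 python bcast_demo.py" (a hypothetical script name); the point of topology-aware collectives is that the application code stays unchanged while the library chooses the communication pattern.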

  12. New challenges in grid generation and adaptivity for scientific computing

    CERN Document Server

    Formaggia, Luca

    2015-01-01

    This volume collects selected contributions from the “Fourth Tetrahedron Workshop on Grid Generation for Numerical Computations”, which was held in Verbania, Italy in July 2013. The previous editions of this Workshop were hosted by the Weierstrass Institute in Berlin (2005), by INRIA Rocquencourt in Paris (2007), and by Swansea University (2010). This book covers different, though related, aspects of the field: the generation of quality grids for complex three-dimensional geometries; parallel mesh generation algorithms; mesh adaptation, including both theoretical and implementation aspects; grid generation and adaptation on surfaces – all with an interesting mix of numerical analysis, computer science and strongly application-oriented problems.

  13. Dynamic grid refinement for partial differential equations on parallel computers

    International Nuclear Information System (INIS)

    Mccormick, S.; Quinlan, D.

    1989-01-01

    The fast adaptive composite grid method (FAC) is an algorithm that uses various levels of uniform grids to provide adaptive resolution and fast solution of PDEs. An asynchronous version of FAC, called AFAC, that completely eliminates the bottleneck to parallelism is presented. This paper describes the advantage that this algorithm has in adaptive refinement for moving singularities on multiprocessor computers. This work is applicable to the parallel solution of two- and three-dimensional shock tracking problems. 6 refs

  14. High-speed narrowband PLC - High-performance Access Powerline Communication structures in smart grid; High-Speed Narrowband PLC. Leistungsfaehige Access-Powerline-Kommunikationsstrukturen im Smart Grid

    Energy Technology Data Exchange (ETDEWEB)

    Offner, Georg [devolo AG, Aachen (Germany)

    2012-07-01

    The smart grid enables stable operation of a decentralized power system that is increasingly fed by small producers using solar and wind power technology. Furthermore, commercial and private customers profit from the smart grid, as they can capture information about their current consumption in real time or obtain these data from the Internet. The author of the contribution under consideration presents new approaches to access powerline communications that enable efficient communication between IPv6-based household meters and the network station. The contribution describes the technologies used as well as practical experiences and initial results from field tests. The focus here is on insights gained from the integration of G3-PLC.

  15. Lecture 7: Worldwide LHC Computing Grid Overview

    CERN Multimedia

    CERN. Geneva

    2013-01-01

    This presentation will introduce, in an informal but technically correct way, the challenges that are linked to the needs of massively distributed computing architectures in the context of LHC offline computing. The topics include technological and organizational aspects touching many areas of LHC computing, from data access, to the maintenance of large databases and huge collections of files, to the organization of computing farms and monitoring. Fabrizio Furano holds a Ph.D. in Computer Science and has worked in the field of computing for High Energy Physics for many years. Some of his preferred topics include application architectures, system design and project management, with a focus on the performance and scalability of data access. Fabrizio has experience in a wide variety of environments, from private companies to academic research, in particular in object-oriented methodologies, mainly using C++. He also has teaching experience at university level in Software Engineering and C++ programming.

  16. Research and development of grid computing technology in center for computational science and e-systems of Japan Atomic Energy Agency

    International Nuclear Information System (INIS)

    Suzuki, Yoshio

    2007-01-01

    The Center for Computational Science and E-systems of the Japan Atomic Energy Agency (CCSE/JAEA) has carried out R and D of grid computing technology. Since 1995, R and D first to realize computational assistance for researchers, called Seamless Thinking Aid (STA), and then to share intellectual resources, called Information Technology Based Laboratory (ITBL), has been conducted, leading to the construction of an intelligent infrastructure for atomic energy research called the Atomic Energy Grid InfraStructure (AEGIS) under the Japanese national project 'Development and Applications of Advanced High-Performance Supercomputer'. It aims to enable the synchronization of three themes: 1) Computer-Aided Research and Development (CARD) to realize an environment for STA, 2) Computer-Aided Engineering (CAEN) to establish Multi Experimental Tools (MEXT), and 3) Computer-Aided Science (CASC) to promote Atomic Energy Research and Investigation (AERI). This article reviews the achievements obtained so far in the R and D of grid computing technology. (T. Tanaka)

  17. A Lightweight, High-performance I/O Management Package for Data-intensive Computing

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Jun

    2011-06-22

    Our group has been working with ANL collaborators on the topic of bridging the gap between parallel file systems and local file systems during the course of this project period. We visited Argonne National Lab -- Dr. Robert Ross's group -- for one week in the past summer 2007. We looked over our current project progress and planned the activities for the coming years 2008-09. The PI met Dr. Robert Ross several times, such as at the HEC FSIO workshop 08, SC08 and SC10. We explored the opportunities to develop a production system by leveraging our current prototype (SOGP+PVFS) into a new PVFS version. We delivered the SOGP+PVFS codes to the ANL PVFS2 group in 2008. We also talked about exploring a potential project on developing new parallel programming models and runtime systems for data-intensive scalable computing (DISC). The methodology is to evolve MPI towards DISC by incorporating some functions of the Google MapReduce parallel programming model. More recently, we have together been exploring how to leverage existing works to perform (1) coordination/aggregation of local I/O operations prior to movement over the WAN, (2) efficient bulk data movement over the WAN, and (3) latency hiding techniques for latency-intensive operations. Since 2009, we have been applying Hadoop/MapReduce to some HEC applications with LANL scientists John Bent and Salman Habib. Another ongoing work is to improve checkpoint performance at the I/O forwarding layer for the Roadrunner supercomputer with James Nunez and Gary Grider at LANL. Two senior undergraduates from our research group did summer internships on high-performance file and storage system projects at LANL for three consecutive years since 2008. Both of them are now pursuing Ph.D. degrees in our group, will be in the 4th year of the PhD program in Fall 2011, and will go to LANL to advance the two above-mentioned works during this winter break. Since 2009, we have been collaborating with several computer scientists (Gary Grider, John Bent, Parks Fields, James Nunez, Hsing

  18. National cyber defense high performance computing and analysis : concepts, planning and roadmap.

    Energy Technology Data Exchange (ETDEWEB)

    Hamlet, Jason R.; Keliiaa, Curtis M.

    2010-09-01

    There is a national cyber dilemma that threatens the very fabric of government, commercial and private use operations worldwide. Much is written about 'what' the problem is, and though the basis for this paper is an assessment of the problem space, we target the 'how' solution space of the wide-area national information infrastructure through the advancement of science, technology, evaluation and analysis with actionable results intended to produce a more secure national information infrastructure and a comprehensive national cyber defense capability. This cybersecurity High Performance Computing (HPC) analysis concepts, planning and roadmap activity was conducted as an assessment of cybersecurity analysis as a fertile area of research and investment for high value cybersecurity wide-area solutions. This report and a related SAND2010-4765 Assessment of Current Cybersecurity Practices in the Public Domain: Cyber Indications and Warnings Domain report are intended to provoke discussion throughout a broad audience about developing a cohesive HPC centric solution to wide-area cybersecurity problems.

  19. High-Performance Computer Modeling of the Cosmos-Iridium Collision

    Energy Technology Data Exchange (ETDEWEB)

    Olivier, S; Cook, K; Fasenfest, B; Jefferson, D; Jiang, M; Leek, J; Levatin, J; Nikolaev, S; Pertica, A; Phillion, D; Springer, K; De Vries, W

    2009-08-28

    This paper describes the application of a new, integrated modeling and simulation framework, encompassing the space situational awareness (SSA) enterprise, to the recent Cosmos-Iridium collision. This framework is based on a flexible, scalable architecture to enable efficient simulation of the current SSA enterprise, and to accommodate future advancements in SSA systems. In particular, the code is designed to take advantage of massively parallel, high-performance computer systems available, for example, at Lawrence Livermore National Laboratory. We will describe the application of this framework to the recent collision of the Cosmos and Iridium satellites, including (1) detailed hydrodynamic modeling of the satellite collision and resulting debris generation, (2) orbital propagation of the simulated debris and analysis of the increased risk to other satellites, (3) calculation of the radar and optical signatures of the simulated debris and modeling of debris detection with space surveillance radar and optical systems, (4) determination of simulated debris orbits from modeled space surveillance observations and analysis of the resulting orbital accuracy, and (5) comparison of these modeling and simulation results with Space Surveillance Network observations. We will also discuss the use of this integrated modeling and simulation framework to analyze the risks and consequences of future satellite collisions and to assess strategies for mitigating or avoiding future incidents, including the addition of new sensor systems, used in conjunction with the Space Surveillance Network, for improving space situational awareness.

  20. Analysis of Application Power and Schedule Composition in a High Performance Computing Environment

    Energy Technology Data Exchange (ETDEWEB)

    Elmore, Ryan [National Renewable Energy Lab. (NREL), Golden, CO (United States); Gruchalla, Kenny [National Renewable Energy Lab. (NREL), Golden, CO (United States); Phillips, Caleb [National Renewable Energy Lab. (NREL), Golden, CO (United States); Purkayastha, Avi [National Renewable Energy Lab. (NREL), Golden, CO (United States); Wunder, Nick [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2016-01-05

    As the capacity of high performance computing (HPC) systems continues to grow, small changes in energy management have the potential to produce significant energy savings. In this paper, we employ an extensive informatics system for aggregating and analyzing real-time performance and power use data to evaluate the energy footprints of jobs running in an HPC data center. We look at the effects of algorithmic choices for a given job on the resulting energy footprints, analyze application-specific power consumption, and summarize average power use in the aggregate. All of these views reveal meaningful power variance between classes of applications as well as between the chosen methods for a given job. Using these data, we discuss energy-aware cost-saving strategies based on reordering the HPC job schedule. Using historical job and power data, we present a hypothetical job schedule reordering that: (1) reduces the facility's peak power draw and (2) manages power in conjunction with a large-scale photovoltaic array. Lastly, we leverage these data to understand the practical limits on predicting key power use metrics at the time of submission.
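    To make the reordering idea concrete, below is a small, hypothetical sketch of a greedy heuristic that spreads power-hungry jobs across scheduling windows to flatten the aggregate draw; the job names and wattages are invented, and the actual study is based on measured historical job and power data rather than this toy rule.

        # Greedy assignment of jobs to concurrent scheduling windows that keeps
        # the peak aggregate power draw low: place each job, from most to least
        # power-hungry, into the currently lightest window.
        jobs = {"cfd_run": 1200, "md_sim": 850, "climate": 900,
                "post_proc": 120, "genomics": 770, "fea": 640}   # watts, invented

        n_windows = 3
        windows = [[] for _ in range(n_windows)]
        load = [0] * n_windows

        for name, watts in sorted(jobs.items(), key=lambda kv: -kv[1]):
            i = load.index(min(load))        # lightest window so far
            windows[i].append(name)
            load[i] += watts

        print("peak draw:", max(load), "W")
        for i, window in enumerate(windows):
            print(f"window {i}: {window} -> {load[i]} W")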

  1. Performance Analysis of Ivshmem for High-Performance Computing in Virtual Machines

    Science.gov (United States)

    Ivanovic, Pavle; Richter, Harald

    2018-01-01

    High-performance computing (HPC) is rarely accomplished via virtual machines (VMs). In this paper, we present a remake of ivshmem which can change this. Ivshmem was a shared memory (SHM) between virtual machines on the same server, with SHM-access synchronization included, until about 5 years ago, when newer versions of Linux and its virtualization library libvirt evolved. We restored that SHM-access synchronization feature, because it is indispensable for HPC, and made ivshmem runnable with contemporary versions of Linux, libvirt, KVM, QEMU and especially MPICH, which is an implementation of MPI - the standard HPC communication library. Additionally, MPICH was transparently modified by us to include ivshmem, resulting in a three to ten times performance improvement compared to TCP/IP. Furthermore, we have transparently replaced MPI_PUT, a one-sided MPICH communication mechanism, with our own MPI_PUT wrapper. As a result, our ivshmem even surpasses non-virtualized SHM data transfers for block lengths greater than 512 KBytes, showing the benefits of virtualization. All improvements were possible without using SR-IOV.

  2. A parallel calibration utility for WRF-Hydro on high performance computers

    Science.gov (United States)

    Wang, J.; Wang, C.; Kotamarthi, V. R.

    2017-12-01

    Successful modeling of complex hydrological processes comprises establishing an integrated hydrological model which simulates the hydrological processes in each water regime, calibrating and validating the model performance based on observation data, and estimating the uncertainties from different sources, especially those associated with parameters. Such a model system requires large computing resources and often has to be run on High Performance Computers (HPC). The recently developed WRF-Hydro modeling system provides a significant advancement in the capability to simulate regional water cycles more completely. The WRF-Hydro model has a large range of parameters, such as those in the input table files — GENPARM.TBL, SOILPARM.TBL and CHANPARM.TBL — and several distributed scaling factors such as OVROUGHRTFAC. These parameters affect the behavior and outputs of the model and thus may need to be calibrated against observations in order to obtain good modeling performance. Having a parameter calibration tool specifically for automated calibration and uncertainty estimation of the WRF-Hydro model can provide significant convenience for the modeling community. In this study, we developed a customized tool based on the parallel version of the model-independent parameter estimation and uncertainty analysis tool, PEST, and enabled it to run on HPC systems with the PBS and SLURM workload managers and job schedulers. We also developed a series of PEST input file templates specifically for WRF-Hydro model calibration and uncertainty analysis. Here we will present a flood case study that occurred in April 2013 over the Midwest. The sensitivities and uncertainties are analyzed using the customized PEST tool we developed.

  3. Parallel grid generation algorithm for distributed memory computers

    Science.gov (United States)

    Moitra, Stuti; Moitra, Anutosh

    1994-01-01

    A parallel grid-generation algorithm and its implementation on the Intel iPSC/860 computer are described. The grid-generation scheme is based on an algebraic formulation of homotopic relations. Methods for utilizing the inherent parallelism of the grid-generation scheme are described, and the implementation of multiple levels of parallelism on multiple-instruction multiple-data machines is indicated. The algorithm is capable of providing near orthogonality and spacing control at solid boundaries while requiring minimal interprocessor communications. Results obtained on the Intel hypercube for a blended wing-body configuration are used to demonstrate the effectiveness of the algorithm. Fortran implementations based on the native programming model of the iPSC/860 computer and the Express system of software tools are reported. Computational gains in execution time speed-up ratios are given.

  4. Grid computing in high energy physics

    CERN Document Server

    Avery, P

    2004-01-01

    Over the next two decades, major high energy physics (HEP) experiments, particularly at the Large Hadron Collider, will face unprecedented challenges to achieving their scientific potential. These challenges arise primarily from the rapidly increasing size and complexity of HEP datasets that will be collected and the enormous computational, storage and networking resources that will be deployed by global collaborations in order to process, distribute and analyze them. Coupling such vast information technology resources to globally distributed collaborations of several thousand physicists requires extremely capable computing infrastructures supporting several key areas: (1) computing (providing sufficient computational and storage resources for all processing, simulation and analysis tasks undertaken by the collaborations); (2) networking (deploying high speed networks to transport data quickly between institutions around the world); (3) software (supporting simple and transparent access to data and software r...

  5. High performance computing, supercomputing, náročné počítání

    Czech Academy of Sciences Publication Activity Database

    Okrouhlík, Miloslav

    2003-01-01

    Roč. 10, č. 5 (2003), s. 429-438 ISSN 1210-2717 R&D Projects: GA ČR GA101/02/0072 Institutional research plan: CEZ:AV0Z2076919 Keywords: high performance computing * vector and parallel computers * programming tools for parallelization Subject RIV: BI - Acoustics

  6. Simulation and high performance computing-Building a predictive capability for fusion

    International Nuclear Information System (INIS)

    Strand, P.I.; Coelho, R.; Coster, D.; Eriksson, L.-G.; Imbeaux, F.; Guillerminet, Bernard

    2010-01-01

    The Integrated Tokamak Modelling Task Force (ITM-TF) is developing an infrastructure in which the validation needs, formulated in terms of multi-device data access and detailed physics comparisons aiming at the inclusion of synthetic diagnostics in the simulation chain, are key components. As the activity and the modelling tools are aimed at general use, although focused on ITER plasmas, a device-independent approach to data transport and a standardized approach to data management (data structures, naming, and access) are being developed in order to allow cross-validation between different fusion devices using a single toolset. Extensive work has already gone, and continues to go, into the development of standardized descriptions of the data (Consistent Physical Objects). The longer-term aim is a complete simulation platform which is expected to last, and to be extended in different ways, over the coming 30 years. The technical underpinning is therefore of vital importance. In particular, the platform needs to be extensible and open-ended in order to take full advantage not only of today's most advanced technologies but also of future developments. As a full-level comprehensive prediction of ITER physics rapidly becomes expensive in terms of computing resources, the simulation framework needs to be able to use both grid and HPC computing facilities. Hence data access and code coupling technologies are required to be available for a heterogeneous, possibly distributed, environment. The developments in this area are pursued in a separate project, EUFORIA (EU Fusion for ITER Applications), which is providing about 15 professional person years (ppy) per annum from 14 different institutes. The range and size of the activity is not only technically challenging but is providing some unique management challenges, in that a large and geographically distributed team (a truly pan-European set of researchers) needs to be coordinated on a fairly detailed

  7. First Experiences with LHC Grid Computing and Distributed Analysis

    CERN Document Server

    Fisk, Ian

    2010-01-01

    In this presentation the experiences of the LHC experiments using grid computing were presented with a focus on experience with distributed analysis. After many years of development, preparation, exercises, and validation the LHC (Large Hadron Collider) experiments are in operations. The computing infrastructure has been heavily utilized in the first 6 months of data collection. The general experience of exploiting the grid infrastructure for organized processing and preparation is described, as well as the successes employing the infrastructure for distributed analysis. At the end the expected evolution and future plans are outlined.

  8. High-performance floating-point image computing workstation for medical applications

    Science.gov (United States)

    Mills, Karl S.; Wong, Gilman K.; Kim, Yongmin

    1990-07-01

    The medical imaging field relies increasingly on imaging and graphics techniques in diverse applications with needs similar to (or more stringent than) those of the military, industrial and scientific communities. However, most image processing and graphics systems available for use in medical imaging today are either expensive, specialized, or in most cases both. High-performance imaging and graphics workstations which can provide real-time results for a number of applications, while maintaining affordability and flexibility, can facilitate the application of digital image computing techniques in many different areas. This paper describes the hardware and software architecture of a medium-cost floating-point image processing and display subsystem for the NeXT computer, and its applications as a medical imaging workstation. Medical imaging applications of the workstation include use in a Picture Archiving and Communications System (PACS), as a multimodal image processing and 3-D graphics workstation for a broad range of imaging modalities, and as an electronic alternator utilizing its multiple-monitor display capability and large and fast frame buffer. The subsystem provides a 2048 x 2048 x 32-bit frame buffer (16 Mbytes of image storage) and supports both 8-bit gray scale and 32-bit true color images. When used to display 8-bit gray scale images, up to four different 256-color palettes may be used for each of four 2K x 2K x 8-bit image frames. Three of these image frames can be used simultaneously to provide pixel-selectable region-of-interest display. A 1280 x 1024 pixel screen with a 1:1 aspect ratio can be windowed into the frame buffer for display of any portion of the processed image or images. In addition, the system provides hardware support for integer zoom and an 82-color cursor. This subsystem is implemented on an add-in board occupying a single slot in the NeXT computer. Up to three boards may be added to the NeXT for multiple display capability (e

  9. Exploring the meteorological potential for planning a high performance European electricity super-grid: optimal power capacity distribution among countries

    Science.gov (United States)

    Santos-Alamillos, Francisco J.; Brayshaw, David J.; Methven, John; Thomaidis, Nikolaos S.; Ruiz-Arias, José A.; Pozo-Vázquez, David

    2017-11-01

    The concept of a European super-grid for electricity presents clear advantages for reliable and affordable renewable power production (photovoltaics and wind). Based on a mean-variance portfolio optimization analysis, we explore optimal scenarios for the allocation of new renewable capacity at the national level in order to provide energy decision-makers with guidance about which regions should be targeted most to either maximize total production or reduce its day-to-day variability. The results show that the existing distribution of renewable generation capacity across Europe is far from optimal: a 'better' spatial distribution of resources could have been achieved with either a ~31% increase in mean power supply (for the same level of day-to-day variability) or a ~37.5% reduction in day-to-day variability (for the same level of mean productivity). Careful planning of additional increments in renewable capacity at the European level could, however, act to significantly ameliorate this deficiency. The choice of where to deploy resources depends, however, on the objective being pursued: if the goal is to maximize average output, then new capacity is best allocated in the countries with the highest resources, whereas investment in additional capacity in a north/south dipole pattern across Europe would act to most reduce daily variations and thus decrease the day-to-day volatility of renewable power supply.
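    A minimal sketch of the mean-variance idea applied to capacity allocation, using invented regional capacity-factor statistics rather than the meteorological reanalysis data used in the study: the minimum-variance weights are proportional to the inverse covariance matrix applied to a vector of ones, normalized to sum to one.

        import numpy as np

        # Hypothetical daily capacity-factor statistics for three regions.
        mean_cf = np.array([0.32, 0.24, 0.28])        # mean output per unit capacity
        cov = np.array([[ 0.020, 0.004, -0.006],
                        [ 0.004, 0.015,  0.002],
                        [-0.006, 0.002,  0.018]])     # day-to-day covariance

        # Minimum-variance allocation: w ~ inv(cov) @ 1, normalized to sum to 1.
        ones = np.ones(len(mean_cf))
        w = np.linalg.solve(cov, ones)
        w /= w.sum()

        print("capacity shares:", w.round(3))
        print("portfolio mean :", round(float(w @ mean_cf), 4))
        print("portfolio std  :", round(float(np.sqrt(w @ cov @ w)), 4))

    Sweeping a trade-off parameter between the mean term and the variance term traces out the efficient frontier against which an existing capacity distribution can be compared.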

  10. Leveraging High Performance Computing for Managing Large and Evolving Data Collections

    Directory of Open Access Journals (Sweden)

    Ritu Arora

    2014-10-01

    Full Text Available The process of developing a digital collection in the context of a research project often involves a pipeline pattern during which data growth, data types, and data authenticity need to be assessed iteratively in relation to the different research steps and in the interest of archiving. Throughout a project’s lifecycle curators organize newly generated data while cleaning and integrating legacy data when it exists, and deciding what data will be preserved for the long term. Although these actions should be part of a well-oiled data management workflow, there are practical challenges in doing so if the collection is very large and heterogeneous, or is accessed by several researchers contemporaneously. There is a need for data management solutions that can help curators with efficient and on-demand analyses of their collection so that they remain well-informed about its evolving characteristics. In this paper, we describe our efforts towards developing a workflow to leverage open science High Performance Computing (HPC resources for routinely and efficiently conducting data management tasks on large collections. We demonstrate that HPC resources and techniques can significantly reduce the time for accomplishing critical data management tasks, and enable a dynamic archiving throughout the research process. We use a large archaeological data collection with a long and complex formation history as our test case. We share our experiences in adopting open science HPC resources for large-scale data management, which entails understanding usage of the open source HPC environment and training users. These experiences can be generalized to meet the needs of other data curators working with large collections.

  11. High performance communication by people with paralysis using an intracortical brain-computer interface

    Science.gov (United States)

    Pandarinath, Chethan; Nuyujukian, Paul; Blabe, Christine H; Sorice, Brittany L; Saab, Jad; Willett, Francis R; Hochberg, Leigh R

    2017-01-01

    Brain-computer interfaces (BCIs) have the potential to restore communication for people with tetraplegia and anarthria by translating neural activity into control signals for assistive communication devices. While previous pre-clinical and clinical studies have demonstrated promising proofs-of-concept (Serruya et al., 2002; Simeral et al., 2011; Bacher et al., 2015; Nuyujukian et al., 2015; Aflalo et al., 2015; Gilja et al., 2015; Jarosiewicz et al., 2015; Wolpaw et al., 1998; Hwang et al., 2012; Spüler et al., 2012; Leuthardt et al., 2004; Taylor et al., 2002; Schalk et al., 2008; Moran, 2010; Brunner et al., 2011; Wang et al., 2013; Townsend and Platsko, 2016; Vansteensel et al., 2016; Nuyujukian et al., 2016; Carmena et al., 2003; Musallam et al., 2004; Santhanam et al., 2006; Hochberg et al., 2006; Ganguly et al., 2011; O’Doherty et al., 2011; Gilja et al., 2012), the performance of human clinical BCI systems is not yet high enough to support widespread adoption by people with physical limitations of speech. Here we report a high-performance intracortical BCI (iBCI) for communication, which was tested by three clinical trial participants with paralysis. The system leveraged advances in decoder design developed in prior pre-clinical and clinical studies (Gilja et al., 2015; Kao et al., 2016; Gilja et al., 2012). For all three participants, performance exceeded previous iBCIs (Bacher et al., 2015; Jarosiewicz et al., 2015) as measured by typing rate (by a factor of 1.4–4.2) and information throughput (by a factor of 2.2–4.0). This high level of performance demonstrates the potential utility of iBCIs as powerful assistive communication devices for people with limited motor function. Clinical Trial No: NCT00912041 DOI: http://dx.doi.org/10.7554/eLife.18554.001 PMID:28220753

  12. LIAR -- A computer program for the modeling and simulation of high performance linacs

    International Nuclear Information System (INIS)

    Assmann, R.; Adolphsen, C.; Bane, K.; Emma, P.; Raubenheimer, T.; Siemann, R.; Thompson, K.; Zimmermann, F.

    1997-04-01

    The computer program LIAR (LInear Accelerator Research Code) is a numerical modeling and simulation tool for high performance linacs. Among other things, it addresses the needs of state-of-the-art linear colliders where low-emittance, high-intensity beams must be accelerated to energies in the 0.05-1 TeV range. LIAR is designed to be used for a variety of different projects. LIAR allows the study of single- and multi-particle beam dynamics in linear accelerators. It calculates emittance dilutions due to wakefield deflections, linear and non-linear dispersion and chromatic effects in the presence of multiple accelerator imperfections. Both single-bunch and multi-bunch beams can be simulated. Several basic and advanced optimization schemes are implemented. Present limitations arise from the incomplete treatment of bending magnets and sextupoles. A major objective of the LIAR project is to provide an open programming platform for the accelerator physics community. Due to its design, LIAR allows straightforward access to its internal FORTRAN data structures. The program can easily be extended and its interactive command language ensures maximum ease of use. Presently, versions of LIAR are compiled for UNIX and MS Windows operating systems. An interface for the graphical visualization of results is provided. Scientific graphs can be saved in the PS and EPS file formats. In addition, a Mathematica interface has been developed. LIAR now contains more than 40,000 lines of source code in more than 130 subroutines. This report describes the theoretical basis of the program, provides a reference for existing features and explains how to add further commands. The LIAR home page and the ONLINE version of this manual can be accessed at: http://www.slac.stanford.edu/grp/arb/rwa/liar.htm

  13. High performance computation of landscape genomic models including local indicators of spatial association.

    Science.gov (United States)

    Stucki, S; Orozco-terWengel, P; Forester, B R; Duruz, S; Colli, L; Masembe, C; Negrini, R; Landguth, E; Jones, M R; Bruford, M W; Taberlet, P; Joost, S

    2017-09-01

    With the increasing availability of both molecular and topo-climatic data, the main challenges facing landscape genomics - that is the combination of landscape ecology with population genomics - include processing large numbers of models and distinguishing between selection and demographic processes (e.g. population structure). Several methods address the latter, either by estimating a null model of population history or by simultaneously inferring environmental and demographic effects. Here we present samβada, an approach designed to study signatures of local adaptation, with special emphasis on high performance computing of large-scale genetic and environmental data sets. samβada identifies candidate loci using genotype-environment associations while also incorporating multivariate analyses to assess the effect of many environmental predictor variables. This enables the inclusion of explanatory variables representing population structure into the models to lower the occurrences of spurious genotype-environment associations. In addition, samβada calculates local indicators of spatial association for candidate loci to provide information on whether similar genotypes tend to cluster in space, which constitutes a useful indication of the possible kinship between individuals. To test the usefulness of this approach, we carried out a simulation study and analysed a data set from Ugandan cattle to detect signatures of local adaptation with samβada, bayenv, lfmm and an FST outlier method (the FDIST approach in Arlequin) and compare their results. samβada - an open source software for Windows, Linux and Mac OS X available at http://lasig.epfl.ch/sambada - outperforms other approaches and better suits whole-genome sequence data processing. © 2016 The Authors. Molecular Ecology Resources Published by John Wiley & Sons Ltd.
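    For orientation, the genotype-environment association at the core of such scans can be pictured as a per-locus logistic regression of allele presence on environmental predictors; the sketch below uses invented data and scikit-learn, not samβada's own implementation, which also adds multivariate models, population-structure covariates and local spatial statistics.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        n = 200
        temperature = rng.normal(20.0, 5.0, n)            # environmental predictor
        # Hypothetical locus whose allele frequency rises with temperature.
        p = 1.0 / (1.0 + np.exp(-(temperature - 20.0) / 2.5))
        genotype = rng.binomial(1, p)                     # 1 = allele present

        model = LogisticRegression().fit(temperature.reshape(-1, 1), genotype)
        print("association slope:", round(float(model.coef_[0][0]), 3))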

  14. Computing challenges in HEP for WLHC grid

    CERN Document Server

    Muralidharan, Servesh

    2017-01-01

    As CERN moves towards preparation for increasing the luminosity of the particle beam towards the HL-LHC, predictions show that computing demand would outgrow our conservative scaling estimates by over ten times. Fortunately, we are talking about a time scale of roughly ten years to develop new techniques and novel solutions to address this gap in compute resources. Experiments at CERN face a unique scenario wherein they need to scale both latency-sensitive workloads, such as data acquisition from the detectors, and throughput-based ones, such as simulations and the reconstruction of high-level events and physics processes. In this talk we cover some of the ongoing research at the CERN Tier-0 which investigates several aspects of throughput-sensitive workloads that consume significant compute cycles.

  15. Large-scale, high-performance and cloud-enabled multi-model analytics experiments in the context of the Earth System Grid Federation

    Science.gov (United States)

    Fiore, S.; Płóciennik, M.; Doutriaux, C.; Blanquer, I.; Barbera, R.; Williams, D. N.; Anantharaj, V. G.; Evans, B. J. K.; Salomoni, D.; Aloisio, G.

    2017-12-01

    The increasing model resolution in the development of comprehensive Earth System Models is rapidly leading to very large climate simulation outputs that pose significant scientific data management challenges in terms of data sharing, processing, analysis, visualization, preservation, curation, and archiving. Large-scale global experiments for the Climate Model Intercomparison Projects (CMIP) have led to the development of the Earth System Grid Federation (ESGF), a federated data infrastructure which has been serving the CMIP5 experiment, providing access to 2 PB of data for the IPCC Assessment Reports. In such a context, running a multi-model data analysis experiment is very challenging, as it requires the availability of a large amount of data related to multiple climate model simulations and scientific data management tools for large-scale data analytics. To address these challenges, a case study on climate model intercomparison data analysis has been defined and implemented in the context of the EU H2020 INDIGO-DataCloud project. The case study has been tested and validated on CMIP5 datasets, in the context of a large-scale, international testbed involving several ESGF sites (LLNL, ORNL and CMCC), one orchestrator site (PSNC) and one more hosting INDIGO PaaS services (UPV). Additional ESGF sites, such as NCI (Australia) and a couple more in Europe, are also joining the testbed. The added value of the proposed solution is summarized in the following: it implements a server-side paradigm which limits data movement; it relies on a High-Performance Data Analytics (HPDA) stack to address performance; it exploits the INDIGO PaaS layer to support flexible, dynamic and automated deployment of software components; it provides user-friendly web access based on the INDIGO Future Gateway; and finally it integrates, complements and extends the support currently available through ESGF. Overall it provides a new "tool" for climate scientists to run multi-model experiments. At the

  16. Computation of Asteroid Proper Elements on the Grid

    Science.gov (United States)

    Novakovic, B.; Balaz, A.; Knezevic, Z.; Potocnik, M.

    2009-12-01

    A procedure of gridification of the computation of asteroid proper orbital elements is described. The need to speed up the time consuming computations and make them more efficient is justified by the large increase of observational data expected from the next generation all sky surveys. We give the basic notion of proper elements and of the contemporary theories and methods used to compute them for different populations of objects. Proper elements for nearly 70,000 asteroids are derived since the beginning of use of the Grid infrastructure for the purpose. The average time for the catalogs update is significantly shortened with respect to the time needed with stand-alone workstations. We also present basics of the Grid computing, the concepts of Grid middleware and its Workload management system. The practical steps we undertook to efficiently gridify our application are described in full detail. We present the results of a comprehensive testing of the performance of different Grid sites, and offer some practical conclusions based on the benchmark results and on our experience. Finally, we propose some possibilities for the future work.

  17. Computation of Asteroid Proper Elements on the Grid

    Directory of Open Access Journals (Sweden)

    Novaković, B.

    2009-12-01

    Full Text Available A procedure of gridification of the computation of asteroid proper orbital elements is described. The need to speed up the time consuming computations and make them more efficient is justified by the large increase of observational data expected from the next generation all sky surveys. We give the basic notion of proper elements and of the contemporary theories and methods used to compute them for different populations of objects. Proper elements for nearly 70,000 asteroids are derived since the beginning of use of the Grid infrastructure for the purpose. The average time for the catalogs update is significantly shortened with respect to the time needed with stand-alone workstations. We also present basics of the Grid computing, the concepts of Grid middleware and its Workload management system. The practical steps we undertook to efficiently gridify our application are described in full detail. We present the results of a comprehensive testing of the performance of different Grid sites, and offer some practical conclusions based on the benchmark results and on our experience. Finally, we propose some possibilities for the future work.

  18. Grid computing and e-science: a view from inside

    Directory of Open Access Journals (Sweden)

    Stefano Cozzini

    2008-06-01

    Full Text Available My intention is to analyze how, where and if grid computing technology is truly enabling a new way of doing science (so-called ‘e-science’). I will base my views on the experiences accumulated thus far in a number of scientific communities which we have provided with the opportunity of using grid computing. I shall first define some basic terms and concepts and then discuss a number of specific cases in which the use of grid computing has actually made possible a new method for doing science. I will then present a case in which this did not result in a change in research methods. I will try to identify the reasons for these failures and analyze the future evolution of grid computing. I will conclude by introducing and commenting on the concept of ‘cloud computing’, the approach offered and provided by major industrial actors (Google/IBM and Amazon being among the most important), and what impact this technology might have on the world of research.

  19. Computation of asteroid proper elements on the Grid

    Directory of Open Access Journals (Sweden)

    Novaković B.

    2009-01-01

    Full Text Available A procedure of gridification of the computation of asteroid proper orbital elements is described. The need to speed up the time consuming computations and make them more efficient is justified by the large increase of observational data expected from the next generation all sky surveys. We give the basic notion of proper elements and of the contemporary theories and methods used to compute them for different populations of objects. Proper elements for nearly 70,000 asteroids are derived since the beginning of use of the Grid infrastructure for the purpose. The average time for the catalogs update is significantly shortened with respect to the time needed with stand-alone workstations. We also present basics of the Grid computing, the concepts of Grid middleware and its Workload management system. The practical steps we undertook to efficiently gridify our application are described in full detail. We present the results of a comprehensive testing of the performance of different Grid sites, and offer some practical conclusions based on the benchmark results and on our experience. Finally, we propose some possibilities for the future work.

  20. Performance Evaluation of a Mobile Wireless Computational Grid ...

    African Journals Online (AJOL)

    This work developed and simulated a mathematical model for a mobile wireless computational Grid architecture using networks of queues from queueing theory. This was done in order to evaluate the performance of the load-balancing three-tier hierarchical configuration. The throughput and resource utilization metrics were measured and the ...
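    As an illustration of the queueing-theory quantities mentioned above, here is a minimal single-queue (M/M/1) sketch with invented arrival and service rates; the paper's model is a multi-tier network of such queues rather than a single server.

        def mm1_metrics(arrival_rate, service_rate):
            """Steady-state metrics of a single M/M/1 queue."""
            rho = arrival_rate / service_rate           # server utilization
            if rho >= 1.0:
                raise ValueError("queue is unstable (utilization >= 1)")
            throughput = arrival_rate                   # all arriving jobs are served
            mean_jobs = rho / (1.0 - rho)               # mean number in system (L)
            mean_response = mean_jobs / arrival_rate    # Little's law: W = L / lambda
            return rho, throughput, mean_jobs, mean_response

        rho, x, l, w = mm1_metrics(arrival_rate=8.0, service_rate=10.0)   # jobs/s
        print(f"utilization={rho:.2f} throughput={x:.1f}/s jobs={l:.1f} response={w:.3f}s")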

  1. Intrusion Prevention and Detection in Grid Computing - The ALICE Case

    Science.gov (United States)

    Gomez, Andres; Lara, Camilo; Kebschull, Udo

    2015-12-01

    Grids allow users flexible, on-demand usage of computing resources through remote communication networks. A remarkable example of a Grid in High Energy Physics (HEP) research is used in the ALICE experiment at the European Organization for Nuclear Research (CERN). Physicists can submit jobs used to process the huge amount of particle collision data produced by the Large Hadron Collider (LHC). Grids face complex security challenges. They are interesting targets for attackers seeking huge computational resources. Since users can execute arbitrary code in the worker nodes on the Grid sites, special care must be taken in this environment. Automatic tools to harden and monitor this scenario are required. Currently, there is no integrated solution for such a requirement. This paper describes a new security framework that allows execution of job payloads in a sandboxed context. It also allows process behavior monitoring to detect intrusions, even when new attack methods or zero-day vulnerabilities are exploited, by a Machine Learning approach. We plan to implement the proposed framework as a software prototype that will be tested as a component of the ALICE Grid middleware.

  2. Intrusion Prevention and Detection in Grid Computing - The ALICE Case

    International Nuclear Information System (INIS)

    Gomez, Andres; Lara, Camilo; Kebschull, Udo

    2015-01-01

    Grids allow users flexible, on-demand usage of computing resources through remote communication networks. A remarkable example of a Grid in High Energy Physics (HEP) research is used in the ALICE experiment at the European Organization for Nuclear Research (CERN). Physicists can submit jobs used to process the huge amount of particle collision data produced by the Large Hadron Collider (LHC). Grids face complex security challenges. They are interesting targets for attackers seeking huge computational resources. Since users can execute arbitrary code in the worker nodes on the Grid sites, special care must be taken in this environment. Automatic tools to harden and monitor this scenario are required. Currently, there is no integrated solution for such a requirement. This paper describes a new security framework that allows execution of job payloads in a sandboxed context. It also allows process behavior monitoring to detect intrusions, even when new attack methods or zero day vulnerabilities are exploited, by a Machine Learning approach. We plan to implement the proposed framework as a software prototype that will be tested as a component of the ALICE Grid middleware. (paper)

  3. WEKA-G: Parallel data mining on computational grids

    Directory of Open Access Journals (Sweden)

    PIMENTA, A.

    2009-12-01

    Full Text Available Data mining is a technology that can extract useful information from large amounts of data. However, mining a database often requires high computational power. To address this problem, this paper presents a tool (Weka-G) that runs, in parallel, the algorithms used in the data mining process. As the environment for doing so, we use a computational grid built by adding several features within a WAN.

  4. The LHC Computing Grid in the starting blocks

    CERN Multimedia

    Danielle Amy Venton

    2010-01-01

    As the Large Hadron Collider ramps up operations and breaks world records, it is an exciting time for everyone at CERN. To get the computing perspective, the Bulletin this week caught up with Ian Bird, leader of the Worldwide LHC Computing Grid (WLCG). He is confident that everything is ready for the first data.   The metallic globe illustrating the Worldwide LHC Computing GRID (WLCG) in the CERN Computing Centre. The Worldwide LHC Computing Grid (WLCG) collaboration has been in place since 2001 and for the past several years it has continually run the workloads for the experiments as part of their preparations for LHC data taking. So far, the numerous and massive simulations of the full chain of reconstruction and analysis software could only be carried out using Monte Carlo simulated data. Now, for the first time, the system is starting to work with real data and with many simultaneous users accessing them from all around the world. “During the 2009 large-scale computing challenge (...

  5. SAME4HPC: A Promising Approach in Building a Scalable and Mobile Environment for High-Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Karthik, Rajasekar [ORNL

    2014-01-01

    In this paper, an architecture for building a Scalable And Mobile Environment for High-Performance Computing with spatial capabilities, called SAME4HPC, is described using cutting-edge technologies and standards such as Node.js, HTML5, ECMAScript 6, and PostgreSQL 9.4. Mobile devices are increasingly becoming powerful enough to run high-performance apps. At the same time, there exists a significant number of low-end and older devices that rely heavily on the server or the cloud infrastructure to do the heavy lifting. Our architecture aims to support both of these types of devices to provide high performance and a rich user experience. A cloud infrastructure consisting of OpenStack with Ubuntu, GeoServer, and high-performance JavaScript frameworks are some of the key open-source and industry-standard technologies that have been adopted in this architecture.

  6. Infrastructure for Multiphysics Software Integration in High Performance Computing-Aided Science and Engineering

    Energy Technology Data Exchange (ETDEWEB)

    Campbell, Michael T. [Illinois Rocstar LLC, Champaign, IL (United States); Safdari, Masoud [Illinois Rocstar LLC, Champaign, IL (United States); Kress, Jessica E. [Illinois Rocstar LLC, Champaign, IL (United States); Anderson, Michael J. [Illinois Rocstar LLC, Champaign, IL (United States); Horvath, Samantha [Illinois Rocstar LLC, Champaign, IL (United States); Brandyberry, Mark D. [Illinois Rocstar LLC, Champaign, IL (United States); Kim, Woohyun [Illinois Rocstar LLC, Champaign, IL (United States); Sarwal, Neil [Illinois Rocstar LLC, Champaign, IL (United States); Weisberg, Brian [Illinois Rocstar LLC, Champaign, IL (United States)

    2016-10-15

    The project described in this report constructed and exercised an innovative multiphysics coupling toolkit called the Illinois Rocstar MultiPhysics Application Coupling Toolkit (IMPACT). IMPACT is an open source, flexible, natively parallel infrastructure for coupling multiple uniphysics simulation codes into multiphysics computational systems. IMPACT works with codes written in several high-performance-computing (HPC) programming languages, and is designed from the beginning for HPC multiphysics code development. It is designed to be minimally invasive to the individual physics codes being integrated, and has few requirements on those physics codes for integration. The goal of IMPACT is to provide the support needed to enable coupling existing tools together in unique and innovative ways to produce powerful new multiphysics technologies without extensive modification and rewrite of the physics packages being integrated. There are three major outcomes from this project: 1) construction, testing, application, and open-source release of the IMPACT infrastructure, 2) production of example open-source multiphysics tools using IMPACT, and 3) identification and engagement of interested organizations in the tools and applications resulting from the project. This last outcome represents the incipient development of a user community and application ecosystem being built using IMPACT. Multiphysics coupling standardization can only come from organizations working together to define needs and processes that span the space of necessary multiphysics outcomes, which Illinois Rocstar plans to continue driving toward. The IMPACT system, including source code, documentation, and test problems, is now available through the public GitHub system to anyone interested in multiphysics code coupling. Many of the basic documents explaining the use and architecture of IMPACT are also attached as appendices to this document. Online HTML documentation is available through the GitHub site

  7. The extended RBAC model based on grid computing

    Institute of Scientific and Technical Information of China (English)

    CHEN Jian-gang; WANG Ru-chuan; WANG Hai-yan

    2006-01-01

    This article proposes an extended role-based access control (RBAC) model for solving dynamic and multidomain problems in grid computing, and a formal description of the model is provided. The introduction of context, together with the context-to-role and context-to-permission mapping relations, helps the model adapt to the dynamic properties of the grid environment. The multidomain role inheritance relation, established through the authorization agent service, realizes multidomain authorization among autonomous domains. A function is proposed for resolving role inheritance conflicts during the establishment of the multidomain role inheritance relation.
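    A minimal, hypothetical sketch of the context-to-role and context-to-permission idea (not the paper's formal model): a permission is granted only if the request context activates a role that holds it.

        # Hypothetical context-aware RBAC check for a grid site: a role is
        # activated only for contexts (domain, time slot) that map to it.
        CONTEXT_TO_ROLES = {
            ("physics-domain", "office-hours"): {"job_submitter"},
            ("physics-domain", "night"):        {"job_monitor"},
        }
        ROLE_TO_PERMS = {
            "job_submitter": {"submit_job", "cancel_own_job"},
            "job_monitor":   {"read_status"},
        }

        def allowed(context, permission):
            roles = CONTEXT_TO_ROLES.get(context, set())
            return any(permission in ROLE_TO_PERMS.get(r, set()) for r in roles)

        print(allowed(("physics-domain", "office-hours"), "submit_job"))  # True
        print(allowed(("physics-domain", "night"), "submit_job"))         # False

    Multidomain role inheritance would then extend the role sets above with roles mapped in from other administrative domains by the authorization agent.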

  8. CMS Monte Carlo production in the WLCG computing grid

    International Nuclear Information System (INIS)

    Hernandez, J M; Kreuzer, P; Hof, C; Khomitch, A; Mohapatra, A; Filippis, N D; Pompili, A; My, S; Abbrescia, M; Maggi, G; Donvito, G; Weirdt, S D; Maes, J; Mulders, P v; Villella, I; Wakefield, S; Guan, W; Fanfani, A; Evans, D; Flossdorf, A

    2008-01-01

    Monte Carlo production in CMS has received a major boost in performance and scale since the past CHEP06 conference. The production system has been re-engineered in order to incorporate the experience gained in running the previous system and to integrate production with the new CMS event data model, data management system and data processing framework. The system is interfaced to the two major computing Grids used by CMS, the LHC Computing Grid (LCG) and the Open Science Grid (OSG). Operational experience and integration aspects of the new CMS Monte Carlo production system are presented together with an analysis of production statistics. The new system automatically handles job submission, resource monitoring, job queuing, job distribution according to the available resources, data merging, and the registration of data into the data bookkeeping, data location, data transfer and placement systems. Compared to the previous production system, automation, reliability and performance have been considerably improved. A more efficient use of computing resources and a better handling of the inherent Grid unreliability have resulted in an increase of production scale by about an order of magnitude, capable of running of the order of ten thousand jobs in parallel and yielding more than two million events per day

  9. Computing on the grid and in the cloud

    CERN Multimedia

    CERN. Geneva

    2014-01-01

    "The results today are only possible because of the extraordinary performance of the accelerators, including the infrastructure, the experiments, and the Grid computing." These were the words of the CERN Director General Rolf Heuer when the observation of a new particle consistent with a Higgs Boson was revealed to the world on the 4th July 2012. The end result of the all investments made to build and operate the LHC is the data that are recorded and the knowledge that can be extracted. It is the role of the global computing infrastructure to unlock the value that is encapsulated in the data. This lecture provides a detailed overview of the Worldwide LHC Computing Grid, an international collaboration to distribute and analyse the LHC data.

  10. Computing on the grid and in the cloud

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    "The results today are only possible because of the extraordinary performance of the accelerators, including the infrastructure, the experiments, and the Grid computing." These were the words of the CERN Director General Rolf Heuer when the observation of a new particle consistent with a Higgs Boson was revealed to the world on the 4th July 2012. The end result of the all investments made to build and operate the LHC is the data that are recorded and the knowledge that can be extracted. It is the role of the global computing infrastructure to unlock the value that is encapsulated in the data. This lecture provides a detailed overview of the Worldwide LHC Computing Grid, an international collaboration to distribute and analyse the LHC data.

  11. FY06 NRL DoD High Performance Computing Modernization Program Annual Reports

    Science.gov (United States)

    2007-10-31

    nonequilibrium MD with isokinetic thermostatting. We have explored both the Green-Kubo and the mean-square displacement approaches to transport... codes. The flow solver was combined with adaptive re-meshing techniques for these transient problems with moving grids and was also integrated with the... simple one-dimensional eddy viscosity to a fully three-dimensional direct numerical simulation. Coupling between fluid and sediment phases varies from

  12. High performance computing and quantum trajectory method in CPU and GPU systems

    International Nuclear Information System (INIS)

    Wiśniewska, Joanna; Sawerwain, Marek; Leoński, Wiesław

    2015-01-01

    Nowadays, dynamic progress in computational techniques allows for the development of various methods which offer a significant speed-up of computations, especially those related to problems of quantum optics and quantum computing. In this work, we propose computational solutions which re-implement the quantum trajectory method (QTM) algorithm in modern parallel computation environments in which multi-core CPUs and modern many-core GPUs can be used. In consequence, new computational routines are developed in a more effective way than those applied in other commonly used packages, such as the Quantum Optics Toolbox (QOT) for Matlab or QuTiP for Python.
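
    For readers unfamiliar with the algorithm being parallelized, the quantum trajectory (Monte Carlo wave-function) method alternates deterministic evolution under a non-Hermitian effective Hamiltonian with stochastic quantum jumps. The minimal NumPy sketch below, a single trajectory for a decaying two-level atom, is purely illustrative and is not the CPU/GPU routines developed in the paper; averaging many such independent trajectories is what maps naturally onto the parallel hardware discussed:

        import numpy as np

        def mcwf_trajectory(psi0, H, c_ops, dt, n_steps, rng):
            """One quantum trajectory: first-order Monte Carlo wave-function (MCWF) integration."""
            psi = psi0.astype(complex).copy()
            H_eff = H - 0.5j * sum(c.conj().T @ c for c in c_ops)   # non-Hermitian effective Hamiltonian
            states = []
            for _ in range(n_steps):
                psi_new = psi - 1j * dt * (H_eff @ psi)             # deterministic Euler step
                dp = 1.0 - np.vdot(psi_new, psi_new).real           # jump probability for this step
                if rng.random() < dp:
                    rates = np.array([np.vdot(c @ psi, c @ psi).real for c in c_ops])
                    k = rng.choice(len(c_ops), p=rates / rates.sum())
                    psi = c_ops[k] @ psi                            # quantum jump
                else:
                    psi = psi_new
                psi /= np.linalg.norm(psi)
                states.append(psi.copy())
            return np.array(states)

        # toy system: spontaneously decaying two-level atom, basis ordering (ground, excited)
        rng = np.random.default_rng(1)
        sigma_minus = np.array([[0.0, 1.0], [0.0, 0.0]], dtype=complex)
        gamma = 0.2
        traj = mcwf_trajectory(np.array([0.0, 1.0]), np.zeros((2, 2)),
                               [np.sqrt(gamma) * sigma_minus], dt=0.05, n_steps=200, rng=rng)
        # averaging |excited amplitude|^2 over many independent trajectories approximates exp(-gamma * t)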

  13. WinSCP for Windows File Transfers | High-Performance Computing | NREL

    Science.gov (United States)

    WinSCP can be used to securely transfer files between your local computer running Microsoft Windows and a remote computer running Linux.

  14. DZero data-intensive computing on the Open Science Grid

    International Nuclear Information System (INIS)

    Abbott, B.; Baranovski, A.; Diesburg, M.; Garzoglio, G.; Kurca, T.; Mhashilkar, P.

    2007-01-01

    High energy physics experiments periodically reprocess data in order to take advantage of improved understanding of the detector and of the data processing code. Between February and May 2007, the DZero experiment reprocessed a substantial fraction of its dataset. This consists of half a billion events, corresponding to about 100 TB of data, organized in 300,000 files. The activity utilized resources from sites around the world, including a dozen sites participating in the Open Science Grid consortium (OSG). About 1,500 jobs were run every day across the OSG, consuming and producing hundreds of gigabytes of data. Access to OSG computing and storage resources was coordinated by the SAM-Grid system. This system organized job access to a complex topology of data queues and job scheduling to clusters, using a SAM-Grid to OSG job forwarding infrastructure. For the first time in the lifetime of the experiment, a data intensive production activity was managed on a general purpose grid such as OSG. This paper describes the implications of using OSG, where all resources are granted following an opportunistic model, the challenges of operating a data intensive activity over such a large computing infrastructure, and the lessons learned throughout the project.

  15. DZero data-intensive computing on the Open Science Grid

    International Nuclear Information System (INIS)

    Abbott, B; Baranovski, A; Diesburg, M; Garzoglio, G; Mhashilkar, P; Kurca, T

    2008-01-01

    High energy physics experiments periodically reprocess data in order to take advantage of improved understanding of the detector and of the data processing code. Between February and May 2007, the DZero experiment reprocessed a substantial fraction of its dataset. This consists of half a billion events, corresponding to about 100 TB of data, organized in 300,000 files. The activity utilized resources from sites around the world, including a dozen sites participating in the Open Science Grid consortium (OSG). About 1,500 jobs were run every day across the OSG, consuming and producing hundreds of gigabytes of data. Access to OSG computing and storage resources was coordinated by the SAM-Grid system. This system organized job access to a complex topology of data queues and job scheduling to clusters, using a SAM-Grid to OSG job forwarding infrastructure. For the first time in the lifetime of the experiment, a data intensive production activity was managed on a general purpose grid such as OSG. This paper describes the implications of using OSG, where all resources are granted following an opportunistic model, the challenges of operating a data intensive activity over such a large computing infrastructure, and the lessons learned throughout the project.

  16. The computer program LIAR for the simulation and modeling of high performance linacs

    International Nuclear Information System (INIS)

    Assmann, R.; Adolphsen, C.; Bane, K.; Emma, P.; Raubenheimer, T.O.; Siemann, R.; Thompson, K.; Zimmermann, F.

    1997-07-01

    High performance linear accelerators are the central components of the proposed next generation of linear colliders. They must provide acceleration of up to 750 GeV per beam while maintaining small normalized emittances. Standard simulation programs, mainly developed for storage rings, did not meet the specific requirements for high performance linacs with high bunch charges and strong wakefields. The authors present the program LIAR (LInear Accelerator Research code), which includes single and multi-bunch wakefield effects, a 6D coupled beam description, specific optimization algorithms and other advanced features. LIAR has been applied to and checked against the existing Stanford Linear Collider (SLC), the linacs of the proposed Next Linear Collider (NLC) and the proposed Linac Coherent Light Source (LCLS) at SLAC. Its modular structure allows easy extension for different purposes. The program is available for UNIX workstations and Windows PCs.

  17. Energy Efficiency Evaluation and Benchmarking of AFRL’s Condor High Performance Computer

    Science.gov (United States)

    2011-08-01

    PlayStation 3 nodes executing the HPL benchmark. When idle, the two PS3s consume 188.49 W on average. At peak HPL performance, the nodes draw an average of... the High Performance LINPACK (HPL) benchmark while also measuring the energy consumed to achieve such performance. Supercomputers are ranked by

  18. A high-performance data acquisition system for computer-based multichannel analyzer

    International Nuclear Information System (INIS)

    Zhou Xinzhi; Bai Rongsheng; Wen Liangbi; Huang Yanwen

    1996-01-01

    A high-performance data acquisition system for use in multichannel analyzers is designed around a single-chip microcomputer system. The paper presents the principle and the method of realizing simultaneous data acquisition, data pre-processing, and fast bidirectional data transfer by means of direct memory access based on dual-port RAM. The dead time and live time of the ADC system can also be measured efficiently with this design.

  19. Operating the worldwide LHC computing grid: current and future challenges

    International Nuclear Information System (INIS)

    Molina, J Flix; Forti, A; Girone, M; Sciaba, A

    2014-01-01

    The Worldwide LHC Computing Grid project (WLCG) provides the computing and storage resources required by the LHC collaborations to store, process and analyse their data. It includes almost 200,000 CPU cores, 200 PB of disk storage and 200 PB of tape storage distributed among more than 150 sites. The WLCG operations team is responsible for several essential tasks, such as the coordination of testing and deployment of Grid middleware and services, communication with the experiments and the sites, follow-up and resolution of operational issues, and medium/long term planning. In 2012 WLCG critically reviewed all operational procedures and restructured the organisation of the operations team as a more coherent effort in order to improve its efficiency. In this paper we describe how the new organisation works, its recent successes and the changes to be implemented during the long LHC shutdown in preparation for LHC Run 2.

  20. Unified, Cross-Platform, Open-Source Library Package for High-Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Kozacik, Stephen [EM Photonics, Inc., Newark, DE (United States)

    2017-05-15

    Compute power is continually increasing, but this increased performance is largely found in sophisticated computing devices and supercomputer resources that are difficult to use, resulting in under-utilization. We developed a unified set of programming tools that allow users to take full advantage of the new technology by working at a level abstracted away from platform specifics, encouraging the use of modern computing systems, including government-funded supercomputer facilities.

  1. FLASHRAD: A 3D Rad Hard Memory Module For High Performance Space Computers, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — The computing capabilities of onboard spacecraft are a major limiting factor for accomplishing many classes of future missions. Although technology development...

  2. Very High-Performance Embedded Computing Will Allow Ambitious Space Science Investigation

    National Research Council Canada - National Science Library

    Pignol, Michel

    2005-01-01

    ... developed on radiation-tolerant technologies. Unfortunately, the microprocessors available today on such technologies offer only the computing throughput that was available on the commercial market about 10 years ago...

  3. High Performance Computing and Storage Requirements for Biological and Environmental Research Target 2017

    Energy Technology Data Exchange (ETDEWEB)

    Gerber, Richard [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). National Energy Research Scientific Computing Center (NERSC); Wasserman, Harvey [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). National Energy Research Scientific Computing Center (NERSC)

    2013-05-01

    The National Energy Research Scientific Computing Center (NERSC) is the primary computing center for the DOE Office of Science, serving approximately 4,500 users working on some 650 projects that involve nearly 600 codes in a wide variety of scientific disciplines. In addition to large-scale computing and storage resources NERSC provides support and expertise that help scientists make efficient use of its systems. The latest review revealed several key requirements, in addition to achieving its goal of characterizing BER computing and storage needs.

  4. Monte Carlo simulation with the Gate software using grid computing

    International Nuclear Information System (INIS)

    Reuillon, R.; Hill, D.R.C.; Gouinaud, C.; El Bitar, Z.; Breton, V.; Buvat, I.

    2009-03-01

    Monte Carlo simulations are widely used in emission tomography, for protocol optimization, design of processing or data analysis methods, tomographic reconstruction, or tomograph design optimization. Monte Carlo simulations needing many replicates to obtain good statistical results can be easily executed in parallel using the 'Multiple Replications In Parallel' approach. However, several precautions have to be taken in the generation of the parallel streams of pseudo-random numbers. In this paper, we present the distribution of Monte Carlo simulations performed with the GATE software using local clusters and grid computing. We obtained very convincing results with this large medical application, thanks to the EGEE Grid (Enabling Grids for E-sciencE), achieving in one week computations that would have taken more than 3 years of processing on a single computer. This work was achieved thanks to a generic object-oriented toolbox called DistMe which we designed to automate this kind of parallelization for Monte Carlo simulations. This toolbox, written in Java, is freely available on SourceForge and helped to ensure a rigorous distribution of pseudo-random number streams. It is based on the use of a documented XML format for the statuses of random number generators. (authors)
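
    The precaution highlighted in this record, giving every parallel replication its own statistically independent pseudo-random stream, can be sketched with modern NumPy seed-spawning; this is a minimal illustration of the 'Multiple Replications In Parallel' idea, not the DistMe toolbox itself:

        import numpy as np
        from concurrent.futures import ProcessPoolExecutor

        def one_replication(seed_entropy, n_samples=1_000_000):
            """A single Monte Carlo replication with its own independent random stream."""
            rng = np.random.default_rng(seed_entropy)      # child stream, independent of the others
            x, y = rng.random(n_samples), rng.random(n_samples)
            return 4.0 * np.mean(x * x + y * y <= 1.0)     # toy estimate of pi, standing in for a GATE run

        if __name__ == "__main__":
            # spawn non-overlapping child streams from one master seed, one per replication
            children = np.random.SeedSequence(2024).spawn(8)
            with ProcessPoolExecutor() as pool:
                estimates = list(pool.map(one_replication, children))
            print(np.mean(estimates), np.std(estimates))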

  5. The Model of the Software Running on a Computer Equipment Hardware Included in the Grid network

    Directory of Open Access Journals (Sweden)

    T. A. Mityushkina

    2012-12-01

    Full Text Available A new approach to building a cloud computing environment using Grid networks is proposed in this paper. The authors describe the functional capabilities, the algorithm, and the model of the software running on computer hardware included in the Grid network that will allow a cloud computing environment to be implemented using Grid technologies.

  6. gLExec: gluing grid computing to the Unix world

    Science.gov (United States)

    Groep, D.; Koeroo, O.; Venekamp, G.

    2008-07-01

    The majority of compute resources in today's scientific grids are based on Unix and Unix-like operating systems. In this world, user and user-group management are based around the concepts of a numeric 'user ID' and 'group ID' that are local to the resource. In contrast, grid concepts of user and group management are centered around globally assigned identifiers and VO membership, structures that are independent of any specific resource. At the fabric boundary, these 'grid identities' have to be translated to Unix user IDs. New job submission methodologies, such as job-execution web services, community-deployed local schedulers, and the late binding of user jobs in a grid-wide overlay network of 'pilot jobs', push this fabric boundary ever further down into the resource. gLExec, a light-weight (and thereby auditable) credential mapping and authorization system, addresses these issues. It can be run both on the fabric boundary, as part of an execution web service, and on the worker node in a late-binding scenario. In this contribution we describe the rationale for gLExec, how it interacts with site authorization and credential mapping frameworks such as LCAS, LCMAPS and GUMS, and how it can be used to improve site control and traceability in a pilot-job system.
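
    The core mapping step, translating a global grid identity (certificate DN plus VO attributes) into a local Unix account drawn from a per-VO pool, can be sketched roughly as follows. This is a toy illustration of the concept, not gLExec/LCMAPS code; the pool account names and hashing policy are invented for the example, and a real service would keep persistent, audited leases and then switch to the mapped account before running the payload:

        # Toy sketch of grid-identity -> local pool-account mapping (illustration only).
        import hashlib

        POOL_ACCOUNTS = {"cms": [f"cms{i:03d}" for i in range(50)],
                         "atlas": [f"atlas{i:03d}" for i in range(50)]}
        _leases = {}   # in a real service the DN -> account leases live in a persistent, audited database

        def map_to_local_account(dn, vo):
            """Map a certificate DN plus VO name onto a pool account, keeping the mapping stable."""
            key = (dn, vo)
            if key not in _leases:
                pool = POOL_ACCOUNTS[vo]
                idx = int(hashlib.sha256(dn.encode()).hexdigest(), 16) % len(pool)
                _leases[key] = pool[idx]   # a real mapper also checks that the slot is free and logs the lease
            return _leases[key]

        print(map_to_local_account("/DC=org/DC=example/CN=Jane Doe", "cms"))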

  7. gLExec: gluing grid computing to the Unix world

    International Nuclear Information System (INIS)

    Groep, D; Koeroo, O; Venekamp, G

    2008-01-01

    The majority of compute resources in today's scientific grids are based on Unix and Unix-like operating systems. In this world, user and user-group management are based around the concepts of a numeric 'user ID' and 'group ID' that are local to the resource. In contrast, grid concepts of user and group management are centered around globally assigned identifiers and VO membership, structures that are independent of any specific resource. At the fabric boundary, these 'grid identities' have to be translated to Unix user IDs. New job submission methodologies, such as job-execution web services, community-deployed local schedulers, and the late binding of user jobs in a grid-wide overlay network of 'pilot jobs', push this fabric boundary ever further down into the resource. gLExec, a light-weight (and thereby auditable) credential mapping and authorization system, addresses these issues. It can be run both on the fabric boundary, as part of an execution web service, and on the worker node in a late-binding scenario. In this contribution we describe the rationale for gLExec, how it interacts with site authorization and credential mapping frameworks such as LCAS, LCMAPS and GUMS, and how it can be used to improve site control and traceability in a pilot-job system.

  8. National Fusion Collaboratory: Grid Computing for Simulations and Experiments

    Science.gov (United States)

    Greenwald, Martin

    2004-05-01

    The National Fusion Collaboratory Project is creating a computational grid designed to advance scientific understanding and innovation in magnetic fusion research by facilitating collaborations, enabling more effective integration of experiments, theory and modeling and allowing more efficient use of experimental facilities. The philosophy of FusionGrid is that data, codes, analysis routines, visualization tools, and communication tools should be thought of as network available services, easily used by the fusion scientist. In such an environment, access to services is stressed rather than portability. By building on a foundation of established computer science toolkits, deployment time can be minimized. These services all share the same basic infrastructure that allows for secure authentication and resource authorization which allows stakeholders to control their own resources such as computers, data and experiments. Code developers can control intellectual property, and fair use of shared resources can be demonstrated and controlled. A key goal is to shield scientific users from the implementation details such that transparency and ease-of-use are maximized. The first FusionGrid service deployed was the TRANSP code, a widely used tool for transport analysis. Tools for run preparation, submission, monitoring and management have been developed and shared among a wide user base. This approach saves user sites from the laborious effort of maintaining such a large and complex code while at the same time reducing the burden on the development team by avoiding the need to support a large number of heterogeneous installations. Shared visualization and A/V tools are being developed and deployed to enhance long-distance collaborations. These include desktop versions of the Access Grid, a highly capable multi-point remote conferencing tool and capabilities for sharing displays and analysis tools over local and wide-area networks.

  9. FY 2000 Blue Book: High Performance Computing and Communications: Information Technology Frontiers for a New Millennium

    Data.gov (United States)

    Networking and Information Technology Research and Development, Executive Office of the President — As we near the dawn of a new millennium, advances made possible by computing, information, and communications research and development R and D ? once barely...

  10. FY 1994 Blue Book: High Performance Computing and Communications: Toward a National Information Infrastructure

    Data.gov (United States)

    Networking and Information Technology Research and Development, Executive Office of the President — government and industry that advanced computer and telecommunications technologies could provide huge benefits throughout the research community and the entire U.S....

  11. Topic 14+16: High-performance and scientific applications and extreme-scale computing (Introduction)

    KAUST Repository

    Downes, Turlough P.; Roller, Sabine P.; Seitsonen, Ari Paavo; Valcke, Sophie; Keyes, David E.; Sawley, Marie Christine; Schulthess, Thomas C.; Shalf, John M.

    2013-01-01

    and algorithms to address the varied, complex and increasing challenges of modern research throughout both the "hard" and "soft" sciences. This necessitates being able to use large numbers of compute nodes, many of which are equipped with accelerators

  12. JMS: An Open Source Workflow Management System and Web-Based Cluster Front-End for High Performance Computing.

    Directory of Open Access Journals (Sweden)

    David K Brown

    Full Text Available Complex computational pipelines are becoming a staple of modern scientific research. Often these pipelines are resource intensive and require days of computing time. In such cases, it makes sense to run them over high performance computing (HPC clusters where they can take advantage of the aggregated resources of many powerful computers. In addition to this, researchers often want to integrate their workflows into their own web servers. In these cases, software is needed to manage the submission of jobs from the web interface to the cluster and then return the results once the job has finished executing. We have developed the Job Management System (JMS, a workflow management system and web interface for high performance computing (HPC. JMS provides users with a user-friendly web interface for creating complex workflows with multiple stages. It integrates this workflow functionality with the resource manager, a tool that is used to control and manage batch jobs on HPC clusters. As such, JMS combines workflow management functionality with cluster administration functionality. In addition, JMS provides developer tools including a code editor and the ability to version tools and scripts. JMS can be used by researchers from any field to build and run complex computational pipelines and provides functionality to include these pipelines in external interfaces. JMS is currently being used to house a number of bioinformatics pipelines at the Research Unit in Bioinformatics (RUBi at Rhodes University. JMS is an open-source project and is freely available at https://github.com/RUBi-ZA/JMS.

  13. JMS: An Open Source Workflow Management System and Web-Based Cluster Front-End for High Performance Computing.

    Science.gov (United States)

    Brown, David K; Penkler, David L; Musyoka, Thommas M; Bishop, Özlem Tastan

    2015-01-01

    Complex computational pipelines are becoming a staple of modern scientific research. Often these pipelines are resource intensive and require days of computing time. In such cases, it makes sense to run them over high performance computing (HPC) clusters where they can take advantage of the aggregated resources of many powerful computers. In addition to this, researchers often want to integrate their workflows into their own web servers. In these cases, software is needed to manage the submission of jobs from the web interface to the cluster and then return the results once the job has finished executing. We have developed the Job Management System (JMS), a workflow management system and web interface for high performance computing (HPC). JMS provides users with a user-friendly web interface for creating complex workflows with multiple stages. It integrates this workflow functionality with the resource manager, a tool that is used to control and manage batch jobs on HPC clusters. As such, JMS combines workflow management functionality with cluster administration functionality. In addition, JMS provides developer tools including a code editor and the ability to version tools and scripts. JMS can be used by researchers from any field to build and run complex computational pipelines and provides functionality to include these pipelines in external interfaces. JMS is currently being used to house a number of bioinformatics pipelines at the Research Unit in Bioinformatics (RUBi) at Rhodes University. JMS is an open-source project and is freely available at https://github.com/RUBi-ZA/JMS.

  14. JMS: An Open Source Workflow Management System and Web-Based Cluster Front-End for High Performance Computing

    Science.gov (United States)

    Brown, David K.; Penkler, David L.; Musyoka, Thommas M.; Bishop, Özlem Tastan

    2015-01-01

    Complex computational pipelines are becoming a staple of modern scientific research. Often these pipelines are resource intensive and require days of computing time. In such cases, it makes sense to run them over high performance computing (HPC) clusters where they can take advantage of the aggregated resources of many powerful computers. In addition to this, researchers often want to integrate their workflows into their own web servers. In these cases, software is needed to manage the submission of jobs from the web interface to the cluster and then return the results once the job has finished executing. We have developed the Job Management System (JMS), a workflow management system and web interface for high performance computing (HPC). JMS provides users with a user-friendly web interface for creating complex workflows with multiple stages. It integrates this workflow functionality with the resource manager, a tool that is used to control and manage batch jobs on HPC clusters. As such, JMS combines workflow management functionality with cluster administration functionality. In addition, JMS provides developer tools including a code editor and the ability to version tools and scripts. JMS can be used by researchers from any field to build and run complex computational pipelines and provides functionality to include these pipelines in external interfaces. JMS is currently being used to house a number of bioinformatics pipelines at the Research Unit in Bioinformatics (RUBi) at Rhodes University. JMS is an open-source project and is freely available at https://github.com/RUBi-ZA/JMS. PMID:26280450

  15. Money for Research, Not for Energy Bills: Finding Energy and Cost Savings in High Performance Computer Facility Designs

    Energy Technology Data Exchange (ETDEWEB)

    Drewmark Communications; Sartor, Dale; Wilson, Mark

    2010-07-01

    High-performance computing facilities in the United States consume an enormous amount of electricity, cutting into research budgets and challenging public- and private-sector efforts to reduce energy consumption and meet environmental goals. However, these facilities can greatly reduce their energy demand through energy-efficient design of the facility itself. Using a case study of a facility under design, this article discusses strategies and technologies that can be used to help achieve energy reductions.

  16. High-Bandwidth Tactical-Network Data Analysis in a High-Performance-Computing (HPC) Environment: Packet-Level Analysis

    Science.gov (United States)

    2015-09-01

    individual fragments using the hash-based method. In general, fragments appear in order and relatively close to each other in the file. A fragment... data product derived from the data model is shown in Fig. 5, a Google Earth Keyhole Markup Language (KML) file. This product includes aggregate...

  17. The Adoption of Grid Computing Technology by Organizations: A Quantitative Study Using Technology Acceptance Model

    Science.gov (United States)

    Udoh, Emmanuel E.

    2010-01-01

    Advances in grid technology have enabled some organizations to harness enormous computational power on demand. However, the prediction of widespread adoption of the grid technology has not materialized despite the obvious grid advantages. This situation has encouraged intense efforts to close the research gap in the grid adoption process. In this…

  18. Implementation of the Principal Component Analysis onto High-Performance Computer Facilities for Hyperspectral Dimensionality Reduction: Results and Comparisons

    Directory of Open Access Journals (Sweden)

    Ernestina Martel

    2018-06-01

    Full Text Available Dimensionality reduction represents a critical preprocessing step for increasing the efficiency and the performance of many hyperspectral imaging algorithms. However, dimensionality reduction algorithms, such as the Principal Component Analysis (PCA), suffer from their computationally demanding nature, which makes their implementation on high-performance computer architectures advisable for applications under strict latency constraints. This work presents the implementation of the PCA algorithm on two different high-performance devices, namely an NVIDIA Graphics Processing Unit (GPU) and a Kalray manycore, uncovering a highly valuable set of tips and tricks for taking full advantage of the inherent parallelism of these high-performance computing platforms and hence reducing the time required to process a given hyperspectral image. Moreover, the results achieved with different hyperspectral images have been compared with those obtained with a recently published field-programmable gate array (FPGA)-based implementation of the PCA algorithm, providing, for the first time in the literature, a comprehensive analysis that highlights the pros and cons of each option.
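
    For orientation, the computation being accelerated, projecting every pixel spectrum onto the leading principal components, is compact in its serial form; a plain NumPy baseline of the kind such GPU and manycore implementations are usually measured against might look like the sketch below (illustrative only, with made-up array sizes):

        import numpy as np

        def pca_reduce(cube, n_components):
            """Reduce a hyperspectral cube (rows, cols, bands) to n_components per pixel with PCA."""
            rows, cols, bands = cube.shape
            X = cube.reshape(-1, bands).astype(np.float64)
            X -= X.mean(axis=0)                               # center each spectral band
            cov = (X.T @ X) / (X.shape[0] - 1)                # bands x bands covariance matrix
            eigvals, eigvecs = np.linalg.eigh(cov)            # eigh: the covariance matrix is symmetric
            order = np.argsort(eigvals)[::-1][:n_components]  # keep the largest-variance directions
            scores = X @ eigvecs[:, order]                    # project every pixel spectrum
            return scores.reshape(rows, cols, n_components)

        # synthetic example: 100 x 100 pixels, 224 spectral bands, keep 10 components
        cube = np.random.rand(100, 100, 224).astype(np.float32)
        reduced = pca_reduce(cube, 10)
        print(reduced.shape)   # (100, 100, 10)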

  19. High Performance Computation of a Jet in Crossflow by Lattice Boltzmann Based Parallel Direct Numerical Simulation

    Directory of Open Access Journals (Sweden)

    Jiang Lei

    2015-01-01

    Full Text Available Direct numerical simulation (DNS) of a round jet in crossflow based on the lattice Boltzmann method (LBM) is carried out on a multi-GPU cluster. The data-parallel SIMT (single instruction multiple thread) characteristic of the GPU matches the parallelism of the LBM well, which leads to the high efficiency of the GPU for the LBM solver. With the present GPU settings (6 Nvidia Tesla K20M), the present DNS simulation can be completed in several hours. A grid system of 1.5 × 10^8 cells is adopted and the largest jet Reynolds number reaches 3000. The jet-to-free-stream velocity ratio is set as 3.3. The jet is orthogonal to the mainstream flow direction. The validated code shows good agreement with experiments. Vortical structures of the CRVP, shear-layer vortices and horseshoe vortices are presented and analyzed based on velocity fields and vorticity distributions. Turbulent statistical quantities of Reynolds stress are also displayed. Coherent structures are revealed in a very fine resolution based on the second invariant of the velocity gradients.

  20. Implementation of the Two-Point Angular Correlation Function on a High-Performance Reconfigurable Computer

    Directory of Open Access Journals (Sweden)

    Volodymyr V. Kindratenko

    2009-01-01

    Full Text Available We present a parallel implementation of an algorithm for calculating the two-point angular correlation function as applied in the field of computational cosmology. The algorithm has been specifically developed for a reconfigurable computer. Our implementation utilizes a microprocessor and two reconfigurable processors on a dual-MAP SRC-6 system. The two reconfigurable processors are used as two application-specific co-processors. Two independent computational kernels are simultaneously executed on the reconfigurable processors while data pre-fetching from disk and initial data pre-processing are executed on the microprocessor. The overall end-to-end algorithm execution speedup achieved by this implementation is over 90× as compared to a sequential implementation of the algorithm executed on a single 2.8 GHz Intel Xeon microprocessor.
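
    The kernel at the heart of the two-point angular correlation function is binned pair counting over angular separations, which is what makes it attractive for application-specific co-processors; a brute-force NumPy sketch of that kernel (an O(N^2) illustration, not the SRC-6 implementation) is:

        import numpy as np

        def angular_pair_counts(ra_deg, dec_deg, bin_edges_deg):
            """Brute-force histogram of angular separations between all point pairs (O(N^2))."""
            ra, dec = np.radians(ra_deg), np.radians(dec_deg)
            # unit vectors on the sphere; their dot product gives cos(separation)
            xyz = np.column_stack((np.cos(dec) * np.cos(ra), np.cos(dec) * np.sin(ra), np.sin(dec)))
            cos_sep = np.clip(xyz @ xyz.T, -1.0, 1.0)
            sep = np.degrees(np.arccos(cos_sep))
            iu = np.triu_indices(len(ra), k=1)                 # unique pairs only
            counts, _ = np.histogram(sep[iu], bins=bin_edges_deg)
            return counts

        rng = np.random.default_rng(0)
        ra = rng.uniform(0.0, 10.0, 2000)
        dec = rng.uniform(-5.0, 5.0, 2000)
        dd = angular_pair_counts(ra, dec, np.linspace(0.0, 10.0, 21))
        # estimators such as (DD - 2*DR + RR) / RR combine these counts for data and random catalogs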

  1. Computer-Aided Chemical Product Design Framework: Design of High Performance and Environmentally Friendly Refrigerants

    DEFF Research Database (Denmark)

    Cignitti, Stefano; Zhang, Lei; Gani, Rafiqul

    properties and needs should carefully be selected for a given heat pump cycle to ensure that an optimum refrigerant is found? How can cycle performance and environmental criteria be integrated at the product design stage and not in post-design analysis? Computer-aided product design methods enable... the possibility of designing novel molecules, mixtures and blends, such as refrigerants, through a systematic framework (Cignitti et al., 2015; Yunus et al., 2014). In this presentation a computer-aided framework is presented for chemical product design through mathematical optimization. Here, molecules, mixtures... and blends are systematically designed through a decomposition-based solution method. Given a problem definition, a computer-aided molecular design (CAMD) problem is defined and formulated as a mixed integer nonlinear program (MINLP). The decomposed solution method then sequentially divides the MINLP...

  2. A Protocol for Provably Secure Authentication of a Tiny Entity to a High Performance Computing One

    Directory of Open Access Journals (Sweden)

    Siniša Tomović

    2016-01-01

    Full Text Available The problem of developing authentication protocols dedicated to a specific scenario, where an entity with limited computational capabilities should prove its identity to a computationally powerful verifier, is addressed. An authentication protocol suitable for the considered scenario, which jointly employs the learning parity with noise (LPN) problem and a paradigm of random selection, is proposed. It is shown that the proposed protocol is secure against active attacking scenarios and so-called GRS man-in-the-middle (MIM) attacking scenarios. In comparison with related previously reported authentication protocols, the proposed one reduces the implementation complexity and provides at least the same level of cryptographic security.

  3. High-performance secure multi-party computation for data mining applications

    DEFF Research Database (Denmark)

    Bogdanov, Dan; Niitsoo, Margus; Toft, Tomas

    2012-01-01

    Secure multi-party computation (MPC) is a technique well suited for privacy-preserving data mining. Even with the recent progress in two-party computation techniques such as fully homomorphic encryption, general MPC remains relevant as it has shown promising performance metrics in real... operations such as multiplication and comparison. Secondly, the confidential processing of financial data requires the use of more complex primitives, including a secure division operation. This paper describes new protocols in the Sharemind model for secure multiplication, share conversion, equality, bit...
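
    To give a concrete flavour of the kind of primitive such systems build on (a generic textbook construction, not Sharemind's actual protocols), additive secret sharing splits each private value into random shares whose sum reconstructs it; additions can then be computed locally on shares, while multiplication, comparison and division require the interactive protocols whose performance this paper addresses:

        import secrets

        MOD = 2 ** 32   # share values over the ring Z_(2^32)

        def share(value, n_parties=3):
            """Split a secret into n additive shares modulo MOD."""
            shares = [secrets.randbelow(MOD) for _ in range(n_parties - 1)]
            shares.append((value - sum(shares)) % MOD)
            return shares

        def reconstruct(shares):
            return sum(shares) % MOD

        # each party adds its shares of x and y locally; no communication, nothing revealed
        x_shares, y_shares = share(1234), share(5678)
        sum_shares = [(a + b) % MOD for a, b in zip(x_shares, y_shares)]
        assert reconstruct(sum_shares) == 1234 + 5678

        # secure multiplication, comparison and division need interactive protocols
        # (e.g. resharing or precomputed multiplication triples), which is where the
        # performance engineering discussed in the record comes in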

  4. Dynamic stability calculations for power grids employing a parallel computer

    Energy Technology Data Exchange (ETDEWEB)

    Schmidt, K

    1982-06-01

    The aim of dynamic contingency calculations in power systems is to estimate the effects of assumed disturbances, such as loss of generation. Due to the large dimensions of the problem these simulations require considerable computing time and costs, to the effect that they are at present only used at the planning stage but not for routine checks in power control stations. In view of the homogeneity of the problem, where a multitude of equal generator models, having different parameters, are to be integrated simultaneously, the use of a parallel computer looks very attractive. The results of this study employing a prototype parallel computer (SMS 201) are presented. It consists of up to 128 equal microcomputers bus-connected to a control computer. Each of the modules is programmed to simulate a node of the power grid. Generators with their associated controls are represented by models of 13 states each. Passive nodes are complemented by 'phantom' generators, so that the whole power grid is homogeneous, thus removing the need for load-flow iterations. Programming of the microcomputers is essentially performed in FORTRAN.
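
    The homogeneity exploited here, many identical generator models differing only in their parameters, maps equally well onto vectorized code today; the sketch below integrates a much-simplified classical swing-equation model (two states per machine rather than the 13-state models of the study, with a made-up reduced admittance matrix) purely to illustrate the structure of the computation:

        import numpy as np

        def simulate_swing(delta0, omega0, Pm, M, D, E, Y, dt, n_steps):
            """Explicit-Euler integration of the classical swing equations for many generators at once.

            delta, omega : rotor angle and speed deviation of every generator
            Pm, M, D, E  : mechanical power, inertia, damping and internal EMF magnitude (arrays)
            Y            : reduced network admittance matrix (complex, n x n)
            """
            delta, omega = delta0.astype(float).copy(), omega0.astype(float).copy()
            history = []
            for _ in range(n_steps):
                V = E * np.exp(1j * delta)                 # internal voltage phasors
                Pe = np.real(V * np.conj(Y @ V))           # electrical power through the reduced network
                omega += dt * (Pm - Pe - D * omega) / M    # all machines advanced in one vectorized step
                delta += dt * omega
                history.append(delta.copy())
            return np.array(history)

        # toy 3-machine system with an arbitrary, made-up reduced admittance matrix
        n = 3
        Y = np.full((n, n), -0.5j) + np.diag([1.5j] * n)
        hist = simulate_swing(np.zeros(n), np.zeros(n), Pm=np.array([0.10, 0.20, 0.15]),
                              M=np.ones(n), D=0.1 * np.ones(n), E=np.ones(n), Y=Y, dt=0.01, n_steps=500)
        print(hist.shape)   # (500, 3) rotor-angle trajectories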

  5. The Use of High Performance Computing (HPC) to Strengthen the Development of Army Systems

    Science.gov (United States)

    2011-11-01

    changes in what the warfighter wants – in the middle of an acquisition cycle such changes create havoc in terms of delays, recycling of the research... A little bit later the first personal computers (PCs) came on the market, mostly as curiosities. The operating systems were either MS-DOS or CP/M

  6. A High Performance Computing Framework for Physics-based Modeling and Simulation of Military Ground Vehicles

    Science.gov (United States)

    2011-03-25

    cluster. The co-processing idea is the enabler of the heterogeneous computing concept advertised recently as the paradigm capable of delivering exascale... Petascale to Exascale: Extending Intel’s HPC Commitment: http://download.intel.com/pressroom/archive/reference/ISC_2010_Skaugen_keynote.pdf in

  7. Π4U: A high performance computing framework for Bayesian uncertainty quantification of complex models

    Science.gov (United States)

    Hadjidoukas, P. E.; Angelikopoulos, P.; Papadimitriou, C.; Koumoutsakos, P.

    2015-03-01

    We present Π4U, an extensible framework for non-intrusive Bayesian Uncertainty Quantification and Propagation (UQ+P) of complex and computationally demanding physical models that can exploit massively parallel computer architectures. The framework incorporates Laplace asymptotic approximations as well as stochastic algorithms, along with distributed numerical differentiation and task-based parallelism for heterogeneous clusters. Sampling is based on the Transitional Markov Chain Monte Carlo (TMCMC) algorithm and its variants. The optimization tasks associated with the asymptotic approximations are treated via the Covariance Matrix Adaptation Evolution Strategy (CMA-ES). A modified subset simulation method is used for posterior reliability measurements of rare events. The framework accommodates scheduling of multiple physical model evaluations based on an adaptive load balancing library and shows excellent scalability. In addition to the software framework, we also provide guidelines as to the applicability and efficiency of Bayesian tools when applied to computationally demanding physical models. Theoretical and computational developments are demonstrated with applications drawn from molecular dynamics, structural dynamics and granular flow.

  8. Π4U: A high performance computing framework for Bayesian uncertainty quantification of complex models

    International Nuclear Information System (INIS)

    Hadjidoukas, P.E.; Angelikopoulos, P.; Papadimitriou, C.; Koumoutsakos, P.

    2015-01-01

    We present Π4U, an extensible framework for non-intrusive Bayesian Uncertainty Quantification and Propagation (UQ+P) of complex and computationally demanding physical models that can exploit massively parallel computer architectures. The framework incorporates Laplace asymptotic approximations as well as stochastic algorithms, along with distributed numerical differentiation and task-based parallelism for heterogeneous clusters. Sampling is based on the Transitional Markov Chain Monte Carlo (TMCMC) algorithm and its variants. The optimization tasks associated with the asymptotic approximations are treated via the Covariance Matrix Adaptation Evolution Strategy (CMA-ES). A modified subset simulation method is used for posterior reliability measurements of rare events. The framework accommodates scheduling of multiple physical model evaluations based on an adaptive load balancing library and shows excellent scalability. In addition to the software framework, we also provide guidelines as to the applicability and efficiency of Bayesian tools when applied to computationally demanding physical models. Theoretical and computational developments are demonstrated with applications drawn from molecular dynamics, structural dynamics and granular flow.

  9. Achieving high performance in numerical computations on RISC workstations and parallel systems

    Energy Technology Data Exchange (ETDEWEB)

    Goedecker, S. [Max-Planck Inst. for Solid State Research, Stuttgart (Germany); Hoisie, A. [Los Alamos National Lab., NM (United States)

    1997-08-20

    The nominal peak speeds of both serial and parallel computers are rising rapidly. At the same time, however, it is becoming increasingly difficult to extract a significant fraction of this high peak speed from modern computer architectures. In this tutorial the authors give scientists and engineers involved in numerically demanding calculations and simulations the basic knowledge needed to write reasonably efficient programs. The basic principles are rather simple and the possible rewards large. Writing a program by taking into account optimization techniques related to the computer architecture can significantly speed up a program, often by factors of 10--100. As such, optimizing a program can, for instance, be a much better solution than buying a faster computer. If a few basic optimization principles are applied during program development, the additional time needed to obtain an efficient program is practically negligible. In-depth optimization is usually only needed for a few subroutines or kernels and the effort involved is therefore also acceptable.
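
    One of the simplest architecture-related effects the tutorial has in mind, the cost of striding across the memory hierarchy instead of streaming through it, can be seen even from a high-level language; the sketch below is a rough illustration of the principle (timings will vary by machine) and is not taken from the tutorial:

        import time
        import numpy as np

        a = np.random.rand(6000, 6000)   # C-ordered: each row is contiguous in memory

        def best_time(fn, repeats=5):
            """Best wall-clock time over a few repeats, to damp out noise."""
            times = []
            for _ in range(repeats):
                t0 = time.perf_counter()
                fn()
                times.append(time.perf_counter() - t0)
            return min(times)

        # Copying the matrix in its natural layout streams through memory sequentially;
        # materialising its transpose forces strided reads that work against the cache.
        t_stream = best_time(lambda: a.copy())
        t_stride = best_time(lambda: np.ascontiguousarray(a.T))
        print(f"sequential copy: {t_stream * 1e3:7.1f} ms")
        print(f"strided copy:    {t_stride * 1e3:7.1f} ms  (~{t_stride / t_stream:.1f}x slower here)")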

  10. Π4U: A high performance computing framework for Bayesian uncertainty quantification of complex models

    Energy Technology Data Exchange (ETDEWEB)

    Hadjidoukas, P.E.; Angelikopoulos, P. [Computational Science and Engineering Laboratory, ETH Zürich, CH-8092 (Switzerland); Papadimitriou, C. [Department of Mechanical Engineering, University of Thessaly, GR-38334 Volos (Greece); Koumoutsakos, P., E-mail: petros@ethz.ch [Computational Science and Engineering Laboratory, ETH Zürich, CH-8092 (Switzerland)

    2015-03-01

    We present Π4U, an extensible framework for non-intrusive Bayesian Uncertainty Quantification and Propagation (UQ+P) of complex and computationally demanding physical models that can exploit massively parallel computer architectures. The framework incorporates Laplace asymptotic approximations as well as stochastic algorithms, along with distributed numerical differentiation and task-based parallelism for heterogeneous clusters. Sampling is based on the Transitional Markov Chain Monte Carlo (TMCMC) algorithm and its variants. The optimization tasks associated with the asymptotic approximations are treated via the Covariance Matrix Adaptation Evolution Strategy (CMA-ES). A modified subset simulation method is used for posterior reliability measurements of rare events. The framework accommodates scheduling of multiple physical model evaluations based on an adaptive load balancing library and shows excellent scalability. In addition to the software framework, we also provide guidelines as to the applicability and efficiency of Bayesian tools when applied to computationally demanding physical models. Theoretical and computational developments are demonstrated with applications drawn from molecular dynamics, structural dynamics and granular flow.

  11. High Performance Parallel Processing Project: Industrial computing initiative. Progress reports for fiscal year 1995

    Energy Technology Data Exchange (ETDEWEB)

    Koniges, A.

    1996-02-09

    This project is a package of 11 individual CRADAs plus hardware. This innovative project established a three-year multi-party collaboration that is significantly accelerating the availability of commercial massively parallel processing computing software technology to U.S. government, academic, and industrial end-users. This report contains individual presentations from nine principal investigators along with overall program information.

  12. Department of Defense High Performance Computing Modernization Program. 2008 Annual Report

    Science.gov (United States)

    2009-04-01

    Environment, Doug Post and the CREATE Team (K. Hill, D. van Veldhuizen , G. Zelinski, AFRL; S. Arevalo, T. Blacker, D. Fisher, P. Genalis, A. Harris, M...RF Antenna Group, David van Veldhuizen Computational Research and Engineering Acquisition Tools and Environments (CREATE), David Fisher and

  13. A high performance computing framework for physics-based modeling and simulation of military ground vehicles

    Science.gov (United States)

    Negrut, Dan; Lamb, David; Gorsich, David

    2011-06-01

    This paper describes a software infrastructure made up of tools and libraries designed to assist developers in implementing computational dynamics applications running on heterogeneous and distributed computing environments. Together, these tools and libraries compose a so-called Heterogeneous Computing Template (HCT). The heterogeneous and distributed computing hardware infrastructure is assumed herein to be made up of a combination of CPUs and Graphics Processing Units (GPUs). The computational dynamics applications targeted to execute on such a hardware topology include many-body dynamics, smoothed-particle hydrodynamics (SPH) fluid simulation, and fluid-solid interaction analysis. The underlying theme of the solution approach embraced by HCT is that of partitioning the domain of interest into a number of subdomains that are each managed by a separate core/accelerator (CPU/GPU) pair. The components at the core of HCT that enable the envisioned distributed computing approach to large-scale dynamical system simulation include: (a) the ability to partition the problem according to the one-to-one mapping, i.e., the spatial subdivision discussed above (pre-processing); (b) a protocol for passing data between any two co-processors; (c) algorithms for element proximity computation; and (d) the ability to carry out post-processing in a distributed fashion. In this contribution the components (a) and (b) of the HCT are demonstrated via the example of the Discrete Element Method (DEM) for rigid body dynamics with friction and contact. The collision detection task required in frictional-contact dynamics (task (c) above) is shown to benefit from a two-order-of-magnitude efficiency gain on the GPU when compared to traditional sequential implementations. Note: Reference herein to any specific commercial products, process, or service by trade name, trademark, manufacturer, or otherwise does not imply its endorsement, recommendation, or favoring by the United States Army. The views and

  14. High-performance parallel computing in the classroom using the public goods game as an example

    Science.gov (United States)

    Perc, Matjaž

    2017-07-01

    The use of computers in statistical physics is common because the sheer number of equations that describe the behaviour of an entire system particle by particle often makes it impossible to solve them exactly. Monte Carlo methods form a particularly important class of numerical methods for solving problems in statistical physics. Although these methods are simple in principle, their proper use requires a good command of statistical mechanics, as well as considerable computational resources. The aim of this paper is to demonstrate how the usage of widely accessible graphics cards on personal computers can elevate the computing power in Monte Carlo simulations by orders of magnitude, thus allowing live classroom demonstration of phenomena that would otherwise be out of reach. As an example, we use the public goods game on a square lattice where two strategies compete for common resources in a social dilemma situation. We show that the second-order phase transition to an absorbing phase in the system belongs to the directed percolation universality class, and we compare the time needed to arrive at this result by means of the main processor and by means of a suitable graphics card. Parallel computing on graphics processing units has been developed actively during the last decade, to the point where today the learning curve for entry is anything but steep for those familiar with programming. The subject is thus ripe for inclusion in graduate and advanced undergraduate curricula, and we hope that this paper will facilitate this process in the realm of physics education. To that end, we provide a documented source code for an easy reproduction of presented results and for further development of Monte Carlo simulations of similar systems.
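
    A serial NumPy version of the model described, cooperators and defectors on a periodic square lattice with Fermi-rule imitation, fits in a few dozen lines and is the natural baseline before porting the update loop to a graphics card; the sketch below is a generic textbook implementation with illustrative parameter values, not the documented source code accompanying the paper:

        import numpy as np

        L, r, K = 100, 3.8, 0.5              # lattice size, synergy factor, noise in the Fermi rule
        rng = np.random.default_rng(0)
        S = rng.integers(0, 2, size=(L, L))  # strategy field: 1 = cooperator, 0 = defector
        SHIFTS = [(0, 0), (1, 0), (-1, 0), (1, 1), (-1, 1)]   # a site and its four von Neumann neighbours

        def payoffs(S):
            """Total payoff of each player from the five 5-member groups it belongs to (periodic lattice)."""
            nc = sum(np.roll(S, s, axis=a) for s, a in SHIFTS)   # cooperators in the group centered at each site
            gain_c = r * nc / 5.0 - 1.0                          # a cooperator's income from that group
            gain_d = r * nc / 5.0                                # a defector's income from that group
            P = np.zeros((L, L))
            for s, a in SHIFTS:                                  # collect income from all groups the player joins
                P += np.where(S == 1, np.roll(gain_c, s, axis=a), np.roll(gain_d, s, axis=a))
            return P

        def monte_carlo_step(S):
            """One Monte Carlo step: L*L random imitation attempts with the Fermi update rule."""
            P = payoffs(S)   # refreshed once per sweep for brevity; a strict asynchronous update
                             # would recompute the affected payoffs after every imitation event
            for _ in range(L * L):
                i, j = rng.integers(L), rng.integers(L)
                di, dj = ((-1, 0), (1, 0), (0, -1), (0, 1))[rng.integers(4)]
                ni, nj = (i + di) % L, (j + dj) % L
                if rng.random() < 1.0 / (1.0 + np.exp((P[i, j] - P[ni, nj]) / K)):
                    S[i, j] = S[ni, nj]
            return S

        for _ in range(200):
            S = monte_carlo_step(S)
        print("cooperator fraction:", S.mean())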

  15. STEMsalabim: A high-performance computing cluster friendly code for scanning transmission electron microscopy image simulations of thin specimens

    International Nuclear Information System (INIS)

    Oelerich, Jan Oliver; Duschek, Lennart; Belz, Jürgen; Beyer, Andreas; Baranovskii, Sergei D.; Volz, Kerstin

    2017-01-01

    Highlights: • We present STEMsalabim, a modern implementation of the multislice algorithm for simulation of STEM images. • Our package is highly parallelizable on high-performance computing clusters, combining shared and distributed memory architectures. • With STEMsalabim, computationally and memory expensive STEM image simulations can be carried out within reasonable time. - Abstract: We present a new multislice code for the computer simulation of scanning transmission electron microscope (STEM) images based on the frozen lattice approximation. Unlike existing software packages, the code is optimized to perform well on highly parallelized computing clusters, combining distributed and shared memory architectures. This enables efficient calculation of large lateral scanning areas of the specimen within the frozen lattice approximation and fine-grained sweeps of parameter space.
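
    As background, the multislice algorithm named in the highlights alternates transmission of the wave function through a thin slice of the specimen with Fresnel propagation to the next slice, both done with FFTs; the schematic NumPy sketch below (with illustrative, non-physical parameter values) conveys the structure but is not code from the STEMsalabim package:

        import numpy as np

        def multislice(psi, transmission, wavelength, dz, pixel_size):
            """Propagate a probe wave function through a stack of transmission functions (one per slice)."""
            n = psi.shape[0]
            k = np.fft.fftfreq(n, d=pixel_size)                      # spatial frequencies (1/length)
            kx, ky = np.meshgrid(k, k, indexing="ij")
            propagator = np.exp(-1j * np.pi * wavelength * dz * (kx**2 + ky**2))  # Fresnel step between slices
            for t in transmission:                                   # t = exp(i * sigma * projected potential)
                psi = np.fft.ifft2(np.fft.fft2(psi * t) * propagator)
            return psi

        # toy example: a featureless 256x256 probe through 20 identical weak phase slices
        n, n_slices = 256, 20
        psi0 = np.ones((n, n), dtype=complex) / n                    # normalized plane-wave "probe"
        phase = 0.05 * np.random.rand(n, n)                          # made-up projected-potential phase shifts
        slices = [np.exp(1j * phase)] * n_slices
        exit_wave = multislice(psi0, slices, wavelength=2.0e-12, dz=2.0e-10, pixel_size=1.0e-11)
        intensity = np.abs(np.fft.fft2(exit_wave))**2                # diffraction-plane intensity, as seen by STEM detectors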

  16. STEMsalabim: A high-performance computing cluster friendly code for scanning transmission electron microscopy image simulations of thin specimens

    Energy Technology Data Exchange (ETDEWEB)

    Oelerich, Jan Oliver, E-mail: jan.oliver.oelerich@physik.uni-marburg.de; Duschek, Lennart; Belz, Jürgen; Beyer, Andreas; Baranovskii, Sergei D.; Volz, Kerstin

    2017-06-15

    Highlights: • We present STEMsalabim, a modern implementation of the multislice algorithm for simulation of STEM images. • Our package is highly parallelizable on high-performance computing clusters, combining shared and distributed memory architectures. • With STEMsalabim, computationally and memory expensive STEM image simulations can be carried out within reasonable time. - Abstract: We present a new multislice code for the computer simulation of scanning transmission electron microscope (STEM) images based on the frozen lattice approximation. Unlike existing software packages, the code is optimized to perform well on highly parallelized computing clusters, combining distributed and shared memory architectures. This enables efficient calculation of large lateral scanning areas of the specimen within the frozen lattice approximation and fine-grained sweeps of parameter space.

  17. Contributing to the design of run-time systems dedicated to high performance computing

    International Nuclear Information System (INIS)

    Perache, M.

    2006-10-01

    In the field of intensive scientific computing, the quest for performance has to face the increasing complexity of parallel architectures. Nowadays, these machines exhibit a deep memory hierarchy which complicates the design of efficient parallel applications. This thesis proposes a programming environment for designing efficient parallel programs on top of clusters of multi-processors. It features a programming model centered around collective communications and synchronizations, and provides load balancing facilities. The programming interface, named MPC, provides high level paradigms which are optimized according to the underlying architecture. The environment is fully functional and used within the CEA/DAM (TERANOVA) computing center. The evaluations presented in this document confirm the relevance of our approach. (author)

  18. Open source acceleration of wave optics simulations on energy efficient high-performance computing platforms

    Science.gov (United States)

    Beck, Jeffrey; Bos, Jeremy P.

    2017-05-01

    We compare several modifications to the open-source wave optics package, WavePy, intended to improve execution time. Specifically, we compare the relative performance of the Intel MKL, a CPU-based OpenCV distribution, and a GPU-based version. Performance is compared between distributions both on the same compute platform and between a fully-featured computing workstation and the NVIDIA Jetson TX1 platform. Comparisons are drawn in terms of both execution time and power consumption. We have found that substituting the Fast Fourier Transform operation from OpenCV provides a marked improvement on all platforms. In addition, we show that embedded platforms offer some possibility for extensive improvement in terms of efficiency compared to a fully featured workstation.

  19. Heterogeneous Gpu&Cpu Cluster For High Performance Computing In Cryptography

    Directory of Open Access Journals (Sweden)

    Michał Marks

    2012-01-01

    Full Text Available This paper addresses issues associated with distributed computing systems and the application of mixed GPU&CPU technology to data encryption and decryption algorithms. We describe a heterogeneous cluster HGCC formed by two types of nodes: Intel processor with NVIDIA graphics processing unit and AMD processor with AMD graphics processing unit (formerly ATI), and a novel software framework that hides the heterogeneity of our cluster and provides tools for solving complex scientific and engineering problems. Finally, we present the results of numerical experiments. The considered case study is concerned with parallel implementations of selected cryptanalysis algorithms. The main goal of the paper is to show the wide applicability of the GPU&CPU technology to large scale computation and data processing.

  20. LIAR: A COMPUTER PROGRAM FOR THE SIMULATION AND MODELING OF HIGH PERFORMANCE LINACS

    International Nuclear Information System (INIS)

    Adolphsen, Chris

    2003-01-01

    The computer program LIAR ("LInear Accelerator Research code") is a numerical simulation and tracking program for linear colliders. The LIAR project was started at SLAC in August 1995 in order to provide a computing and simulation tool that specifically addresses the needs of high energy linear colliders. LIAR is designed to be used for a variety of different linear accelerators. It has been applied to and checked against the existing Stanford Linear Collider (SLC) as well as the linacs of the proposed Next Linear Collider (NLC) and the proposed Linac Coherent Light Source (LCLS). The program includes wakefield effects, a 4D coupled beam description, specific optimization algorithms and other advanced features. We describe the most important concepts and highlights of the program. After having presented the LIAR program at the LINAC96 and PAC97 conferences, we now introduce it to the European particle accelerator community.

  1. Quo vadis: Hydrologic inverse analyses using high-performance computing and a D-Wave quantum annealer

    Science.gov (United States)

    O'Malley, D.; Vesselinov, V. V.

    2017-12-01

    Classical microprocessors have had a dramatic impact on hydrology for decades, due largely to the exponential growth in computing power predicted by Moore's law. However, this growth is not expected to continue indefinitely and has already begun to slow. Quantum computing is an emerging alternative to classical microprocessors. Here, we demonstrated cutting edge inverse model analyses utilizing some of the best available resources in both worlds: high-performance classical computing and a D-Wave quantum annealer. The classical high-performance computing resources are utilized to build an advanced numerical model that assimilates data from O(10^5) observations, including water levels, drawdowns, and contaminant concentrations. The developed model accurately reproduces the hydrologic conditions at a Los Alamos National Laboratory contamination site, and can be leveraged to inform decision-making about site remediation. We demonstrate the use of a D-Wave 2X quantum annealer to solve hydrologic inverse problems. This work can be seen as an early step in quantum-computational hydrology. We compare and contrast our results with an early inverse approach in classical-computational hydrology that is comparable to the approach we use with quantum annealing. Our results show that quantum annealing can be useful for identifying regions of high and low permeability within an aquifer. While the problems we consider are small-scale compared to the problems that can be solved with modern classical computers, they are large compared to the problems that could be solved with early classical CPUs. Further, the binary nature of the high/low permeability problem makes it well-suited to quantum annealing, but challenging for classical computers.

  2. AHPCRC (Army High Performance Computing Research Center) Bulletin. Volume 1, Issue 2

    Science.gov (United States)

    2011-01-01

    area and the researchers working on these projects. Also inside: news from the AHPCRC consortium partners at Morgan State University and the NASA ...Computing Research Center is provided by the supercomputing and research facilities at Stanford University and at the NASA Ames Research Center at...atomic and molecular level, he said. He noted that “every general would like to have” a Star Trek -like holodeck, where holographic avatars could

  3. Tracking the NGS revolution: managing life science research on shared high-performance computing clusters.

    Science.gov (United States)

    Dahlö, Martin; Scofield, Douglas G; Schaal, Wesley; Spjuth, Ola

    2018-05-01

    Next-generation sequencing (NGS) has transformed the life sciences, and many research groups are newly dependent upon computer clusters to store and analyze large datasets. This creates challenges for e-infrastructures accustomed to hosting computationally mature research in other sciences. Using data gathered from our own clusters at UPPMAX computing center at Uppsala University, Sweden, where core hour usage of ∼800 NGS and ∼200 non-NGS projects is now similar, we compare and contrast the growth, administrative burden, and cluster usage of NGS projects with projects from other sciences. The number of NGS projects has grown rapidly since 2010, with growth driven by entry of new research groups. Storage used by NGS projects has grown more rapidly since 2013 and is now limited by disk capacity. NGS users submit nearly twice as many support tickets per user, and 11 more tools are installed each month for NGS projects than for non-NGS projects. We developed usage and efficiency metrics and show that computing jobs for NGS projects use more RAM than non-NGS projects, are more variable in core usage, and rarely span multiple nodes. NGS jobs use booked resources less efficiently for a variety of reasons. Active monitoring can improve this somewhat. Hosting NGS projects imposes a large administrative burden at UPPMAX due to large numbers of inexperienced users and diverse and rapidly evolving research areas. We provide a set of recommendations for e-infrastructures that host NGS research projects. We provide anonymized versions of our storage, job, and efficiency databases.
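
    Efficiency metrics of the kind described can be approximated from ordinary batch-system accounting records. The sketch below is a hypothetical illustration only; the record fields and the "used versus booked core-hours" definition are assumptions, not the UPPMAX schema or metrics.

```python
from collections import defaultdict

# Hypothetical accounting records: cores booked, wall-clock hours, and the
# average number of cores actually kept busy during each job.
jobs = [
    {"project": "ngs_a",   "cores": 16, "hours": 10.0, "avg_cores_used": 4.2},
    {"project": "ngs_a",   "cores": 8,  "hours": 2.5,  "avg_cores_used": 7.5},
    {"project": "physics", "cores": 64, "hours": 30.0, "avg_cores_used": 60.1},
]

def efficiency_by_project(jobs):
    booked = defaultdict(float)
    used = defaultdict(float)
    for j in jobs:
        booked[j["project"]] += j["cores"] * j["hours"]           # booked core-hours
        used[j["project"]] += j["avg_cores_used"] * j["hours"]    # core-hours actually used
    return {p: used[p] / booked[p] for p in booked}

for project, eff in sorted(efficiency_by_project(jobs).items()):
    print(f"{project:10s} efficiency = {eff:5.1%}")
```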

  4. Air Force Science & Technology Issues & Opportunities Regarding High Performance Embedded Computing

    Science.gov (United States)

    2009-09-23

    price-performance advantage include: large scale simulations of neuromorphic computing models, GOTCHA radar video SAR for wide area persistent...the handcuffs were not for me and that the military had so far got … Neuromorphic example: Robust recognition of occluded text; Gotcha SAR PCID Image...[slide residue: notional architecture of 16 cores / chip, 10 x 10 stacks / board, 50 chips / stack, with EDRAM and FPGA elements]

  5. High performance graphics processor based computed tomography reconstruction algorithms for nuclear and other large scale applications.

    Energy Technology Data Exchange (ETDEWEB)

    Jimenez, Edward S. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Orr, Laurel J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Thompson, Kyle R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2013-09-01

    The goal of this work is to develop a fast computed tomography (CT) reconstruction algorithm based on graphics processing units (GPU) that achieves significant improvement over traditional central processing unit (CPU) based implementations. The main challenge in developing a CT algorithm that is capable of handling very large datasets is parallelizing the algorithm in such a way that data transfer does not hinder performance of the reconstruction algorithm. General-purpose computing on graphics processing units (GPGPU) is a new technology that the Science and Technology (S&T) community is starting to adopt in many fields where CPU-based computing is the norm. GPGPU programming requires a new approach to algorithm development that utilizes massively multi-threaded environments. Multi-threaded algorithms in general are difficult to optimize since performance bottlenecks occur that are non-existent in single-threaded algorithms such as memory latencies. If an efficient GPU-based CT reconstruction algorithm can be developed, computational times could be improved by a factor of 20. Additionally, cost benefits will be realized as commodity graphics hardware could potentially replace expensive supercomputers and high-end workstations. This project will take advantage of the CUDA programming environment and attempt to parallelize the task in such a way that multiple slices of the reconstruction volume are computed simultaneously. This work will also take advantage of the GPU memory by utilizing asynchronous memory transfers, GPU texture memory, and (when possible) pinned host memory so that the memory transfer bottleneck inherent to GPGPU is amortized. Additionally, this work will take advantage of GPU-specific hardware (i.e. fast texture memory, pixel-pipelines, hardware interpolators, and varying memory hierarchy) that will allow for additional performance improvements.
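
    The slice-level parallelism described above can be mimicked on a CPU to make the idea concrete. The sketch below is a generic stand-in, not the Sandia CUDA implementation: a simple unfiltered parallel-beam backprojection of each slice, distributed over worker processes; function names, sizes, and the synthetic sinograms are assumptions.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def backproject_slice(sinogram, angles_deg, size):
    """Simple unfiltered parallel-beam backprojection of one 2D slice."""
    angles = np.deg2rad(angles_deg)
    n_det = sinogram.shape[1]
    coords = np.arange(size) - (size - 1) / 2.0          # image grid, centred on axis
    x, y = np.meshgrid(coords, coords)
    det = np.arange(n_det) - (n_det - 1) / 2.0
    image = np.zeros((size, size))
    for proj, theta in zip(sinogram, angles):
        t = x * np.cos(theta) + y * np.sin(theta)        # detector position of each pixel
        image += np.interp(t, det, proj, left=0.0, right=0.0)
    return image * np.pi / len(angles)

def reconstruct_volume(sinograms, angles_deg, size, workers=4):
    """Reconstruct many slices concurrently -- a CPU stand-in for GPU slice parallelism."""
    with ProcessPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(backproject_slice, s, angles_deg, size) for s in sinograms]
        return np.stack([f.result() for f in futures])

if __name__ == "__main__":
    angles = np.arange(0, 180, 1.0)
    sinos = np.random.rand(8, len(angles), 128)          # 8 slices of synthetic projections
    volume = reconstruct_volume(sinos, angles, size=128)
    print(volume.shape)                                  # (8, 128, 128)
```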

  6. Tracking the NGS revolution: managing life science research on shared high-performance computing clusters

    Science.gov (United States)

    2018-01-01

    Abstract Background Next-generation sequencing (NGS) has transformed the life sciences, and many research groups are newly dependent upon computer clusters to store and analyze large datasets. This creates challenges for e-infrastructures accustomed to hosting computationally mature research in other sciences. Using data gathered from our own clusters at UPPMAX computing center at Uppsala University, Sweden, where core hour usage of ∼800 NGS and ∼200 non-NGS projects is now similar, we compare and contrast the growth, administrative burden, and cluster usage of NGS projects with projects from other sciences. Results The number of NGS projects has grown rapidly since 2010, with growth driven by entry of new research groups. Storage used by NGS projects has grown more rapidly since 2013 and is now limited by disk capacity. NGS users submit nearly twice as many support tickets per user, and 11 more tools are installed each month for NGS projects than for non-NGS projects. We developed usage and efficiency metrics and show that computing jobs for NGS projects use more RAM than non-NGS projects, are more variable in core usage, and rarely span multiple nodes. NGS jobs use booked resources less efficiently for a variety of reasons. Active monitoring can improve this somewhat. Conclusions Hosting NGS projects imposes a large administrative burden at UPPMAX due to large numbers of inexperienced users and diverse and rapidly evolving research areas. We provide a set of recommendations for e-infrastructures that host NGS research projects. We provide anonymized versions of our storage, job, and efficiency databases. PMID:29659792

  7. High-Throughput Computing on High-Performance Platforms: A Case Study

    Energy Technology Data Exchange (ETDEWEB)

    Oleynik, D [University of Texas at Arlington; Panitkin, S [Brookhaven National Laboratory (BNL); Matteo, Turilli [Rutgers University; Angius, Alessio [Rutgers University; Oral, H Sarp [ORNL; De, K [University of Texas at Arlington; Klimentov, A [Brookhaven National Laboratory (BNL); Wells, Jack C. [ORNL; Jha, S [Rutgers University

    2017-10-01

    The computing systems used by LHC experiments have historically consisted of the federation of hundreds to thousands of distributed resources, ranging from small to mid-size. In spite of the impressive scale of the existing distributed computing solutions, the federation of small to mid-size resources will be insufficient to meet projected future demands. This paper is a case study of how the ATLAS experiment has embraced Titan -- a DOE leadership facility -- in conjunction with traditional distributed high-throughput computing to reach sustained production scales of approximately 52M core-hours a year. The three main contributions of this paper are: (i) a critical evaluation of design and operational considerations to support the sustained, scalable and production usage of Titan; (ii) a preliminary characterization of a next generation executor for PanDA to support new workloads and advanced execution modes; and (iii) early lessons for how current and future experimental and observational systems can be integrated with production supercomputers and other platforms in a general and extensible manner.

  8. FPGA hardware acceleration for high performance neutron transport computation based on agent methodology - 318

    International Nuclear Information System (INIS)

    Shanjie, Xiao; Tatjana, Jevremovic

    2010-01-01

    The accurate, detailed and 3D neutron transport analysis for Gen-IV reactors is still time-consuming regardless of the advanced computational hardware available in developed countries. This paper introduces a new concept in addressing the computational time while preserving the detailed and accurate modeling; a specifically designed FPGA co-processor accelerates the robust AGENT methodology for complex reactor geometries. For the first time this approach is applied to accelerate the neutronics analysis. The AGENT methodology solves the neutron transport equation using the method of characteristics. The AGENT methodology performance was carefully analyzed before the hardware design based on the FPGA co-processor was adopted. The most time-consuming kernel part is then transplanted into the FPGA co-processor. The FPGA co-processor is designed with a data-flow-driven non von-Neumann architecture and has much higher efficiency than the conventional computer architecture. Details of the FPGA co-processor design are introduced and the design is benchmarked using two different examples. The advanced chip architecture helps the FPGA co-processor obtain a more than 20-times speedup with its working frequency much lower than the CPU frequency. (authors)

  9. An Interactive, Web-based High Performance Modeling Environment for Computational Epidemiology.

    Science.gov (United States)

    Deodhar, Suruchi; Bisset, Keith R; Chen, Jiangzhuo; Ma, Yifei; Marathe, Madhav V

    2014-07-01

    We present an integrated interactive modeling environment to support public health epidemiology. The environment combines a high resolution individual-based model with a user-friendly web-based interface that allows analysts to access the models and the analytics back-end remotely from a desktop or a mobile device. The environment is based on a loosely-coupled service-oriented-architecture that allows analysts to explore various counter factual scenarios. As the modeling tools for public health epidemiology are getting more sophisticated, it is becoming increasingly hard for non-computational scientists to effectively use the systems that incorporate such models. Thus an important design consideration for an integrated modeling environment is to improve ease of use such that experimental simulations can be driven by the users. This is achieved by designing intuitive and user-friendly interfaces that allow users to design and analyze a computational experiment and steer the experiment based on the state of the system. A key feature of a system that supports this design goal is the ability to start, stop, pause and roll-back the disease propagation and intervention application process interactively. An analyst can access the state of the system at any point in time and formulate dynamic interventions based on additional information obtained through state assessment. In addition, the environment provides automated services for experiment set-up and management, thus reducing the overall time for conducting end-to-end experimental studies. We illustrate the applicability of the system by describing computational experiments based on realistic pandemic planning scenarios. The experiments are designed to demonstrate the system's capability and enhanced user productivity.

  10. HEP Computing Tools, Grid and Supercomputers for Genome Sequencing Studies

    Science.gov (United States)

    De, K.; Klimentov, A.; Maeno, T.; Mashinistov, R.; Novikov, A.; Poyda, A.; Tertychnyy, I.; Wenaus, T.

    2017-10-01

    PanDA - the Production and Distributed Analysis Workload Management System - has been developed to address the data processing and analysis challenges of the ATLAS experiment at the LHC. Recently PanDA has been extended to run HEP scientific applications on Leadership Class Facilities and supercomputers. The success of the projects to use PanDA beyond HEP and the Grid has drawn attention from other compute intensive sciences such as bioinformatics. Recent advances of Next Generation Genome Sequencing (NGS) technology led to increasing streams of sequencing data that need to be processed, analysed and made available for bioinformaticians worldwide. Analysis of genome sequencing data using the popular software pipeline PALEOMIX can take a month even when running on a powerful computing resource. In this paper we describe the adaptation of the PALEOMIX pipeline to run on a distributed computing environment powered by PanDA. To run the pipeline we split the input files into chunks which are processed separately on different nodes as separate PALEOMIX inputs, and finally merge the output files; this is very similar to how ATLAS processes and simulates data. We dramatically decreased the total walltime thanks to automated job (re)submission and brokering within PanDA. Using software tools developed initially for HEP and the Grid can reduce payload execution time for mammoth DNA samples from weeks to days.
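
    The split/process/merge pattern described above maps onto any job-parallel backend. The sketch below is a generic illustration only, using a local process pool in place of PanDA brokering; `run_paleomix_on_chunk`, the chunk size, and the input file name are hypothetical placeholders, not the real pipeline invocation.

```python
from concurrent.futures import ProcessPoolExecutor
from pathlib import Path

def _write_chunk(path, idx, lines):
    out = Path(f"{path}.chunk{idx}")
    out.write_text("".join(lines))
    return out

def split_input(path, lines_per_chunk=100_000):
    """Split a large text input (e.g. FASTQ-like) into chunk files."""
    chunks, buf, idx = [], [], 0
    with open(path) as fh:
        for line in fh:
            buf.append(line)
            if len(buf) >= lines_per_chunk:
                chunks.append(_write_chunk(path, idx, buf))
                buf, idx = [], idx + 1
        if buf:
            chunks.append(_write_chunk(path, idx, buf))
    return chunks

def run_paleomix_on_chunk(chunk):
    """Hypothetical placeholder: in the real system this would submit a PanDA job."""
    return Path(f"{chunk}.out")            # pretend the pipeline produced this file

def merge_outputs(outputs, merged="merged.out"):
    with open(merged, "w") as out:
        for part in outputs:
            if Path(part).exists():
                out.write(Path(part).read_text())
    return merged

if __name__ == "__main__":
    chunks = split_input("samples.fastq")                 # assumes this input file exists
    with ProcessPoolExecutor() as pool:                   # stand-in for PanDA brokering
        outputs = list(pool.map(run_paleomix_on_chunk, chunks))
    print("merged into", merge_outputs(outputs))
```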

  11. HPGMG 1.0: A Benchmark for Ranking High Performance Computing Systems

    Energy Technology Data Exchange (ETDEWEB)

    Adams, Mark; Brown, Jed; Shalf, John; Straalen, Brian Van; Strohmaier, Erich; Williams, Sam

    2014-05-05

    This document provides an overview of the benchmark HPGMG for ranking large scale general purpose computers for use on the Top500 list [8]. We provide a rationale for the need for a replacement for the current metric HPL, some background on the Top500 list and the challenges of developing such a metric; we discuss our design philosophy and methodology, and an overview of the specification of the benchmark. The primary documentation with maintained details on the specification can be found at hpgmg.org, and the Wiki and benchmark code itself can be found in the repository https://bitbucket.org/hpgmg/hpgmg.

  12. Applying Machine Learning and High Performance Computing to Water Quality Assessment and Prediction

    OpenAIRE

    Ruijian Zhang; Deren Li

    2017-01-01

    Water quality assessment and prediction is an increasingly important issue. Traditional approaches either take a lot of time or can only perform assessments. In this research, by applying a machine learning algorithm to a long period of water attribute data, we can generate a decision tree that predicts the next day's water quality in an easy and efficient way. The idea is to combine the traditional methods and computer algorithms together. Using machine learning algorithms, the ass...
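
    A decision tree of the kind described can be trained in a few lines with scikit-learn. The sketch below is purely illustrative and not the authors' model: the water attributes, the synthetic labelling rule, and the hyperparameters are all assumptions made for the example.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)

# Hypothetical daily water attributes: pH, dissolved oxygen, turbidity, temperature.
X = rng.normal(loc=[7.2, 8.0, 3.0, 18.0], scale=[0.4, 1.5, 1.2, 4.0], size=(500, 4))
# Hypothetical label for the *next* day's quality (1 = acceptable, 0 = poor),
# synthesised here from a simple rule purely so the example runs end to end.
y = ((X[:, 1] > 6.5) & (X[:, 2] < 4.5)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_train, y_train)

print("held-out accuracy:", model.score(X_test, y_test))
print("tomorrow's predicted class:", model.predict([[7.1, 7.8, 2.9, 17.5]])[0])
```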

  13. Distributed Database Access in the LHC Computing Grid with CORAL

    CERN Document Server

    Molnár, Z; Düllmann, D; Giacomo, G; Kalkhof, A; Valassi, A; CERN. Geneva. IT Department

    2009-01-01

    The CORAL package is the LCG Persistency Framework foundation for accessing relational databases. From the start CORAL has been designed to facilitate the deployment of the LHC experiment database applications in a distributed computing environment. In particular we cover: improvements to database service scalability by client connection management; platform-independent, multi-tier scalable database access by connection multiplexing and caching; and a secure authentication and authorisation scheme integrated with existing grid services. We will summarize the deployment experience from several experiment productions using the distributed database infrastructure, which is now available in LCG. Finally, we present perspectives for future developments in this area.
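
    Connection multiplexing and result caching of the kind listed above can be illustrated with a small pool wrapper. The sketch below uses sqlite3 purely as a stand-in for the database backends CORAL actually targets; the class and table names are hypothetical and this is not the CORAL API.

```python
import sqlite3
import threading
from queue import Queue

class PooledConnections:
    """Minimal illustration of connection multiplexing plus a read-only result cache."""
    def __init__(self, database, size=4):
        self._pool = Queue()
        for _ in range(size):
            # Autocommit mode so all pooled connections see each other's writes.
            self._pool.put(sqlite3.connect(database, uri=True, isolation_level=None,
                                           check_same_thread=False))
        self._cache = {}
        self._lock = threading.Lock()

    def execute(self, sql, params=()):
        is_select = sql.lstrip().upper().startswith("SELECT")
        key = (sql, params)
        if is_select:
            with self._lock:
                if key in self._cache:
                    return self._cache[key]        # cached result, no database round trip
        conn = self._pool.get()                    # borrow a pooled connection
        try:
            rows = conn.execute(sql, params).fetchall()
        finally:
            self._pool.put(conn)                   # return it for reuse by other callers
        if is_select:
            with self._lock:
                self._cache[key] = rows
        return rows

# Shared in-memory SQLite database stands in for the real relational backend.
pool = PooledConnections("file:coral_demo?mode=memory&cache=shared", size=2)
pool.execute("CREATE TABLE conditions (run INTEGER, value REAL)")
pool.execute("INSERT INTO conditions VALUES (1, 0.5), (2, 0.7)")
print(pool.execute("SELECT value FROM conditions WHERE run = ?", (1,)))
```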

  14. ArrayBridge: Interweaving declarative array processing with high-performance computing

    Energy Technology Data Exchange (ETDEWEB)

    Xing, Haoyuan [The Ohio State Univ., Columbus, OH (United States); Floratos, Sofoklis [The Ohio State Univ., Columbus, OH (United States); Blanas, Spyros [The Ohio State Univ., Columbus, OH (United States); Byna, Suren [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Prabhat, Prabhat [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Wu, Kesheng [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Brown, Paul [Paradigm4, Inc., Waltham, MA (United States)

    2017-05-04

    Scientists are increasingly turning to datacenter-scale computers to produce and analyze massive arrays. Despite decades of database research that extols the virtues of declarative query processing, scientists still write, debug and parallelize imperative HPC kernels even for the most mundane queries. This impedance mismatch has been partly attributed to the cumbersome data loading process; in response, the database community has proposed in situ mechanisms to access data in scientific file formats. Scientists, however, desire more than a passive access method that reads arrays from files. This paper describes ArrayBridge, a bi-directional array view mechanism for scientific file formats, that aims to make declarative array manipulations interoperable with imperative file-centric analyses. Our prototype implementation of ArrayBridge uses HDF5 as the underlying array storage library and seamlessly integrates into the SciDB open-source array database system. In addition to fast querying over external array objects, ArrayBridge produces arrays in the HDF5 file format just as easily as it can read from it. ArrayBridge also supports time travel queries from imperative kernels through the unmodified HDF5 API, and automatically deduplicates between array versions for space efficiency. Our extensive performance evaluation in NERSC, a large-scale scientific computing facility, shows that ArrayBridge exhibits statistically indistinguishable performance and I/O scalability to the native SciDB storage engine.
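
    The array-view idea can be appreciated with plain h5py: a downstream step reads only the hyperslab of an HDF5 dataset that it needs rather than materialising the full array. This is a generic HDF5 access sketch with made-up file and dataset names, not the ArrayBridge or SciDB API.

```python
import h5py
import numpy as np

# Write a modest 3D array to HDF5, chunked so that partial reads are cheap.
with h5py.File("climate_demo.h5", "w") as f:
    data = np.random.rand(60, 180, 360).astype("f4")          # day x lat x lon
    f.create_dataset("temperature", data=data, chunks=(1, 180, 360),
                     compression="gzip")

# Later, an analysis step reads one week over one region instead of the whole
# array -- the essence of in situ, slab-wise access to scientific file formats.
with h5py.File("climate_demo.h5", "r") as f:
    dset = f["temperature"]
    week_patch = dset[0:7, 80:100, 170:200]                   # lazy slab read
    print(week_patch.shape, float(week_patch.mean()))
```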

  15. New Developments in Modeling MHD Systems on High Performance Computing Architectures

    Science.gov (United States)

    Germaschewski, K.; Raeder, J.; Larson, D. J.; Bhattacharjee, A.

    2009-04-01

    Modeling the wide range of time and length scales present even in fluid models of plasmas like MHD and X-MHD (Extended MHD including two fluid effects like the Hall term, electron inertia, and the electron pressure gradient) is challenging even on state-of-the-art supercomputers. In recent years, HPC capacity has continued to grow exponentially, but at the expense of making computer systems more and more difficult to program for maximum performance. In this paper, we will present a new approach to managing the complexity caused by the need to write efficient codes: separating the numerical description of the problem, in our case a discretized right hand side (r.h.s.), from the actual implementation of efficiently evaluating it. An automatic code generator is used to describe the r.h.s. in a quasi-symbolic form while leaving the translation into efficient and parallelized code to a computer program itself. We implemented this approach for OpenGGCM (Open General Geospace Circulation Model), a model of the Earth's magnetosphere, which was accelerated by a factor of three on regular x86 architecture and a factor of 25 on the Cell BE architecture (commonly known for its deployment in Sony's PlayStation 3).
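
    The separation of "what the r.h.s. is" from "how it is evaluated" can be mimicked in a few lines: a quasi-symbolic stencil description is turned into a vectorised evaluation function at run time. This is a toy generator in the spirit of the approach, not the OpenGGCM code generator; the expression syntax and helper names are assumptions.

```python
import numpy as np

def generate_rhs(expr, fields, dx):
    """Build a vectorised r.h.s. evaluator from a quasi-symbolic expression string.

    The expression may use the field names directly plus the helpers
    ddx(f) and ddy(f) for centred differences; everything else is NumPy.
    """
    helpers = {
        "np": np,
        "ddx": lambda f: (np.roll(f, -1, axis=0) - np.roll(f, 1, axis=0)) / (2 * dx),
        "ddy": lambda f: (np.roll(f, -1, axis=1) - np.roll(f, 1, axis=1)) / (2 * dx),
    }
    src = f"def rhs({', '.join(fields)}):\n    return {expr}\n"
    namespace = dict(helpers)
    exec(src, namespace)              # translate the description into executable code
    return namespace["rhs"]

# Quasi-symbolic description: a simple advection-diffusion right-hand side.
rhs = generate_rhs("-u * ddx(T) - v * ddy(T) + 0.1 * (ddx(ddx(T)) + ddy(ddy(T)))",
                   fields=["T", "u", "v"], dx=0.1)

T = np.random.rand(64, 64)
u = np.ones_like(T)
v = np.zeros_like(T)
print(rhs(T, u, v).shape)             # (64, 64)
```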

  16. Statistical physics of fracture: scientific discovery through high-performance computing

    International Nuclear Information System (INIS)

    Kumar, Phani; Nukala, V V; Simunovic, Srdan; Mills, Richard T

    2006-01-01

    The paper presents the state-of-the-art algorithmic developments for simulating the fracture of disordered quasi-brittle materials using discrete lattice systems. Large scale simulations are often required to obtain accurate scaling laws; however, due to computational complexity, the simulations using the traditional algorithms were limited to small system sizes. We have developed two algorithms: a multiple sparse Cholesky downdating scheme for simulating 2D random fuse model systems, and a block-circulant preconditioner for simulating 3D random fuse model systems. Using these algorithms, we were able to simulate fracture of the largest ever lattice system sizes (L = 1024 in 2D, and L = 64 in 3D) with extensive statistical sampling. Our recent simulations on 1024 processors of Cray-XT3 and IBM Blue-Gene/L have further enabled us to explore fracture of 3D lattice systems of size L = 200, which is a significant computational achievement. These largest ever numerical simulations have enhanced our understanding of the physics of fracture; in particular, we analyze damage localization and its deviation from percolation behavior, scaling laws for damage density, universality of fracture strength distribution, size effect on the mean fracture strength, and finally the scaling of crack surface roughness

  17. ABINIT: Plane-Wave-Based Density-Functional Theory on High Performance Computers

    Science.gov (United States)

    Torrent, Marc

    2014-03-01

    For several years, a continuous effort has been made to adapt electronic structure codes based on Density-Functional Theory to future computing architectures. Among these codes, ABINIT is based on a plane-wave description of the wave functions which allows it to treat systems of any kind. Porting such a code to petascale architectures poses difficulties related to the many-body nature of the DFT equations. To improve the performance of ABINIT - especially for standard LDA/GGA ground-state and response-function calculations - several strategies have been followed: A full multi-level MPI parallelisation scheme has been implemented, exploiting all possible levels and distributing both computation and memory. It increases the number of distributed processes and could not have been achieved without a strong restructuring of the code. The core algorithm used to solve the eigenproblem (``Locally Optimal Blocked Conjugate Gradient''), a blocked Davidson-like algorithm, is based on a distribution of processes combining plane-waves and bands. In addition to the distributed memory parallelization, a full hybrid scheme has been implemented, using standard shared-memory directives (openMP/openACC) or porting some time-consuming code sections to Graphics Processing Units (GPU). As no simple performance model exists, the complexity of use has been increased; the code efficiency strongly depends on the distribution of processes among the numerous levels. ABINIT is able to predict the performance of several process distributions and automatically choose the most favourable one. On the other hand, a big effort has been carried out to analyse the performance of the code on petascale architectures, showing which sections of the code have to be improved; they are all related to matrix algebra (diagonalisation, orthogonalisation). The different strategies employed to improve the code scalability will be described. They are based on an exploration of new diagonalization

  18. Cloud computing for energy management in smart grid - an application survey

    International Nuclear Information System (INIS)

    Naveen, P; Ing, Wong Kiing; Danquah, Michael Kobina; Sidhu, Amandeep S; Abu-Siada, Ahmed

    2016-01-01

    The smart grid is an emerging energy system in which information technology, tools and techniques are applied to make the grid run more efficiently. It possesses demand response capacity to help balance electrical consumption with supply. The challenges and opportunities of emerging and future smart grids can be addressed by cloud computing. To focus on these requirements, we provide an in-depth survey of different cloud computing applications for energy management in the smart grid architecture. In this survey, we present an outline of the current state of research on smart grid development. We also propose a model of cloud-based economic power dispatch for the smart grid. (paper)
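
    Economic power dispatch of the kind proposed can be stated as minimising total quadratic generation cost subject to meeting demand. The sketch below is a textbook lambda-iteration solver included only to make the dispatch model concrete; the generator cost coefficients and limits are hypothetical, and this is not the authors' cloud-based formulation.

```python
# Hypothetical generator cost curves C_i(P) = a + b*P + c*P^2 and limits (MW).
gens = [
    {"b": 20.0, "c": 0.050, "pmin": 50.0, "pmax": 300.0},
    {"b": 18.0, "c": 0.060, "pmin": 80.0, "pmax": 250.0},
    {"b": 22.0, "c": 0.040, "pmin": 30.0, "pmax": 200.0},
]

def dispatch(demand_mw, gens, tol=1e-6):
    """Lambda iteration: find the common incremental cost that meets demand."""
    def output_at(lam):
        # Each unit runs where its incremental cost b + 2cP equals lambda,
        # clipped to its operating limits.
        return [min(max((lam - g["b"]) / (2 * g["c"]), g["pmin"]), g["pmax"]) for g in gens]

    lo, hi = 0.0, 200.0                      # bracket for the incremental cost ($/MWh)
    while hi - lo > tol:
        lam = 0.5 * (lo + hi)
        if sum(output_at(lam)) < demand_mw:
            lo = lam
        else:
            hi = lam
    return output_at(0.5 * (lo + hi))

P = dispatch(500.0, gens)
print([round(p, 1) for p in P], "total =", round(sum(P), 1), "MW")
```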

  19. Exploiting Data Intensive Applications on High Performance Computers to Unlock Australia's Landsat Archive

    Science.gov (United States)

    Purss, Matthew; Lewis, Adam; Edberg, Roger; Ip, Alex; Sixsmith, Joshua; Frankish, Glenn; Chan, Tai; Evans, Ben; Hurst, Lachlan

    2013-04-01

    Australia's Earth Observation Program has downlinked and archived satellite data acquired under the NASA Landsat mission for the Australian Government since the establishment of the Australian Landsat Station in 1979. Geoscience Australia maintains this archive and produces image products to aid the delivery of government policy objectives. Due to the labor intensive nature of processing of this data there have been few national-scale datasets created to date. To compile any Earth Observation product the historical approach has been to select the required subset of data and process "scene by scene" on an as-needed basis. As data volumes have increased over time, and the demand for the processed data has also grown, it has become increasingly difficult to rapidly produce these products and achieve satisfactory policy outcomes using these historic processing methods. The result is that we have been "drowning in a sea of uncalibrated data" and scientists, policy makers and the public have not been able to realize the full potential of the Australian Landsat Archive and its value is therefore significantly diminished. To overcome this critical issue, the Australian Space Research Program has funded the "Unlocking the Landsat Archive" (ULA) Project from April 2011 to June 2013 to improve the access and utilization of Australia's archive of Landsat data. The ULA Project is a public-private consortium led by Lockheed Martin Australia (LMA) and involving Geoscience Australia (GA), the Victorian Partnership for Advanced Computing (VPAC), the National Computational Infrastructure (NCI) at the Australian National University (ANU) and the Cooperative Research Centre for Spatial Information (CRC-SI). The outputs from the ULA project will become a fundamental component of Australia's eResearch infrastructure, with the Australian Landsat Archive hosted on the NCI and made openly available under a creative commons license. NCI provides access to researchers through significant HPC

  20. SiGe epitaxial memory for neuromorphic computing with reproducible high performance based on engineered dislocations

    Science.gov (United States)

    Choi, Shinhyun; Tan, Scott H.; Li, Zefan; Kim, Yunjo; Choi, Chanyeol; Chen, Pai-Yu; Yeon, Hanwool; Yu, Shimeng; Kim, Jeehwan

    2018-01-01

    Although several types of architecture combining memory cells and transistors have been used to demonstrate artificial synaptic arrays, they usually present limited scalability and high power consumption. Transistor-free analog switching devices may overcome these limitations, yet the typical switching process they rely on—formation of filaments in an amorphous medium—is not easily controlled and hence hampers the spatial and temporal reproducibility of the performance. Here, we demonstrate analog resistive switching devices that possess desired characteristics for neuromorphic computing networks with minimal performance variations using a single-crystalline SiGe layer epitaxially grown on Si as a switching medium. Such epitaxial random access memories utilize threading dislocations in SiGe to confine metal filaments in a defined, one-dimensional channel. This confinement results in drastically enhanced switching uniformity and long retention/high endurance with a high analog on/off ratio. Simulations using the MNIST handwritten recognition data set prove that epitaxial random access memories can operate with an online learning accuracy of 95.1%.

  1. A high performance, low power computational platform for complex sensing operations in smart cities

    KAUST Repository

    Jiang, Jiming; Claudel, Christian

    2017-01-01

    This paper presents a new wireless platform designed for an integrated traffic/flash flood monitoring system. The sensor platform is built around a 32-bit ARM Cortex M4 microcontroller and a 2.4 GHz 802.15.4 ISM compliant radio module. It can be interfaced with fixed traffic sensors, or receive data from vehicle transponders. This platform is specifically designed for solar-powered, low bandwidth, high computational performance wireless sensor network applications. A self-recovering unit is designed to increase reliability and allow periodic hard resets, an essential requirement for sensor networks. Radio monitoring circuitry is proposed to monitor incoming and outgoing transmissions, simplifying software debugging. We illustrate the performance of this wireless sensor platform on complex problems arising in smart cities, such as traffic flow monitoring, machine-learning-based flash flood monitoring or Kalman-filter based vehicle trajectory estimation. All design files have been uploaded and shared in an open science framework, and can be accessed from [1]. The hardware design is under CERN Open Hardware License v1.2.
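
    Kalman-filter based vehicle trajectory estimation of the kind mentioned above can be illustrated with a constant-velocity filter over noisy position fixes. The sketch below is a generic one-dimensional example, not the platform's firmware; the time step and noise variances are assumptions.

```python
import numpy as np

def kalman_track(positions, dt=1.0, meas_var=4.0, accel_var=0.5):
    """Constant-velocity Kalman filter over noisy 1D position measurements."""
    F = np.array([[1.0, dt], [0.0, 1.0]])                 # state transition
    H = np.array([[1.0, 0.0]])                            # we only measure position
    Q = accel_var * np.array([[dt**4 / 4, dt**3 / 2],     # process noise
                              [dt**3 / 2, dt**2]])
    R = np.array([[meas_var]])                            # measurement noise
    x = np.array([[positions[0]], [0.0]])                 # initial state [position, velocity]
    P = np.eye(2) * 10.0
    estimates = []
    for z in positions:
        # Predict
        x = F @ x
        P = F @ P @ F.T + Q
        # Update
        y = np.array([[z]]) - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
        estimates.append((float(x[0, 0]), float(x[1, 0])))
    return estimates

# Synthetic vehicle moving at ~15 m/s with noisy GPS-like position fixes.
true_pos = np.arange(0, 300, 15.0)
noisy = true_pos + np.random.normal(scale=2.0, size=true_pos.size)
for pos, vel in kalman_track(noisy)[-3:]:
    print(f"position ~ {pos:6.1f} m, speed ~ {vel:5.2f} m/s")
```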

  2. A high performance, low power computational platform for complex sensing operations in smart cities

    KAUST Repository

    Jiang, Jiming

    2017-02-02

    This paper presents a new wireless platform designed for an integrated traffic/flash flood monitoring system. The sensor platform is built around a 32-bit ARM Cortex M4 microcontroller and a 2.4 GHz 802.15.4 ISM compliant radio module. It can be interfaced with fixed traffic sensors, or receive data from vehicle transponders. This platform is specifically designed for solar-powered, low bandwidth, high computational performance wireless sensor network applications. A self-recovering unit is designed to increase reliability and allow periodic hard resets, an essential requirement for sensor networks. Radio monitoring circuitry is proposed to monitor incoming and outgoing transmissions, simplifying software debugging. We illustrate the performance of this wireless sensor platform on complex problems arising in smart cities, such as traffic flow monitoring, machine-learning-based flash flood monitoring or Kalman-filter based vehicle trajectory estimation. All design files have been uploaded and shared in an open science framework, and can be accessed from [1]. The hardware design is under CERN Open Hardware License v1.2.

  3. Coordinated Fault-Tolerance for High-Performance Computing Final Project Report

    Energy Technology Data Exchange (ETDEWEB)

    Panda, Dhabaleswar Kumar [The Ohio State University; Beckman, Pete

    2011-07-28

    existing publish-subscribe tools. We enhanced the intrinsic fault tolerance capabilities of representative implementations of a variety of key HPC software subsystems and integrated them with the FTB. Targeted software subsystems included: MPI communication libraries, checkpoint/restart libraries, resource managers and job schedulers, and system monitoring tools. Leveraging the aforementioned infrastructure, as well as developing and utilizing additional tools, we have examined issues associated with expanded, end-to-end fault response from both system and application viewpoints. From the standpoint of system operations, we have investigated log and root cause analysis, anomaly detection and fault prediction, and generalized notification mechanisms. Our applications work has included libraries for fault-tolerant linear algebra, application frameworks for coupled multiphysics applications, and external frameworks to support the monitoring and response for general applications. Our final goal was to engage the high-end computing community to increase awareness of tools and issues around coordinated end-to-end fault management.

  4. Development of three-dimensional neoclassical transport simulation code with high performance Fortran on a vector-parallel computer

    International Nuclear Information System (INIS)

    Satake, Shinsuke; Okamoto, Masao; Nakajima, Noriyoshi; Takamaru, Hisanori

    2005-11-01

    A neoclassical transport simulation code (FORTEC-3D) applicable to three-dimensional configurations has been developed using High Performance Fortran (HPF). The adoption of parallelization techniques and of a hybrid simulation model for the δf Monte-Carlo transport simulation, including non-local transport effects in three-dimensional configurations, makes it possible to simulate the dynamics of global, non-local transport phenomena with a self-consistent radial electric field within a reasonable computation time. In this paper, development of the transport code using HPF is reported. Optimization techniques used to achieve both high vectorization and parallelization efficiency, the adoption of a parallel random number generator, and benchmark results are shown. (author)

  5. The GLOBE-Consortium: The Erasmus Computing Grid – Building a Super-Computer at Erasmus MC for FREE

    NARCIS (Netherlands)

    T.A. Knoch (Tobias)

    2005-01-01

    To meet the enormous computational needs of life-science research as well as clinical diagnostics and treatment, the Hogeschool Rotterdam and the Erasmus Medical Center are currently setting up one of the largest desktop computing grids in the world – The Erasmus Computing Grid.

  6. An Offload NIC for NASA, NLR, and Grid Computing

    Science.gov (United States)

    Awrach, James

    2013-01-01

    This work addresses distributed data management and access dynamically configurable high-speed access to data distributed and shared over wide-area high-speed network environments. An offload engine NIC (network interface card) is proposed that scales at nX10-Gbps increments through 100-Gbps full duplex. The Globus de facto standard was used in projects requiring secure, robust, high-speed bulk data transport. Novel extension mechanisms were derived that will combine these technologies for use by GridFTP, bandwidth management resources, and host CPU (central processing unit) acceleration. The result will be wire-rate encrypted Globus grid data transactions through offload for splintering, encryption, and compression. As the need for greater network bandwidth increases, there is an inherent need for faster CPUs. The best way to accelerate CPUs is through a network acceleration engine. Grid computing data transfers for the Globus tool set did not have wire-rate encryption or compression. Existing technology cannot keep pace with the greater bandwidths of backplane and network connections. Present offload engines with ports to Ethernet are 32 to 40 Gbps full duplex at best. The best of ultra-high-speed offload engines use expensive ASICs (application specific integrated circuits) or NPUs (network processing units). The present state of the art also includes bonding and the use of multiple NICs that are also in the planning stages for future portability to ASICs and software to accommodate data rates at 100 Gbps. The remaining industry solutions are for carrier-grade equipment manufacturers, with costly line cards having multiples of 10-Gbps ports, or 100-Gbps ports such as CFP modules that interface to costly ASICs and related circuitry. All of the existing solutions vary in configuration based on requirements of the host, motherboard, or carrier-grade equipment. The purpose of the innovation is to eliminate data bottlenecks within cluster, grid, and cloud computing systems

  7. The NCI High Performance Computing (HPC) and High Performance Data (HPD) Platform to Support the Analysis of Petascale Environmental Data Collections

    Science.gov (United States)

    Evans, B. J. K.; Pugh, T.; Wyborn, L. A.; Porter, D.; Allen, C.; Smillie, J.; Antony, J.; Trenham, C.; Evans, B. J.; Beckett, D.; Erwin, T.; King, E.; Hodge, J.; Woodcock, R.; Fraser, R.; Lescinsky, D. T.

    2014-12-01

    The National Computational Infrastructure (NCI) has co-located a priority set of national data assets within a HPC research platform. This powerful in-situ computational platform has been created to help serve and analyse the massive amounts of data across the spectrum of environmental collections - in particular the climate, observational data and geoscientific domains. This paper examines the infrastructure, innovation and opportunity for this significant research platform. NCI currently manages nationally significant data collections (10+ PB) categorised as 1) earth system sciences, climate and weather model data assets and products, 2) earth and marine observations and products, 3) geosciences, 4) terrestrial ecosystem, 5) water management and hydrology, and 6) astronomy, social science and biosciences. The data is largely sourced from the NCI partners (who include the custodians of many of the national scientific records), major research communities, and collaborating overseas organisations. By co-locating these large valuable data assets, new opportunities have arisen by harmonising the data collections, making a powerful transdisciplinary research platform. The data is accessible within an integrated HPC-HPD environment - a 1.2 PFlop supercomputer (Raijin), a HPC class 3000 core OpenStack cloud system and several highly connected large scale and high-bandwidth Lustre filesystems. New scientific software, cloud-scale techniques, server-side visualisation and data services have been harnessed and integrated into the platform, so that analysis is performed seamlessly across the traditional boundaries of the underlying data domains. Characterisation of the techniques along with performance profiling ensures scalability of each software component, all of which can either be enhanced or replaced through future improvements. A Development-to-Operations (DevOps) framework has also been implemented to manage the scale of the software complexity alone. This ensures that

  8. CERN database services for the LHC computing grid

    Energy Technology Data Exchange (ETDEWEB)

    Girone, M [CERN IT Department, CH-1211 Geneva 23 (Switzerland)], E-mail: maria.girone@cern.ch

    2008-07-15

    Physics meta-data stored in relational databases play a crucial role in the Large Hadron Collider (LHC) experiments and also in the operation of the Worldwide LHC Computing Grid (WLCG) services. A large proportion of non-event data such as detector conditions, calibration, geometry and production bookkeeping relies heavily on databases. Also, the core Grid services that catalogue and distribute LHC data cannot operate without a reliable database infrastructure at CERN and elsewhere. The Physics Services and Support group at CERN provides database services for the physics community. With an installed base of several TB-sized database clusters, the service is designed to accommodate growth for data processing generated by the LHC experiments and LCG services. During the last year, the physics database services went through a major preparation phase for LHC start-up and are now fully based on Oracle clusters on Intel/Linux. Over 100 database server nodes are deployed today in some 15 clusters serving almost 2 million database sessions per week. This paper will detail the architecture currently deployed in production and the results achieved in the areas of high availability, consolidation and scalability. Service evolution plans for the LHC start-up will also be discussed.

  9. CERN database services for the LHC computing grid

    International Nuclear Information System (INIS)

    Girone, M

    2008-01-01

    Physics meta-data stored in relational databases play a crucial role in the Large Hadron Collider (LHC) experiments and also in the operation of the Worldwide LHC Computing Grid (WLCG) services. A large proportion of non-event data such as detector conditions, calibration, geometry and production bookkeeping relies heavily on databases. Also, the core Grid services that catalogue and distribute LHC data cannot operate without a reliable database infrastructure at CERN and elsewhere. The Physics Services and Support group at CERN provides database services for the physics community. With an installed base of several TB-sized database clusters, the service is designed to accommodate growth for data processing generated by the LHC experiments and LCG services. During the last year, the physics database services went through a major preparation phase for LHC start-up and are now fully based on Oracle clusters on Intel/Linux. Over 100 database server nodes are deployed today in some 15 clusters serving almost 2 million database sessions per week. This paper will detail the architecture currently deployed in production and the results achieved in the areas of high availability, consolidation and scalability. Service evolution plans for the LHC start-up will also be discussed

  10. Using the Eclipse Parallel Tools Platform to Assist Earth Science Model Development and Optimization on High Performance Computers

    Science.gov (United States)

    Alameda, J. C.

    2011-12-01

    Development and optimization of computational science models, particularly on high performance computers, and with the advent of ubiquitous multicore processor systems, practically on every system, has been accomplished with basic software tools, typically, command-line based compilers, debuggers, performance tools that have not changed substantially from the days of serial and early vector computers. However, model complexity, including the complexity added by modern message passing libraries such as MPI, and the need for hybrid code models (such as openMP and MPI) to be able to take full advantage of high performance computers with an increasing core count per shared memory node, has made development and optimization of such codes an increasingly arduous task. Additional architectural developments, such as many-core processors, only complicate the situation further. In this paper, we describe how our NSF-funded project, "SI2-SSI: A Productive and Accessible Development Workbench for HPC Applications Using the Eclipse Parallel Tools Platform" (WHPC) seeks to improve the Eclipse Parallel Tools Platform, an environment designed to support scientific code development targeted at a diverse set of high performance computing systems. Our WHPC project to improve Eclipse PTP takes an application-centric view to improve PTP. We are using a set of scientific applications, each with a variety of challenges, and using PTP to drive further improvements to both the scientific application, as well as to understand shortcomings in Eclipse PTP from an application developer perspective, to drive our list of improvements we seek to make. We are also partnering with performance tool providers, to drive higher quality performance tool integration. We have partnered with the Cactus group at Louisiana State University to improve Eclipse's ability to work with computational frameworks and extremely complex build systems, as well as to develop educational materials to incorporate into

  11. LHCb: The Evolution of the LHCb Grid Computing Model

    CERN Multimedia

    Arrabito, L; Bouvet, D; Cattaneo, M; Charpentier, P; Clarke, P; Closier, J; Franchini, P; Graciani, R; Lanciotti, E; Mendez, V; Perazzini, S; Nandkumar, R; Remenska, D; Roiser, S; Romanovskiy, V; Santinelli, R; Stagni, F; Tsaregorodtsev, A; Ubeda Garcia, M; Vedaee, A; Zhelezov, A

    2012-01-01

    The increase of luminosity in the LHC during its second year of operation (2011) was achieved by delivering more protons per bunch and increasing the number of bunches. Taking advantage of these changed conditions, LHCb ran with a higher pileup as well as a much larger charm physics programme, introducing a bigger event size and longer processing times. These changes led to shortages in the offline distributed data processing resources: an increased need of CPU capacity by a factor of 2 for reconstruction, higher storage needs at T1 sites by 70%, and subsequently problems with data throughput for file access from the storage elements. To accommodate these changes the online running conditions and the Computing Model for offline data processing had to be adapted accordingly. This paper describes the changes implemented for the offline data processing on the Grid, relaxing the MONARC model in a first step and going beyond it subsequently. It further describes other operational issues discovered and solved during 2011, present the ...

  12. Distributed Monitoring Infrastructure for Worldwide LHC Computing Grid

    CERN Document Server

    Andrade, Pedro; Bhatt, Kislay; Chand, Phool; Collados, David; Duggal, Vibhuti; Fuente, Paloma; Hayashi, Soichi; Imamagic, Emir; Joshi, Pradyumna; Kalmady, Rajesh; Karnani, Urvashi; Kumar, Vaibhav; Lapka, Wojciech; Quick, Robert; Tarragon, Jacobo; Teige, Scott; Triantafyllidis, Christos

    2012-01-01

    The journey of a monitoring probe from its development phase to the moment its execution result is presented in an availability report is a complex process. It goes through multiple phases such as development, testing, integration, release, deployment, execution, data aggregation, computation, and reporting. Further, it involves people with different roles (developers, site managers, VO managers, service managers, management), from different middleware providers (ARC, dCache, gLite, UNICORE and VDT), consortiums (WLCG, EMI, EGI, OSG), and operational teams (GOC, OMB, OTAG, CSIRT). The seamless harmonization of these distributed actors is in daily use for monitoring of the WLCG infrastructure. In this paper we describe the monitoring of the WLCG infrastructure from the operational perspective. We explain the complexity of the journey of a monitoring probe from its execution on a grid node to the visualization on the MyWLCG portal where it is exposed to other clients. This monitoring workflow profits from the i...

  13. Engineering of an Extreme Rainfall Detection System using Grid Computing

    Directory of Open Access Journals (Sweden)

    Olivier Terzo

    2012-10-01

    Full Text Available This paper describes a new approach for intensive rainfall data analysis. ITHACA's Extreme Rainfall Detection System (ERDS) is conceived to provide near real-time alerts related to potential exceptional rainfalls worldwide, which can be used by WFP or other humanitarian assistance organizations to evaluate the event and understand the potentially floodable areas where their assistance is needed. This system is based on precipitation analysis and it uses satellite rainfall data at a worldwide extent. This project uses the Tropical Rainfall Measuring Mission Multisatellite Precipitation Analysis dataset, a NASA-delivered near real-time product for current rainfall condition monitoring over the world. Considering the great deal of data to process, this paper presents an architectural solution based on Grid Computing techniques. Our focus is on the advantages of using a distributed architecture in terms of performance for this specific purpose.
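
    The core detection step, flagging grid cells whose accumulated rainfall exceeds an alert threshold, can be sketched in a few lines of NumPy. The fields, the 40 mm / 24 h threshold, and the grid size below are hypothetical stand-ins for the near-real-time satellite product and the ERDS operational criteria.

```python
import numpy as np

# Hypothetical 3-hourly precipitation grids (mm) over a lat/lon box, standing in
# for a near-real-time satellite product such as the TRMM multisatellite analysis.
rng = np.random.default_rng(42)
rain_3h = rng.gamma(shape=0.3, scale=6.0, size=(8, 40, 40))   # 24 h of 3-hourly fields

# Accumulate over 24 h and flag cells exceeding a fixed depth threshold.
accum_24h = rain_3h.sum(axis=0)
alert_threshold_mm = 40.0                                     # hypothetical heavy-rain threshold
alerts = accum_24h > alert_threshold_mm

print(f"{alerts.sum()} of {alerts.size} grid cells above {alert_threshold_mm} mm / 24 h")
for i, j in list(zip(*np.nonzero(alerts)))[:5]:               # show a few alert cells
    print(f"  alert at cell ({i}, {j}): {accum_24h[i, j]:.1f} mm")
```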

  14. Development of a Computational Steering Framework for High Performance Computing Environments on Blue Gene/P Systems

    KAUST Repository

    Danani, Bob K.

    2012-01-01

    of simulation results in a post-processing step is now transformed into a real-time interactive workflow that significantly reduces development and testing time. Computational steering provides the capability to direct or re-direct the progress of a simulation

  15. Photons, photosynthesis, and high-performance computing: challenges, progress, and promise of modeling metabolism in green algae

    International Nuclear Information System (INIS)

    Chang, C H; Graf, P; Alber, D M; Kim, K; Murray, G; Posewitz, M; Seibert, M

    2008-01-01

    The complexity associated with biological metabolism considered at a kinetic level presents a challenge to quantitative modeling. In particular, the relatively sparse knowledge of parameters for enzymes with known kinetic responses is problematic. The possible space of these parameters is of high-dimension, and sampling of such a space typifies a combinatorial explosion of possible dynamic states. However, with sufficient quantitative transcriptomics, proteomics, and metabolomics data at hand, these challenges could be met by high-performance software with sampling, fitting, and optimization capabilities. With this in mind, we present the High-Performance Systems Biology Toolkit HiPer SBTK, an evolving software package to simulate, fit, and optimize metabolite concentrations and fluxes within the space of rate and binding parameters associated with detailed enzyme kinetic models. We present our chosen modeling paradigm for the formulation of metabolic pathway models, the means to address the challenge of representing such models in a precise and persistent fashion using the standardized Systems Biology Markup Language, and our second-generation model of H2-associated Chlamydomonas metabolism. Processing of such models for hierarchically parallelized simulation and optimization, job specification by the user through a GUI interface, software capabilities and initial scaling data, and the mapping of the computation to biological questions is also discussed. Moreover, we present near-term future software and model development goals
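
    The parameter-sampling idea behind the toolkit can be illustrated with a toy two-step Michaelis-Menten pathway integrated with SciPy. This is a generic sketch only, not the HiPer SBTK API or the authors' Chlamydomonas model; the pathway, parameter ranges, and the figure of merit are assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(7)

def pathway_rhs(t, y, vmax1, km1, vmax2, km2):
    """Two Michaelis-Menten steps in series: S -> I -> P."""
    s, i, p = y
    r1 = vmax1 * s / (km1 + s)
    r2 = vmax2 * i / (km2 + i)
    return [-r1, r1 - r2, r2]

def final_product(params, s0=10.0, t_end=50.0):
    sol = solve_ivp(pathway_rhs, (0.0, t_end), [s0, 0.0, 0.0],
                    args=tuple(params), rtol=1e-6)
    return sol.y[2, -1]          # product concentration at the end of the run

# Sample rate/binding parameters over plausible ranges (log-uniform), as a
# small stand-in for the toolkit's large-scale sampling of kinetic parameter space.
samples = 10 ** rng.uniform(low=[-1, -1, -1, -1], high=[1, 1, 1, 1], size=(20, 4))
results = [final_product(p) for p in samples]
best = samples[int(np.argmax(results))]
print("best-sampled parameters (vmax1, km1, vmax2, km2):", np.round(best, 3))
print("product formed:", round(max(results), 3))
```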

  16. Harnessing the Department of Energy’s High-Performance Computing Expertise to Strengthen the U.S. Chemical Enterprise

    Energy Technology Data Exchange (ETDEWEB)

    Dixon, David A.; Dupuis, Michel; Garrett, Bruce C.; Neaton, Jeffrey B.; Plata, Charity; Tarr, Matthew A.; Tomb, Jean-Francois; Golab, Joseph T.

    2012-01-17

    High-performance computing (HPC) is one area where the DOE has developed extensive expertise and capability. However, this expertise currently is not properly shared with or used by the private sector to speed product development, enable industry to move rapidly into new areas, and improve product quality. Such use would lead to substantial competitive advantages in global markets and yield important economic returns for the United States. To stimulate the dissemination of DOE's HPC expertise, the Council for Chemical Research (CCR) and the DOE jointly held a workshop on this topic. Four important energy topic areas were chosen as the focus of the meeting: Biomass/Bioenergy, Catalytic Materials, Energy Storage, and Photovoltaics. Academic, industrial, and government experts in these topic areas participated in the workshop to identify industry needs, evaluate the current state of expertise, offer proposed actions and strategies, and forecast the expected benefits of implementing those strategies.

  17. Physical modeling and high-performance GPU computing for characterization, interception, and disruption of hazardous near-Earth objects

    Science.gov (United States)

    Kaplinger, Brian Douglas

    For the past few decades, both the scientific community and the general public have been becoming more aware that the Earth lives in a shooting gallery of small objects. We classify all of these asteroids and comets, known or unknown, that cross Earth's orbit as near-Earth objects (NEOs). A look at our geologic history tells us that NEOs have collided with Earth in the past, and we expect that they will continue to do so. With thousands of known NEOs crossing the orbit of Earth, there has been significant scientific interest in developing the capability to deflect an NEO from an impacting trajectory. This thesis applies the ideas of Smoothed Particle Hydrodynamics (SPH) theory to the NEO disruption problem. A simulation package was designed that allows efficacy simulation to be integrated into the mission planning and design process. This is done by applying ideas in high-performance computing (HPC) on the computer graphics processing unit (GPU). Rather than prove a concept through large standalone simulations on a supercomputer, a highly parallel structure allows for flexible, target dependent questions to be resolved. Built around nonclassified data and analysis, this computer package will allow academic institutions to better tackle the issue of NEO mitigation effectiveness.

  18. SCinet Architecture: Featured at the International Conference for High Performance Computing, Networking, Storage and Analysis 2016

    Energy Technology Data Exchange (ETDEWEB)

    Lyonnais, Marc; Smith, Matt; Mace, Kate P.

    2017-02-06

    SCinet is the purpose-built network that operates during the International Conference for High Performance Computing, Networking, Storage and Analysis (Super Computing or SC). Created each year for the conference, SCinet brings to life a high-capacity network that supports applications and experiments that are a hallmark of the SC conference. The network links the convention center to research and commercial networks around the world. This resource serves as a platform for exhibitors to demonstrate the advanced computing resources of their home institutions and elsewhere by supporting a wide variety of applications. Volunteers from academia, government and industry work together to design and deliver the SCinet infrastructure. Industry vendors and carriers donate millions of dollars in equipment and services needed to build and support the local and wide area networks. Planning begins more than a year in advance of each SC conference and culminates in a high intensity installation in the days leading up to the conference. The SCinet architecture for SC16 illustrates a dramatic increase in participation from the vendor community, particularly those that focus on network equipment. Software-Defined Networking (SDN) and Data Center Networking (DCN) are present in nearly all aspects of the design.

  19. Advances in Grid Computing for the Fabric for Frontier Experiments Project at Fermilab

    Science.gov (United States)

    Herner, K.; Alba Hernandez, A. F.; Bhat, S.; Box, D.; Boyd, J.; Di Benedetto, V.; Ding, P.; Dykstra, D.; Fattoruso, M.; Garzoglio, G.; Kirby, M.; Kreymer, A.; Levshina, T.; Mazzacane, A.; Mengel, M.; Mhashilkar, P.; Podstavkov, V.; Retzke, K.; Sharma, N.; Teheran, J.

    2017-10-01

    The Fabric for Frontier Experiments (FIFE) project is a major initiative within the Fermilab Scientific Computing Division charged with leading the computing model for Fermilab experiments. Work within the FIFE project creates close collaboration between experimenters and computing professionals to serve high-energy physics experiments of differing size, scope, and physics area. The FIFE project has worked to develop common tools for job submission, certificate management, software and reference data distribution through CVMFS repositories, robust data transfer, job monitoring, and databases for project tracking. Since the project's inception the experiments under the FIFE umbrella have significantly matured, and present an increasingly complex list of requirements to service providers. To meet these requirements, the FIFE project has been involved in transitioning the Fermilab General Purpose Grid cluster to support a partitionable slot model, expanding the resources available to experiments via the Open Science Grid, assisting with commissioning dedicated high-throughput computing resources for individual experiments, supporting the efforts of the HEP Cloud projects to provision a variety of back end resources, including public clouds and high performance computers, and developing rapid onboarding procedures for new experiments and collaborations. The larger demands also require enhanced job monitoring tools, which the project has developed using such tools as ElasticSearch and Grafana, to help experiments manage their large-scale production workflows. This group in turn requires a structured service to facilitate smooth management of experiment requests, which FIFE provides in the form of the Production Operations Management Service (POMS). POMS is designed to track and manage requests from the FIFE experiments to run particular workflows, and support troubleshooting and triage in case of problems. Recently a new certificate management infrastructure called

  20. Advances in Grid Computing for the FabrIc for Frontier Experiments Project at Fermilab

    Energy Technology Data Exchange (ETDEWEB)

    Herner, K. [Fermilab; Alba Hernandex, A. F. [Fermilab; Bhat, S. [Fermilab; Box, D. [Fermilab; Boyd, J. [Fermilab; Di Benedetto, V. [Fermilab; Ding, P. [Fermilab; Dykstra, D. [Fermilab; Fattoruso, M. [Fermilab; Garzoglio, G. [Fermilab; Kirby, M. [Fermilab; Kreymer, A. [Fermilab; Levshina, T. [Fermilab; Mazzacane, A. [Fermilab; Mengel, M. [Fermilab; Mhashilkar, P. [Fermilab; Podstavkov, V. [Fermilab; Retzke, K. [Fermilab; Sharma, N. [Fermilab; Teheran, J. [Fermilab

    2016-01-01

    The FabrIc for Frontier Experiments (FIFE) project is a major initiative within the Fermilab Scientific Computing Division charged with leading the computing model for Fermilab experiments. Work within the FIFE project creates close collaboration between experimenters and computing professionals to serve high-energy physics experiments of differing size, scope, and physics area. The FIFE project has worked to develop common tools for job submission, certificate management, software and reference data distribution through CVMFS repositories, robust data transfer, job monitoring, and databases for project tracking. Since the project's inception the experiments under the FIFE umbrella have significantly matured, and present an increasingly complex list of requirements to service providers. To meet these requirements, the FIFE project has been involved in transitioning the Fermilab General Purpose Grid cluster to support a partitionable slot model, expanding the resources available to experiments via the Open Science Grid, assisting with commissioning dedicated high-throughput computing resources for individual experiments, supporting the efforts of the HEP Cloud project to provision a variety of back-end resources, including public clouds and high performance computers, and developing rapid onboarding procedures for new experiments and collaborations. The larger demands also require enhanced job monitoring tools, which the project has developed using tools such as Elasticsearch and Grafana, to help experiments manage their large-scale production workflows. The experiments in turn require a structured service to facilitate smooth management of their requests, which FIFE provides in the form of the Production Operations Management Service (POMS). POMS is designed to track and manage requests from the FIFE experiments to run particular workflows, and to support troubleshooting and triage in case of problems. Recently a new certificate management infrastructure called Distributed

  1. Parallel Computational Fluid Dynamics 2007 : Implementations and Experiences on Large Scale and Grid Computing

    CERN Document Server

    2009-01-01

    At the 19th Annual Conference on Parallel Computational Fluid Dynamics, held in Antalya, Turkey, in May 2007, the most recent developments and implementations of large-scale and grid computing were presented. This book, composed of the invited and selected papers of the conference, details those advances, which are of particular interest to the CFD and CFD-related communities. It also presents results from applications to various scientific and engineering problems involving flows and flow-related topics. Intended for CFD researchers and graduate students, this book is a state-of-the-art presentation of the relevant methodology and implementation techniques of large-scale computing.

  2. Multidimensional Environmental Data Resource Brokering on Computational Grids and Scientific Clouds

    Science.gov (United States)

    Montella, Raffaele; Giunta, Giulio; Laccetti, Giuliano

    Grid computing has evolved considerably over the past years, and its capabilities have found their way even into business products, no longer being relegated to scientific applications. Today, grid computing technology is not restricted to a set of specific grid open source or industrial products; rather, it comprises a set of capabilities virtually within any kind of software to create shared and highly collaborative production environments. These environments are focused on computational (workload) capabilities and the integration of information (data) into those computational capabilities. An active grid computing application field is the full virtualization of scientific instruments in order to increase their availability and decrease operational and maintenance costs. Computational and information grids make it possible to manage real-world objects in a service-oriented way using widely adopted industrial standards.

  3. Parallel Monte Carlo simulations on an ARC-enabled computing grid

    International Nuclear Information System (INIS)

    Nilsen, Jon K; Samset, Bjørn H

    2011-01-01

    Grid computing opens new possibilities for running heavy Monte Carlo simulations of physical systems in parallel. The presentation gives an overview of GaMPI, a system for running an MPI-based random walker simulation on grid resources. Integrating the ARC middleware and the new storage system Chelonia with the Ganga grid job submission and control system, we show that MPI jobs can be run on a world-wide computing grid with good performance and promising scaling properties. Results for relatively communication-heavy Monte Carlo simulations run on multiple heterogeneous, ARC-enabled computing clusters in several countries are presented.
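
    The record above describes an MPI-based random-walker simulation distributed over ARC-enabled grid clusters. The following minimal mpi4py sketch only illustrates that general pattern (independent walkers per rank, results combined with a reduction); it is not the GaMPI code, and the walker count, walk length and step distribution are invented for the example.

```python
# Minimal sketch of an MPI-parallel random-walk Monte Carlo estimate.
# Illustrative stand-in for the pattern in the record, not the GaMPI implementation.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_walkers_per_rank = 10_000          # assumed workload, split evenly across ranks
n_steps = 1_000                      # assumed walk length

rng = np.random.default_rng(seed=rank)                 # independent stream per rank
steps = rng.choice([-1, 1], size=(n_walkers_per_rank, n_steps))
local_mean_sq = np.mean(np.sum(steps, axis=1) ** 2)    # local mean squared displacement

# Combine the partial results on rank 0
total = comm.reduce(local_mean_sq, op=MPI.SUM, root=0)
if rank == 0:
    print("mean squared displacement ~", total / size)
```

    Run with, e.g., `mpirun -n 4 python walkers.py`; on a grid, each cluster would simply contribute more ranks to the same communicator.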

  4. High performance computing of density matrix renormalization group method for 2-dimensional model. Parallelization strategy toward peta computing

    International Nuclear Information System (INIS)

    Yamada, Susumu; Igarashi, Ryo; Machida, Masahiko; Imamura, Toshiyuki; Okumura, Masahiko; Onishi, Hiroaki

    2010-01-01

    We parallelize the density matrix renormalization group (DMRG) method, which is a ground-state solver for one-dimensional quantum lattice systems. The parallelization allows us to extend the applicable range of the DMRG to n-leg ladders, i.e., quasi-two-dimensional cases. Such an extension is expected to bring about breakthroughs in, e.g., quantum physics, chemistry, and nano-engineering. However, the straightforward parallelization requires all-to-all communications between all processes, which are unsuitable for multi-core systems, the mainstream of current parallel computers. Therefore, we optimize the all-to-all communications in the following two steps. The first is the elimination of the communications between all processes by rearranging the data distribution while keeping the amount of communicated data unchanged. The second is the avoidance of communication conflicts by rescheduling the calculation and the communication. We evaluate the performance of the DMRG method on multi-core supercomputers and confirm that our two-step tuning is quite effective. (author)
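
    The second tuning step above replaces a naive all-to-all exchange with a conflict-free communication schedule. The mpi4py sketch below is only a generic illustration of that idea, contrasting a collective Alltoall with a pairwise exchange schedule based on XOR pairing; it assumes a power-of-two rank count and is not the DMRG code from the paper.

```python
# Generic illustration of avoiding communication conflicts by scheduling pairwise
# exchanges, in the spirit of the paper's second tuning step (not its actual code).
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
assert size & (size - 1) == 0, "this toy schedule assumes a power-of-two rank count"

sendbuf = np.full(size, rank, dtype='i')   # one block destined for each rank

# Naive approach: a single collective all-to-all exchange.
recv_naive = np.empty(size, dtype='i')
comm.Alltoall(sendbuf, recv_naive)

# Scheduled approach: in step s, rank r exchanges only with rank r XOR s,
# so every rank talks to exactly one partner per step and no conflicts arise.
recv_sched = np.empty(size, dtype='i')
recv_sched[rank] = sendbuf[rank]
for step in range(1, size):
    partner = rank ^ step
    recv_sched[partner] = comm.sendrecv(int(sendbuf[partner]),
                                        dest=partner, source=partner)

assert np.array_equal(recv_naive, recv_sched)
```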

  5. Kids at CERN: Grids for Kids programme leads to advanced computing knowledge.

    CERN Multimedia

    2008-01-01

    Children as young as 10 learned computing skills, such as middleware, parallel processing and supercomputing, at CERN, the European Organisation for Nuclear Research, last week. The initiative for 10- to 12-year-olds is part of the Grids for Kids programme, which aims to introduce Grid computing as a tool for research.

  6. Proceedings of the High Performance Embedded Computing Workshop (HPEC 2006) (10th). Held in Lexington, Massachusetts on September 19-21, 2006 (CD-ROM)

    National Research Council Canada - National Science Library

    Kepner, Jeremy

    2007-01-01

    ...: 1 CD-ROM; 4 3/4 in.; 78.3 MB. ABSTRACT: The High-Performance Embedded Computing (HPEC) technical committee announced the tenth annual HPEC Workshop held in September 2006 at MIT Lincoln Laboratory in Lexington, MA...

  7. Grid computing in Pakistan: opening to Large Hadron Collider experiments

    International Nuclear Information System (INIS)

    Batool, N.; Osman, A.; Mahmood, A.; Rana, M.A.

    2009-01-01

    A grid computing facility was developed at the sister institutes Pakistan Institute of Nuclear Science and Technology (PINSTECH) and Pakistan Institute of Engineering and Applied Sciences (PIEAS), in collaboration with the Large Hadron Collider (LHC) Computing Grid, during the early years of the present decade. The grid facility PAKGRID-LCG2, one of the grid nodes in Pakistan, was developed mainly with local means and is capable of supporting local and international research and computational tasks in the domain of the LHC Computing Grid. The functional status of the facility is presented in terms of the number of jobs performed. The facility provides a forum for local researchers in the field of high energy physics to participate in the LHC experiments and related activities at the European particle physics research laboratory (CERN), one of the best physics laboratories in the world. It also provides a platform for an emerging computing technology (CT). (author)

  8. Exploring Subpixel Learning Algorithms for Estimating Global Land Cover Fractions from Satellite Data Using High Performance Computing

    Directory of Open Access Journals (Sweden)

    Uttam Kumar

    2017-10-01

    Full Text Available Land cover (LC) refers to the physical and biological cover present over the Earth’s surface in terms of the natural environment such as vegetation, water, bare soil, etc. Most LC features occur at finer spatial scales compared to the resolution of primary remote sensing satellites. Therefore, observed data are a mixture of spectral signatures of two or more LC features, resulting in mixed pixels. One solution to the mixed pixel problem is the use of subpixel learning algorithms to disintegrate the pixel spectrum into its constituent spectra. Despite the popularity and existing research conducted on the topic, the most appropriate approach is still under debate. As an attempt to address this question, we compared the performance of several subpixel learning algorithms based on least squares, sparse regression, signal–subspace and geometrical methods. Analysis of the results obtained through computer-simulated and Landsat data indicated that fully constrained least squares (FCLS) outperformed the other techniques. Further, FCLS was used to unmix global Web-Enabled Landsat Data to obtain abundances of substrate (S), vegetation (V) and dark object (D) classes. Due to the sheer volume of data and the computational needs, we leveraged the NASA Earth Exchange (NEX) high-performance computing architecture to optimize and scale our algorithm for large-scale processing. Subsequently, the S-V-D abundance maps were characterized into four classes, namely forest, farmland, water and urban areas (in conjunction with nighttime lights data) over California, USA, using a random forest classifier. Validation of these LC maps with the National Land Cover Database 2011 products and the North American Forest Dynamics static forest map shows a 6% improvement in unmixing-based classification relative to per-pixel classification. As such, abundance maps continue to offer a useful alternative to high-spatial-resolution classified maps for forest inventory analysis, multi
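
    Fully constrained least squares unmixing, which the study above found to perform best, solves a least-squares problem under non-negativity and sum-to-one constraints on the abundances. A common way to approximate this, sketched below with made-up endmember spectra rather than the study's NEX pipeline, is to fold the sum-to-one constraint into a heavily weighted extra equation and solve with non-negative least squares.

```python
# Hedged sketch of fully constrained least squares (FCLS) spectral unmixing:
# minimize ||E a - x||  subject to  a >= 0  and  sum(a) = 1 (enforced softly).
import numpy as np
from scipy.optimize import nnls

def fcls_unmix(endmembers, pixel, delta=1e3):
    """endmembers: (bands, n_classes) matrix; pixel: (bands,) observed spectrum."""
    bands, n_classes = endmembers.shape
    A = np.vstack([endmembers, delta * np.ones((1, n_classes))])  # weighted sum-to-one row
    b = np.concatenate([pixel, [delta]])
    abundances, _ = nnls(A, b)
    return abundances

# Toy example with three invented endmembers (substrate, vegetation, dark object)
E = np.array([[0.4, 0.1, 0.02],
              [0.5, 0.6, 0.03],
              [0.6, 0.3, 0.01],
              [0.7, 0.8, 0.02]])
mixed_pixel = 0.5 * E[:, 0] + 0.3 * E[:, 1] + 0.2 * E[:, 2]
print(np.round(fcls_unmix(E, mixed_pixel), 3))   # approximately [0.5, 0.3, 0.2]
```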

  9. CDF GlideinWMS usage in Grid computing of high energy physics

    International Nuclear Information System (INIS)

    Zvada, Marian; Sfiligoi, Igor; Benjamin, Doug

    2010-01-01

    Many members of large science collaborations already have specialized grids available to advance their research, but the need for more computing resources for data analysis has forced the Collider Detector at Fermilab (CDF) collaboration to move beyond the usage of dedicated resources and start exploiting Grid resources. Nowadays, the CDF experiment relies increasingly on glidein-based computing pools for data reconstruction, Monte Carlo production and user data analysis, serving over 400 users through the Central Analysis Farm (CAF) middleware on top of the Condor batch system and the CDF Grid infrastructure. Condor has a distributed architecture, and its glidein mechanism of pilot jobs is ideal for abstracting Grid computing by creating a virtual private computing pool. We present the first production use of the generic pilot-based Workload Management System (glideinWMS), which is an implementation of the pilot mechanism based on the Condor distributed infrastructure. CDF Grid computing uses glideinWMS for its data reconstruction on the FNAL campus Grid, and for user analysis and Monte Carlo production across the Open Science Grid (OSG). We review this computing model and the setup used, including the CDF-specific configuration within the glideinWMS system, which provides powerful scalability and makes Grid computing behave like a local batch environment, with the ability to handle more than 10,000 running jobs at a time.
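
    The glidein mechanism described above is a pilot-job pattern: placeholder jobs start on grid worker nodes and then pull real user jobs from a central queue, making heterogeneous resources look like one local batch pool. The toy sketch below illustrates only that late-binding idea with Python threads and a shared queue; it does not use the real Condor or glideinWMS APIs, and the site and job names are invented.

```python
# Toy illustration of the pilot-job (glidein) idea: pilots pull user jobs from
# a central queue, so the grid behaves like one virtual private batch pool.
# Conceptual only; no Condor/glideinWMS interfaces are involved.
import queue
import threading

user_jobs = queue.Queue()
for job_id in range(20):                       # pretend user-submitted jobs
    user_jobs.put(f"analysis-job-{job_id}")

def pilot(site_name):
    """A 'pilot' that has landed on some grid site and pulls work until idle."""
    while True:
        try:
            job = user_jobs.get_nowait()
        except queue.Empty:
            return                             # no more work: the pilot terminates
        print(f"[{site_name}] running {job}")
        user_jobs.task_done()

pilots = [threading.Thread(target=pilot, args=(f"site-{i}",)) for i in range(4)]
for p in pilots:
    p.start()
for p in pilots:
    p.join()
```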

  10. Sort-Mid tasks scheduling algorithm in grid computing

    Directory of Open Access Journals (Sweden)

    Naglaa M. Reda

    2015-11-01

    Full Text Available Scheduling tasks on heterogeneous resources distributed over a grid computing system is an NP-complete problem. The main aim of several researchers has been to develop variant scheduling algorithms for achieving optimality, and these have shown good performance in task scheduling with respect to resource selection. However, making full use of the power of resources is still a challenge. In this paper, a new heuristic algorithm called Sort-Mid is proposed. It aims to maximize utilization and minimize the makespan. The strategy of the Sort-Mid algorithm is to find appropriate resources. The base step is to get the average value via the sorted list of completion times of each task. Then, the maximum average is obtained. Finally, the task with the maximum average is allocated to the machine that has the minimum completion time. The allocated task is deleted and these steps are repeated until all tasks are allocated. Experimental tests show that the proposed algorithm outperforms most other algorithms in terms of resource utilization and makespan.
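
    One plausible reading of the Sort-Mid description above is: compute an average of each unassigned task's completion times across resources, pick the task with the largest average, assign it to the machine where it would finish earliest, update that machine's ready time, and repeat. The sketch below follows that reading only; it is not taken from the paper, and the expected-time matrix is invented for illustration.

```python
# Hedged sketch of the Sort-Mid idea as described in the abstract: repeatedly take
# the task with the largest average completion time and map it to the machine where
# it would finish earliest. The ETC matrix below is made up.
import numpy as np

etc = np.array([[14.0,  9.0, 20.0],    # etc[task, machine] = execution time
                [ 7.0, 16.0,  5.0],
                [11.0, 12.0,  8.0],
                [ 6.0,  4.0, 10.0]])

ready = np.zeros(etc.shape[1])          # time at which each machine becomes free
unassigned = set(range(etc.shape[0]))
schedule = {}

while unassigned:
    # completion time = machine ready time + execution time on that machine
    completion = {t: ready + etc[t] for t in unassigned}
    # pick the task whose average completion time is largest
    task = max(unassigned, key=lambda t: completion[t].mean())
    machine = int(np.argmin(completion[task]))
    ready[machine] = completion[task][machine]
    schedule[task] = machine
    unassigned.remove(task)

print("assignment (task -> machine):", schedule)
print("makespan:", ready.max())
```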

  11. Sort-Mid tasks scheduling algorithm in grid computing.

    Science.gov (United States)

    Reda, Naglaa M; Tawfik, A; Marzok, Mohamed A; Khamis, Soheir M

    2015-11-01

    Scheduling tasks on heterogeneous resources distributed over a grid computing system is an NP-complete problem. The main aim of several researchers has been to develop variant scheduling algorithms for achieving optimality, and these have shown good performance in task scheduling with respect to resource selection. However, making full use of the power of resources is still a challenge. In this paper, a new heuristic algorithm called Sort-Mid is proposed. It aims to maximize utilization and minimize the makespan. The strategy of the Sort-Mid algorithm is to find appropriate resources. The base step is to get the average value via the sorted list of completion times of each task. Then, the maximum average is obtained. Finally, the task with the maximum average is allocated to the machine that has the minimum completion time. The allocated task is deleted and these steps are repeated until all tasks are allocated. Experimental tests show that the proposed algorithm outperforms most other algorithms in terms of resource utilization and makespan.

  12. Campus Grids: Bringing Additional Computational Resources to HEP Researchers

    International Nuclear Information System (INIS)

    Weitzel, Derek; Fraser, Dan; Bockelman, Brian; Swanson, David

    2012-01-01

    It is common at research institutions to maintain multiple clusters that represent different owners or generations of hardware, or that fulfill different needs and policies. Many of these clusters are consistently underutilized, while researchers on campus could greatly benefit from these unused capabilities. By leveraging principles from the Open Science Grid it is now possible to utilize these resources by forming a lightweight campus grid. The campus grid framework enables jobs submitted to one cluster to overflow, when necessary, to other clusters within the campus using whatever authentication mechanisms are available on campus. This framework is currently being used on several campuses to run HEP and other science jobs. Further, the framework has in some cases been expanded beyond the campus boundary by bridging campus grids into a regional grid, and can even be used to integrate resources from a national cyberinfrastructure such as the Open Science Grid. This paper highlights 18 months of operational experience creating campus grids in the US, and the different campus configurations that have successfully utilized the campus grid infrastructure.

  13. Porting of Scientific Applications to Grid Computing on GridWay

    Directory of Open Access Journals (Sweden)

    J. Herrera

    2005-01-01

    Full Text Available The expansion and adoption of Grid technologies is hindered by the lack of a standard programming paradigm to port existing applications among different environments. The Distributed Resource Management Application API (DRMAA) has been proposed to aid the rapid development and distribution of these applications across different Distributed Resource Management Systems. In this paper we describe an implementation of the DRMAA standard on a Globus-based testbed, and show its suitability for expressing typical scientific applications, such as High-Throughput and Master-Worker applications. The DRMAA routines are supported by the functionality offered by the GridWay framework, which provides the runtime mechanisms needed for transparently executing jobs on a dynamic Grid environment based on Globus. As case studies, we consider the implementation with DRMAA of a bioinformatics application, a genetic algorithm and the NAS Grid Benchmarks.
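
    DRMAA standardizes the submit/monitor/wait lifecycle so the same code can drive different resource managers. The fragment below is a minimal, hedged sketch using the drmaa Python bindings against whatever DRM backend is configured locally; the executable and arguments are placeholders, and it does not reproduce the GridWay/Globus setup of the paper.

```python
# Minimal DRMAA-style job submission sketch (placeholder command); requires the
# 'drmaa' Python bindings and a configured DRM system underneath.
import drmaa

session = drmaa.Session()
session.initialize()

jt = session.createJobTemplate()
jt.remoteCommand = "/bin/echo"               # placeholder executable
jt.args = ["hello from the grid"]
jt.joinFiles = True                          # merge stdout and stderr

job_id = session.runJob(jt)
print("submitted job:", job_id)

# Block until the job finishes and report its exit status
info = session.wait(job_id, drmaa.Session.TIMEOUT_WAIT_FOREVER)
print("job", info.jobId, "finished with exit status", info.exitStatus)

session.deleteJobTemplate(jt)
session.exit()
```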

  14. Complex three dimensional modelling of porous media using high performance computing and multi-scale incompressible approach

    Science.gov (United States)

    Martin, R.; Orgogozo, L.; Noiriel, C. N.; Guibert, R.; Golfier, F.; Debenest, G.; Quintard, M.

    2013-05-01

    In the context of biofilm growth in porous media, we developed high performance computing tools to study the impact of biofilms on fluid transport through the pores of a solid matrix. Biofilms are consortia of micro-organisms that develop within polymeric extracellular substances and are generally located at fluid-solid interfaces, such as pore interfaces in a water-saturated porous medium. Several applications of biofilms in porous media are encountered, for instance in bio-remediation methods that allow the dissolution of organic pollutants. Many theoretical studies have been done on the resulting effective properties of these modified media ([1], [2], [3]), but the bio-colonized porous media under consideration are mainly described by simplified theoretical media (stratified media, cubic networks of spheres, ...). However, recent experimental advances have provided tomography images of bio-colonized porous media, which allow us to observe realistic biofilm micro-structures inside the porous media [4]. To solve the closure systems of equations related to upscaling procedures in realistic porous media, we compute the velocity field of fluids through pores of complex geometries described by a huge number of cells (up to billions). Calculations are made on a realistic 3D sample geometry obtained by X-ray micro-tomography. The cell volumes come from a percolation experiment performed to estimate the impact of precipitation processes on the fluid transport properties of porous media [5]. Average permeabilities of the sample are obtained from the velocities by using MPI-based high performance computing on up to 1000 processors. The steady-state Stokes equations are solved using a finite volume approach. Relaxation pre-conditioning is introduced to accelerate the code further. Good weak and strong scaling is reached, with results obtained in hours instead of weeks. Acceleration factors of 20 to 40 can be reached. Tens of geometries can now be
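
    Once the pore-scale Stokes velocity field is known, an average permeability of the sample can be backed out from Darcy's law, k = mu * <v> * L / dp. The few lines below merely evaluate that relation on made-up numbers to show the upscaling step; they are not part of the finite-volume solver described above.

```python
# Back out an average permeability from Darcy's law, k = mu * <v> * L / dp,
# using invented values for the averaged Stokes velocity and pressure drop.
mu = 1.0e-3              # dynamic viscosity of water, Pa*s
mean_velocity = 2.5e-5   # volume-averaged (Darcy) velocity, m/s   (assumed)
sample_length = 5.0e-3   # sample length along the flow direction, m (assumed)
pressure_drop = 1.0e2    # applied pressure difference, Pa          (assumed)

permeability = mu * mean_velocity * sample_length / pressure_drop
print(f"estimated permeability: {permeability:.3e} m^2")   # ~1.25e-12 m^2
```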

  15. Computational Approach for Securing Radiology-Diagnostic Data in Connected Health Network using High-Performance GPU-Accelerated AES.

    Science.gov (United States)

    Adeshina, A M; Hashim, R

    2017-03-01

    Diagnostic radiology is a core and integral part of modern medicine, paving the way for primary care physicians in disease diagnosis, treatment and therapy management. Recent standard healthcare procedures have benefited immensely from contemporary information technology, which has revolutionized the acquisition, storage and sharing of diagnostic data for efficient and timely diagnosis of diseases. The connected health network was introduced as an alternative to the ageing traditional concept of the healthcare system, improving hospital-physician connectivity and clinical collaboration. Undoubtedly, this modern approach to medicine has drastically improved healthcare, but at the expense of high computational cost and possible breaches of diagnostic privacy. Consequently, a number of cryptographic techniques have recently been applied to clinical applications, but the challenge of successfully encrypting both image and textual data persists. Furthermore, keeping the encryption-decryption time of medical datasets within a considerably lower computational cost, without jeopardizing the required security strength of the encryption algorithm, remains an outstanding issue. This study proposes a secured radiology-diagnostic data framework for the connected health network using a high-performance GPU-accelerated Advanced Encryption Standard. The study was evaluated with radiology image datasets consisting of brain MR and CT datasets obtained from the Department of Surgery, University of North Carolina, USA, and the Swedish National Infrastructure for Computing. Sample patients' notes from the University of North Carolina School of Medicine at Chapel Hill were also used to evaluate the framework for its strength in encrypting-decrypting textual data in the form of medical reports. Significantly, the framework is not only able to accurately encrypt and decrypt medical image datasets, but it also
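
    The framework above is built on the Advanced Encryption Standard. The CPU-based sketch below, using the PyCryptodome package and AES-GCM on an arbitrary byte payload, only illustrates the encrypt/decrypt round trip such a pipeline performs; it includes neither the GPU acceleration nor the medical-image handling of the actual study.

```python
# Plain CPU illustration of an AES round trip on binary data (e.g., image bytes).
# Uses PyCryptodome's AES-GCM; the study's GPU-accelerated pipeline is not shown.
from Crypto.Cipher import AES
from Crypto.Random import get_random_bytes

payload = b"\x00\x7f" * 1024            # stand-in for serialized image/report data
key = get_random_bytes(32)              # 256-bit key

# Encrypt: GCM provides both confidentiality and an integrity tag
enc = AES.new(key, AES.MODE_GCM)
ciphertext, tag = enc.encrypt_and_digest(payload)

# Decrypt and verify integrity with the stored nonce and tag
dec = AES.new(key, AES.MODE_GCM, nonce=enc.nonce)
recovered = dec.decrypt_and_verify(ciphertext, tag)
assert recovered == payload
print("round trip OK,", len(ciphertext), "bytes of ciphertext")
```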

  16. Computational Needs for the Next Generation Electric Grid Proceedings

    Energy Technology Data Exchange (ETDEWEB)

    Birman, Kenneth; Ganesh, Lakshmi; Renesse, Robbert van; Ferris, Michael; Hofmann, Andreas; Williams, Brian; Sztipanovits, Janos; Hemingway, Graham; Bose, Anjan; Srivastava, Anurag; Grijalva, Santiago; Ryan, Sarah M.; McCalley, James D.; Woodruff, David L.; Xiong, Jinjun; Acar, Emrah; Agrawal, Bhavna; Conn, Andrew R.; Ditlow, Gary; Feldmann, Peter; Finkler, Ulrich; Gaucher, Brian; Gupta, Anshul; Heng, Fook-Luen; Kalagnanam, Jayant R; Koc, Ali; Kung, David; Phan, Dung; Singhee, Amith; Smith, Basil

    2011-10-05

    The April 2011 DOE workshop, 'Computational Needs for the Next Generation Electric Grid', was the culmination of a year-long process to bring together some of the Nation's leading researchers and experts to identify computational challenges associated with the operation and planning of the electric power system. The attached papers provide a journey into these experts' insights, highlighting a class of mathematical and computational problems relevant for potential power systems research. While each paper defines a specific problem area, there were several recurrent themes. First, the breadth and depth of power system data has expanded tremendously over the past decade. This provides the potential for new control approaches and operator tools that can enhance system efficiencies and improve reliability. However, the large volume of data poses its own challenges, and could benefit from application of advances in computer networking and architecture, as well as database structures. Second, the computational complexity of the underlying system problems is growing. Transmitting electricity from clean, domestic energy resources in remote regions to urban consumers, for example, requires broader, regional planning over multi-decade time horizons. Yet, it may also mean operational focus on local solutions and shorter timescales, as reactive power and system dynamics (including fast switching and controls) play an increasingly critical role in achieving stability and ultimately reliability. The expected growth in reliance on variable renewable sources of electricity generation places an exclamation point on both of these observations, and highlights the need for new focus in areas such as stochastic optimization to accommodate the increased uncertainty that is occurring in both planning and operations. Application of research advances in algorithms (especially related to optimization techniques and uncertainty quantification) could accelerate power

  17. Adoption of High Performance Computational (HPC) Modeling Software for Widespread Use in the Manufacture of Welded Structures

    Energy Technology Data Exchange (ETDEWEB)

    Brust, Frederick W. [Engineering Mechanics Corporation of Columbus, Columbus, OH (United States); Punch, Edward F. [Engineering Mechanics Corporation of Columbus, Columbus, OH (United States); Twombly, Elizabeth Kurth [Engineering Mechanics Corporation of Columbus, Columbus, OH (United States); Kalyanam, Suresh [Engineering Mechanics Corporation of Columbus, Columbus, OH (United States); Kennedy, James [Engineering Mechanics Corporation of Columbus, Columbus, OH (United States); Hattery, Garty R. [Engineering Mechanics Corporation of Columbus, Columbus, OH (United States); Dodds, Robert H. [Professional Consulting Services, Inc., Lisle, IL (United States); Mach, Justin C [Caterpillar, Peoria, IL (United States); Chalker, Alan [Ohio Supercomputer Center (OSC), Columbus, OH (United States); Nicklas, Jeremy [Ohio Supercomputer Center (OSC), Columbus, OH (United States); Gohar, Basil M [Ohio Supercomputer Center (OSC), Columbus, OH (United States); Hudak, David [Ohio Supercomputer Center (OSC), Columbus, OH (United States)

    2016-12-30

    This report summarizes the final product developed for the US DOE Small Business Innovation Research (SBIR) Phase II grant made to Engineering Mechanics Corporation of Columbus (Emc2) between April 16, 2014 and August 31, 2016 titled ‘Adoption of High Performance Computational (HPC) Modeling Software for Widespread Use in the Manufacture of Welded Structures’. Many US companies have moved fabrication and production facilities off shore because of cheaper labor costs. A key aspect in bringing these jobs back to the US is the use of technology to render US-made fabrications more cost-efficient overall with higher quality. One significant advantage that has emerged in the US over the last two decades is the use of virtual design for fabrication of small and large structures in weld fabrication industries. Industries that use virtual design and analysis tools have reduced material part size, developed environmentally-friendly fabrication processes, improved product quality and performance, and reduced manufacturing costs. Indeed, Caterpillar Inc. (CAT), one of the partners in this effort, continues to have a large fabrication presence in the US because of the use of weld fabrication modeling to optimize fabrications by controlling weld residual stresses and distortions and improving fatigue, corrosion, and fracture performance. This report describes Emc2’s DOE SBIR Phase II final results to extend an existing, state-of-the-art software code, Virtual Fabrication Technology (VFT®), currently used to design and model large welded structures prior to fabrication - to a broader range of products with widespread applications for small and medium-sized enterprises (SMEs). VFT® helps control distortion, can minimize and/or control residual stresses, control welding microstructure, and pre-determine welding parameters such as weld-sequencing, pre-bending, thermal-tensioning, etc. VFT® uses material properties, consumable properties, etc. as inputs

  18. Demand side management scheme in smart grid with cloud computing approach using stochastic dynamic programming

    Directory of Open Access Journals (Sweden)

    S. Sofana Reka

    2016-09-01

    Full Text Available This paper proposes a cloud computing framework in a smart grid environment, creating a small integrated energy hub that supports real-time computing for handling large volumes of data. A stochastic programming model is developed within the cloud computing scheme for effective demand side management (DSM) in the smart grid. Simulation results are obtained using a GUI interface and the Gurobi optimizer in MATLAB, showing a reduction in electricity demand through the creation of energy networks in a smart hub approach.
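
    As a deterministic toy version of the demand-side-management scheduling idea (not the stochastic programme or the MATLAB/Gurobi setup of the paper), the snippet below shifts a fixed amount of flexible energy to the cheapest hours with a small linear programme in SciPy; the prices, energy total and power cap are invented.

```python
# Toy demand-side-management schedule: place flexible energy in the cheapest hours,
# subject to a per-hour power cap. Deterministic LP, invented data.
import numpy as np
from scipy.optimize import linprog

prices = np.array([0.30, 0.25, 0.15, 0.10, 0.12, 0.28])   # $/kWh in each hour (assumed)
total_energy = 10.0                                        # kWh that must be scheduled
max_per_hour = 3.0                                         # kWh cap per hour

# minimize sum(price[t] * x[t])  s.t.  sum(x) = total_energy,  0 <= x[t] <= cap
res = linprog(c=prices,
              A_eq=np.ones((1, len(prices))), b_eq=[total_energy],
              bounds=[(0.0, max_per_hour)] * len(prices),
              method="highs")

print("hourly schedule (kWh):", np.round(res.x, 2))
print("energy cost ($):", round(res.fun, 2))
```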

  19. Greedy and metaheuristics for the offline scheduling problem in grid computing

    DEFF Research Database (Denmark)

    Gamst, Mette

    In grid computing, a number of geographically distributed resources connected through a wide area network are utilized as one computational unit. The NP-hard offline scheduling problem in grid computing consists of assigning jobs to resources in advance. In this paper, five greedy heuristics and two.... All heuristics solve instances with up to 2000 jobs and 1000 resources, thus the results are useful both with respect to running times and to solution values.

  20. High performance systems

    Energy Technology Data Exchange (ETDEWEB)

    Vigil, M.B. [comp.]

    1995-03-01

    This document provides a written compilation of the presentations and viewgraphs from the 1994 Conference on High Speed Computing, "High Performance Systems," held at Gleneden Beach, Oregon, on April 18 through 21, 1994.