WorldWideScience

Sample records for accelerating scientific computations

  1. Automated and Assistive Tools for Accelerated Code migration of Scientific Computing on to Heterogeneous MultiCore Systems

    Science.gov (United States)

    2017-04-13

    AFRL-AFOSR-UK-TR-2017-0029: Automated and Assistive Tools for Accelerated Code Migration of Scientific Computing on to Heterogeneous MultiCore Systems (dates covered: 2012 - 01/25/2015).

  2. Smolyak's algorithm: A powerful black box for the acceleration of scientific computations

    KAUST Repository

    Tempone, Raul; Wolfers, Soeren

    2017-01-01

    We provide a general discussion of Smolyak's algorithm for the acceleration of scientific computations. The algorithm first appeared in Smolyak's work on multidimensional integration and interpolation. Since then, it has been generalized in multiple directions and has been associated with the keywords: sparse grids, hyperbolic cross approximation, combination technique, and multilevel methods. Variants of Smolyak's algorithm have been employed in the computation of high-dimensional integrals in finance, chemistry, and physics, in the numerical solution of partial and stochastic differential equations, and in uncertainty quantification. Motivated by this broad and ever-increasing range of applications, we describe a general framework that summarizes fundamental results and assumptions in a concise application-independent manner.
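
    As a concrete illustration of the combination-technique form of Smolyak's algorithm mentioned in the abstract, the sketch below builds a sparse-grid quadrature in Python from one-dimensional trapezoidal rules. The univariate rule, the domain [0,1]^d, and the test integrand are assumptions made for the example, not details taken from the paper.

```python
import numpy as np
from itertools import product
from math import comb

def nodes_weights(level):
    """Trapezoidal rule on [0, 1] with 2**level + 1 points (an assumed univariate rule)."""
    n = 2**level + 1
    x = np.linspace(0.0, 1.0, n)
    w = np.full(n, 1.0 / (n - 1))
    w[0] *= 0.5
    w[-1] *= 0.5
    return x, w

def tensor_quad(f, levels):
    """Full tensor-product quadrature at the given per-dimension levels."""
    xs, ws = zip(*(nodes_weights(l) for l in levels))
    grids = np.meshgrid(*xs, indexing="ij")
    weights = np.ones(())
    for axis, w in enumerate(ws):
        shape = [1] * len(ws)
        shape[axis] = -1
        weights = weights * w.reshape(shape)   # outer product of 1D weights
    return float(np.sum(weights * f(*grids)))

def smolyak_quad(f, d, L):
    """Combination technique: signed sum of tensor rules whose total level is near L."""
    total = 0.0
    for levels in product(range(L + 1), repeat=d):
        s = sum(levels)
        if L - d + 1 <= s <= L:
            total += (-1) ** (L - s) * comb(d - 1, L - s) * tensor_quad(f, levels)
    return total

if __name__ == "__main__":
    f = lambda x, y: np.exp(x + y)            # exact integral over [0,1]^2: (e - 1)^2
    exact = (np.e - 1.0) ** 2
    for L in range(1, 7):
        approx = smolyak_quad(f, d=2, L=L)
        print(L, approx, abs(approx - exact))
```

    Because only tensor grids with small total level enter the sum, the number of evaluation points grows far more slowly with dimension than for a full tensor grid, which is the source of the acceleration the paper formalizes.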

  3. Smolyak's algorithm: A powerful black box for the acceleration of scientific computations

    KAUST Repository

    Tempone, Raul

    2017-03-26

    We provide a general discussion of Smolyak's algorithm for the acceleration of scientific computations. The algorithm first appeared in Smolyak's work on multidimensional integration and interpolation. Since then, it has been generalized in multiple directions and has been associated with the keywords: sparse grids, hyperbolic cross approximation, combination technique, and multilevel methods. Variants of Smolyak's algorithm have been employed in the computation of high-dimensional integrals in finance, chemistry, and physics, in the numerical solution of partial and stochastic differential equations, and in uncertainty quantification. Motivated by this broad and ever-increasing range of applications, we describe a general framework that summarizes fundamental results and assumptions in a concise application-independent manner.

  4. XVIS: Visualization for the Extreme-Scale Scientific-Computation Ecosystem Final Scientific/Technical Report

    Energy Technology Data Exchange (ETDEWEB)

    Geveci, Berk [Kitware, Inc., Clifton Park, NY (United States); Maynard, Robert [Kitware, Inc., Clifton Park, NY (United States)

    2017-10-27

    The XVis project brings together the key elements of research to enable scientific discovery at extreme scale. Scientific computing will no longer be purely about how fast computations can be performed. Energy constraints, processor changes, and I/O limitations necessitate significant changes in both the software applications used in scientific computation and the ways in which scientists use them. Components for modeling, simulation, analysis, and visualization must work together in a computational ecosystem, rather than working independently as they have in the past. The XVis project brought together collaborators from the predominant DOE projects for visualization on accelerators and combined their respective features into a new visualization toolkit called VTK-m.

  5. Community petascale project for accelerator science and simulation: Advancing computational science for future accelerators and accelerator technologies

    International Nuclear Information System (INIS)

    Spentzouris, P.; Cary, J.; McInnes, L.C.; Mori, W.; Ng, C.; Ng, E.; Ryne, R.

    2008-01-01

    The design and performance optimization of particle accelerators are essential for the success of the DOE scientific program in the next decade. Particle accelerators are very complex systems whose accurate description involves a large number of degrees of freedom and requires the inclusion of many physics processes. Building on the success of the SciDAC-1 Accelerator Science and Technology project, the SciDAC-2 Community Petascale Project for Accelerator Science and Simulation (ComPASS) is developing a comprehensive set of interoperable components for beam dynamics, electromagnetics, electron cooling, and laser/plasma acceleration modelling. ComPASS is providing accelerator scientists the tools required to enable the necessary accelerator simulation paradigm shift from high-fidelity single physics process modeling (covered under SciDAC1) to high-fidelity multiphysics modeling. Our computational frameworks have been used to model the behavior of a large number of accelerators and accelerator R and D experiments, assisting both their design and performance optimization. As parallel computational applications, the ComPASS codes have been shown to make effective use of thousands of processors.

  6. Scientific Discovery through Advanced Computing in Plasma Science

    Science.gov (United States)

    Tang, William

    2005-03-01

    Advanced computing is generally recognized to be an increasingly vital tool for accelerating progress in scientific research during the 21st Century. For example, the Department of Energy's ``Scientific Discovery through Advanced Computing'' (SciDAC) Program was motivated in large measure by the fact that formidable scientific challenges in its research portfolio could best be addressed by utilizing the combination of the rapid advances in super-computing technology together with the emergence of effective new algorithms and computational methodologies. The imperative is to translate such progress into corresponding increases in the performance of the scientific codes used to model complex physical systems such as those encountered in high temperature plasma research. If properly validated against experimental measurements and analytic benchmarks, these codes can provide reliable predictive capability for the behavior of a broad range of complex natural and engineered systems. This talk reviews recent progress and future directions for advanced simulations with some illustrative examples taken from the plasma science applications area. Significant recent progress has been made in both particle and fluid simulations of fine-scale turbulence and large-scale dynamics, giving increasingly good agreement between experimental observations and computational modeling. This was made possible by the combination of access to powerful new computational resources together with innovative advances in analytic and computational methods for developing reduced descriptions of physics phenomena spanning a huge range in time and space scales. In particular, the plasma science community has made excellent progress in developing advanced codes for which computer run-time and problem size scale well with the number of processors on massively parallel machines (MPP's). A good example is the effective usage of the full power of multi-teraflop (multi-trillion floating point computations

  7. Community Petascale Project for Accelerator Science and Simulation: Advancing Computational Science for Future Accelerators and Accelerator Technologies

    Energy Technology Data Exchange (ETDEWEB)

    Spentzouris, P.; /Fermilab; Cary, J.; /Tech-X, Boulder; McInnes, L.C.; /Argonne; Mori, W.; /UCLA; Ng, C.; /SLAC; Ng, E.; Ryne, R.; /LBL, Berkeley

    2011-11-14

    The design and performance optimization of particle accelerators are essential for the success of the DOE scientific program in the next decade. Particle accelerators are very complex systems whose accurate description involves a large number of degrees of freedom and requires the inclusion of many physics processes. Building on the success of the SciDAC-1 Accelerator Science and Technology project, the SciDAC-2 Community Petascale Project for Accelerator Science and Simulation (ComPASS) is developing a comprehensive set of interoperable components for beam dynamics, electromagnetics, electron cooling, and laser/plasma acceleration modelling. ComPASS is providing accelerator scientists the tools required to enable the necessary accelerator simulation paradigm shift from high-fidelity single physics process modeling (covered under SciDAC1) to high-fidelity multiphysics modeling. Our computational frameworks have been used to model the behavior of a large number of accelerators and accelerator R&D experiments, assisting both their design and performance optimization. As parallel computational applications, the ComPASS codes have been shown to make effective use of thousands of processors. ComPASS is in the first year of executing its plan to develop the next-generation HPC accelerator modeling tools. ComPASS aims to develop an integrated simulation environment that will utilize existing and new accelerator physics modules with petascale capabilities, by employing modern computing and solver technologies. The ComPASS vision is to deliver to accelerator scientists a virtual accelerator and virtual prototyping modeling environment, with the necessary multiphysics, multiscale capabilities. The plan for this development includes delivering accelerator modeling applications appropriate for each stage of the ComPASS software evolution. Such applications are already being used to address challenging problems in accelerator design and optimization. The ComPASS organization

  8. The impact of new computer technology on accelerator control

    International Nuclear Information System (INIS)

    Theil, E.; Jacobson, V.; Paxson, V.

    1987-01-01

    This paper describes some recent developments in computing and stresses their application to accelerator control systems. Among the advances that promise to have a significant impact are: i) low cost scientific workstations; ii) the use of "windows", pointing devices and menus in a multitasking operating system; iii) high resolution large-screen graphics monitors; iv) new kinds of high bandwidth local area networks. The relevant features are related to a general accelerator control system. For example, the authors examine the implications of a computing environment which permits and encourages graphical manipulation of system components, rather than traditional access through the writing of programs or "canned" access via touch panels.

  9. The impact of new computer technology on accelerator control

    International Nuclear Information System (INIS)

    Theil, E.; Jacobson, V.; Paxson, V.

    1987-04-01

    This paper describes some recent developments in computing and stresses their application in accelerator control systems. Among the advances that promise to have a significant impact are (1) low cost scientific workstations; (2) the use of "windows", pointing devices and menus in a multi-tasking operating system; (3) high resolution large-screen graphics monitors; (4) new kinds of high bandwidth local area networks. The relevant features are related to a general accelerator control system. For example, this paper examines the implications of a computing environment which permits and encourages graphical manipulation of system components, rather than traditional access through the writing of programs or "canned" access via touch panels.

  10. Data management, code deployment, and scientific visualization to enhance scientific discovery in fusion research through advanced computing

    International Nuclear Information System (INIS)

    Schissel, D.P.; Finkelstein, A.; Foster, I.T.; Fredian, T.W.; Greenwald, M.J.; Hansen, C.D.; Johnson, C.R.; Keahey, K.; Klasky, S.A.; Li, K.; McCune, D.C.; Peng, Q.; Stevens, R.; Thompson, M.R.

    2002-01-01

    The long-term vision of the Fusion Collaboratory described in this paper is to transform fusion research and accelerate scientific understanding and innovation so as to revolutionize the design of a fusion energy source. The Collaboratory will create and deploy collaborative software tools that will enable more efficient utilization of existing experimental facilities and more effective integration of experiment, theory, and modeling. The computer science research necessary to create the Collaboratory is centered on three activities: security, remote and distributed computing, and scientific visualization. It is anticipated that the presently envisioned Fusion Collaboratory software tools will require 3 years to complete

  11. Accelerating the scientific exploration process with scientific workflows

    International Nuclear Information System (INIS)

    Altintas, Ilkay; Barney, Oscar; Cheng, Zhengang; Critchlow, Terence; Ludaescher, Bertram; Parker, Steve; Shoshani, Arie; Vouk, Mladen

    2006-01-01

    Although an increasing amount of middleware has emerged in the last few years to achieve remote data access, distributed job execution, and data management, orchestrating these technologies with minimal overhead still remains a difficult task for scientists. Scientific workflow systems improve this situation by creating interfaces to a variety of technologies and automating the execution and monitoring of the workflows. Workflow systems provide domain-independent customizable interfaces and tools that combine different tools and technologies along with efficient methods for using them. As simulations and experiments move into the petascale regime, the orchestration of long running data and compute intensive tasks is becoming a major requirement for the successful steering and completion of scientific investigations. A scientific workflow is the process of combining data and processes into a configurable, structured set of steps that implement semi-automated computational solutions of a scientific problem. Kepler is a cross-project collaboration, co-founded by the SciDAC Scientific Data Management (SDM) Center, whose purpose is to develop a domain-independent scientific workflow system. It provides a workflow environment in which scientists design and execute scientific workflows by specifying the desired sequence of computational actions and the appropriate data flow, including required data transformations, between these steps. Currently deployed workflows range from local analytical pipelines to distributed, high-performance and high-throughput applications, which can be both data- and compute-intensive. The scientific workflow approach offers a number of advantages over traditional scripting-based approaches, including ease of configuration, improved reusability and maintenance of workflows and components (called actors), automated provenance management, 'smart' re-running of different versions of workflow instances, on-the-fly updateable parameters, monitoring
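
    The abstract above describes a scientific workflow as a configurable, structured set of steps with explicit data flow and recorded provenance. The minimal Python sketch below illustrates that idea generically; it is not Kepler code, and the step names and provenance format are invented for the example.

```python
from dataclasses import dataclass, field
from typing import Any, Callable

@dataclass
class Step:
    name: str
    run: Callable[[Any], Any]          # one computational action ("actor")

@dataclass
class Workflow:
    steps: list = field(default_factory=list)
    provenance: list = field(default_factory=list)   # what ran, with inputs and outputs

    def add(self, name, func):
        self.steps.append(Step(name, func))
        return self

    def execute(self, data):
        # Run the steps in sequence, passing each output to the next step
        # and keeping a simple provenance record along the way.
        for step in self.steps:
            result = step.run(data)
            self.provenance.append((step.name, repr(data)[:40], repr(result)[:40]))
            data = result
        return data

if __name__ == "__main__":
    wf = (Workflow()
          .add("fetch", lambda _: list(range(10)))          # stand-in for remote data access
          .add("transform", lambda xs: [x * x for x in xs])
          .add("analyze", lambda xs: sum(xs) / len(xs)))
    print(wf.execute(None))
    for record in wf.provenance:
        print(record)
```

    Real workflow systems add distributed execution, monitoring, and re-running of workflow versions on top of this basic pattern, but the separation of configurable steps from the engine that executes them is the same.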

  12. Fast acceleration of 2D wave propagation simulations using modern computational accelerators.

    Directory of Open Access Journals (Sweden)

    Wei Wang

    Recent developments in modern computational accelerators like Graphics Processing Units (GPUs) and coprocessors provide great opportunities for making scientific applications run faster than ever before. However, efficient parallelization of scientific code using new programming tools like CUDA requires a high level of expertise that is not available to many scientists. This, plus the fact that parallelized code is usually not portable to different architectures, creates major challenges for exploiting the full capabilities of modern computational accelerators. In this work, we sought to overcome these challenges by studying how to achieve both automated parallelization using OpenACC and enhanced portability using OpenCL. We applied our parallelization schemes using GPUs as well as the Intel Many Integrated Core (MIC) coprocessor to reduce the run time of wave propagation simulations. We used a well-established 2D cardiac action potential model as a specific case-study. To the best of our knowledge, we are the first to study auto-parallelization of 2D cardiac wave propagation simulations using OpenACC. Our results identify several approaches that provide substantial speedups. The OpenACC-generated GPU code achieved more than 150x speedup above the sequential implementation and required the addition of only a few OpenACC pragmas to the code. An OpenCL implementation provided speedups on GPUs of at least 200x faster than the sequential implementation and 30x faster than a parallelized OpenMP implementation. An implementation of OpenMP on the Intel MIC coprocessor provided speedups of 120x with only a few code changes to the sequential implementation. We highlight that OpenACC provides an automatic, efficient, and portable approach to achieve parallelization of 2D cardiac wave simulations on GPUs. Our approach of using OpenACC, OpenCL, and OpenMP to parallelize this particular model on modern computational accelerators should be applicable to other
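
    For readers unfamiliar with the kind of kernel being parallelized, the sketch below shows an explicit finite-difference update for the diffusion part of a 2D reaction-diffusion model in plain NumPy. It is only a stand-in for the cardiac action potential model used in the study; the grid size, coefficients, and periodic boundary handling are assumptions. This regular, data-parallel loop nest is the type of computation that OpenACC pragmas, OpenCL kernels, or OpenMP threads accelerate.

```python
import numpy as np

def diffusion_step(u, D, dt, dx):
    # Explicit 5-point Laplacian update with periodic boundaries (np.roll).
    lap = (np.roll(u, 1, axis=0) + np.roll(u, -1, axis=0) +
           np.roll(u, 1, axis=1) + np.roll(u, -1, axis=1) - 4.0 * u) / dx**2
    return u + dt * D * lap

u = np.zeros((256, 256))
u[128, 128] = 1.0                       # point stimulus in the centre of the sheet
for _ in range(1000):
    u = diffusion_step(u, D=1.0, dt=1e-4, dx=0.1)
print(u.max(), u.sum())
```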

  13. COMPASS, the COMmunity Petascale project for Accelerator Science and Simulation, a broad computational accelerator physics initiative

    International Nuclear Information System (INIS)

    Cary, J.R.; Spentzouris, P.; Amundson, J.; McInnes, L.; Borland, M.; Mustapha, B.; Ostroumov, P.; Wang, Y.; Fischer, W.; Fedotov, A.; Ben-Zvi, I.; Ryne, R.; Esarey, E.; Geddes, C.; Qiang, J.; Ng, E.; Li, S.; Ng, C.; Lee, R.; Merminga, L.; Wang, H.; Bruhwiler, D.L.; Dechow, D.; Mullowney, P.; Messmer, P.; Nieter, C.; Ovtchinnikov, S.; Paul, K.; Stoltz, P.; Wade-Stein, D.; Mori, W.B.; Decyk, V.; Huang, C.K.; Lu, W.; Tzoufras, M.; Tsung, F.; Zhou, M.; Werner, G.R.; Antonsen, T.; Katsouleas, T.; Morris, B.

    2007-01-01

    Accelerators are the largest and most costly scientific instruments of the Department of Energy, with uses across a broad range of science, including colliders for particle physics and nuclear science and light sources and neutron sources for materials studies. COMPASS, the Community Petascale Project for Accelerator Science and Simulation, is a broad, four-office (HEP, NP, BES, ASCR) effort to develop computational tools for the prediction and performance enhancement of accelerators. The tools being developed can be used to predict the dynamics of beams in the presence of optical elements and space charge forces, the calculation of electromagnetic modes and wake fields of cavities, the cooling induced by comoving beams, and the acceleration of beams by intense fields in plasmas generated by beams or lasers. In SciDAC-1, the computational tools had multiple successes in predicting the dynamics of beams and beam generation. In SciDAC-2 these tools will be petascale enabled to allow the inclusion of an unprecedented level of physics for detailed prediction

  14. CONFERENCE: Computers and accelerators

    Energy Technology Data Exchange (ETDEWEB)

    Anon.

    1984-01-15

    In September of last year a Conference on 'Computers in Accelerator Design and Operation' was held in West Berlin attracting some 160 specialists including many from outside Europe. It was a Europhysics Conference, organized by the Hahn-Meitner Institute with Roman Zelazny as Conference Chairman, postponed from an earlier intended venue in Warsaw. The aim was to bring together specialists in the fields of accelerator design, computer control and accelerator operation.

  15. Neural computation and particle accelerators research, technology and applications

    CERN Document Server

    D'Arras, Horace

    2010-01-01

    This book discusses neural computation, a network or circuit of biological neurons, and, relatedly, particle accelerators, scientific instruments which accelerate charged particles such as protons, electrons and deuterons. Accelerators have a very broad range of applications in many industrial fields, from high energy physics to medical isotope production. Nuclear technology is one of the fields discussed in this book. The development that particle accelerators have reached in energy and particle intensity has opened the possibility of a wide number of new applications in nuclear technology. This book reviews the applications in the nuclear energy field, and the design features of high power neutron sources are explained. Surface treatments of niobium flat samples and superconducting radio frequency cavities by a new technique called gas cluster ion beam are also studied in detail, as well as the process of electropolishing. Furthermore, magnetic devices such as solenoids, dipoles and undulators, which ...

  16. Accelerating scientific discovery : 2007 annual report.

    Energy Technology Data Exchange (ETDEWEB)

    Beckman, P.; Dave, P.; Drugan, C.

    2008-11-14

    As a gateway for scientific discovery, the Argonne Leadership Computing Facility (ALCF) works hand in hand with the world's best computational scientists to advance research in a diverse span of scientific domains, ranging from chemistry, applied mathematics, and materials science to engineering physics and life sciences. Sponsored by the U.S. Department of Energy's (DOE) Office of Science, researchers are using the IBM Blue Gene/L supercomputer at the ALCF to study and explore key scientific problems that underlie important challenges facing our society. For instance, a research team at the University of California-San Diego/SDSC is studying the molecular basis of Parkinson's disease. The researchers plan to use the knowledge they gain to discover new drugs to treat the disease and to identify risk factors for other diseases that are equally prevalent. Likewise, scientists from Pratt & Whitney are using the Blue Gene to understand the complex processes within aircraft engines. Expanding our understanding of jet engine combustors is the secret to improved fuel efficiency and reduced emissions. Lessons learned from the scientific simulations of jet engine combustors have already led Pratt & Whitney to newer designs with unprecedented reductions in emissions, noise, and cost of ownership. ALCF staff members provide in-depth expertise and assistance to those using the Blue Gene/L and optimizing user applications. Both the Catalyst and the Applications Performance Engineering and Data Analytics (APEDA) teams support users' projects. In addition to working with scientists running experiments on the Blue Gene/L, we have become a nexus for the broader global community. In partnership with the Mathematics and Computer Science Division at Argonne National Laboratory, we have created an environment where the world's most challenging computational science problems can be addressed. Our expertise in high-end scientific computing enables us to provide

  17. Computer control applied to accelerators

    CERN Document Server

    Crowley-Milling, Michael C

    1974-01-01

    The differences that exist between control systems for accelerators and other types of control systems are outlined. It is further indicated that earlier accelerators had manual control systems to which computers were added, but that it is essential for the new, large accelerators to include computers in the control systems right from the beginning. Details of the computer control designed for the Super Proton Synchrotron are presented. The method of choosing the computers is described, as well as the reasons for CERN having to design the message transfer system. The items discussed include: CAMAC interface systems, a new multiplex system, operator-to-computer interaction (such as touch screen, computer-controlled knob, and non-linear track-ball), and high-level control languages. Brief mention is made of the contributions of other high-energy research laboratories as well as of some other computer control applications at CERN. (0 refs).

  18. Scientific computer simulation review

    International Nuclear Information System (INIS)

    Kaizer, Joshua S.; Heller, A. Kevin; Oberkampf, William L.

    2015-01-01

    Before the results of a scientific computer simulation are used for any purpose, it should be determined if those results can be trusted. Answering that question of trust is the domain of scientific computer simulation review. There is limited literature that focuses on simulation review, and most is specific to the review of a particular type of simulation. This work is intended to provide a foundation for a common understanding of simulation review. This is accomplished through three contributions. First, scientific computer simulation review is formally defined. This definition identifies the scope of simulation review and provides the boundaries of the review process. Second, maturity assessment theory is developed. This development clarifies the concepts of maturity criteria, maturity assessment sets, and maturity assessment frameworks, which are essential for performing simulation review. Finally, simulation review is described as the application of a maturity assessment framework. This is illustrated through evaluating a simulation review performed by the U.S. Nuclear Regulatory Commission. In making these contributions, this work provides a means for a more objective assessment of a simulation’s trustworthiness and takes the next step in establishing scientific computer simulation review as its own field. - Highlights: • We define scientific computer simulation review. • We develop maturity assessment theory. • We formally define a maturity assessment framework. • We describe simulation review as the application of a maturity framework. • We provide an example of a simulation review using a maturity framework

  19. Berkeley Lab Computing Sciences: Accelerating Scientific Discovery

    International Nuclear Information System (INIS)

    Hules, John A.

    2008-01-01

    Scientists today rely on advances in computer science, mathematics, and computational science, as well as large-scale computing and networking facilities, to increase our understanding of ourselves, our planet, and our universe. Berkeley Lab's Computing Sciences organization researches, develops, and deploys new tools and technologies to meet these needs and to advance research in such areas as global climate change, combustion, fusion energy, nanotechnology, biology, and astrophysics

  20. Personal computers in accelerator control

    International Nuclear Information System (INIS)

    Anderssen, P.S.

    1988-01-01

    The advent of the personal computer has created a popular movement which has also made a strong impact on science and engineering. Flexible software environments combined with good computational performance and large storage capacities are becoming available at steadily decreasing costs. Of equal importance, however, is the quality of the user interface offered on many of these products. Graphics and screen interaction is available in ways that were only possible on specialized systems before. Accelerator engineers were quick to pick up the new technology. The first applications were probably for controllers and data gatherers for beam measurement equipment. Others followed, and today it is conceivable to make the personal computer a standard component of an accelerator control system. This paper reviews the experience gained at CERN so far and describes the approach taken in the design of the common control center for the SPS and the future LEP accelerators. The design goal has been to be able to integrate personal computers into the accelerator control system and to build the operator's workplace around it. (orig.)

  1. Performance analysis and acceleration of explicit integration for large kinetic networks using batched GPU computations

    Energy Technology Data Exchange (ETDEWEB)

    Shyles, Daniel [University of Tennessee (UT); Dongarra, Jack J. [University of Tennessee, Knoxville (UTK); Guidry, Mike W. [ORNL; Tomov, Stanimire Z. [ORNL; Billings, Jay Jay [ORNL; Brock, Benjamin A. [ORNL; Haidar Ahmad, Azzam A. [ORNL

    2016-09-01

    We demonstrate the systematic implementation of recently-developed fast explicit kinetic integration algorithms that solve efficiently N coupled ordinary differential equations (subject to initial conditions) on modern GPUs. We take representative test cases (Type Ia supernova explosions) and demonstrate two or more orders of magnitude increase in efficiency for solving such systems (of realistic thermonuclear networks coupled to fluid dynamics). This implies that important coupled, multiphysics problems in various scientific and technical disciplines that were intractable, or could be simulated only with highly schematic kinetic networks, are now computationally feasible. As examples of such applications we present the computational techniques developed for our ongoing deployment of these new methods on modern GPU accelerators. We show that, similarly to many other scientific applications ranging from national security to medical advances, the computation can be split into many independent computational tasks, each of relatively small size. As the size of each individual task does not provide sufficient parallelism for the underlying hardware, especially for accelerators, these tasks must be computed concurrently as a single routine, which we call a batched routine, in order to saturate the hardware with enough work.
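
    The batching idea described above can be sketched in a few lines: many small, independent kinetic networks are advanced together so that each update becomes one large array operation rather than thousands of tiny ones. The two-species network, rates, and step sizes below are invented toy values, and plain NumPy stands in for the batched GPU routines described in the paper.

```python
import numpy as np

def batched_explicit_euler(rates, y0, dt, n_steps):
    # Integrate many independent small ODE systems ("networks") in one batched
    # update per step; the per-system work is fused into whole-array operations.
    y = y0.copy()                       # shape: (n_batch, n_species)
    for _ in range(n_steps):
        y = y + dt * rates(y)
        y = np.clip(y, 0.0, None)       # abundances stay non-negative
    return y

def toy_rates(y):
    # Hypothetical 2-species network per batch element: A -> B with rate k,
    # plus slow replenishment of A (a stand-in for a real thermonuclear network).
    k = 5.0
    dA = -k * y[:, 0] + 0.1
    dB = k * y[:, 0]
    return np.stack([dA, dB], axis=1)

y0 = np.tile(np.array([1.0, 0.0]), (10000, 1))   # 10,000 independent networks
out = batched_explicit_euler(toy_rates, y0, dt=1e-3, n_steps=2000)
print(out.mean(axis=0))
```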

  2. Pascal-SC a computer language for scientific computation

    CERN Document Server

    Bohlender, Gerd; von Gudenberg, Jürgen Wolff; Rheinboldt, Werner; Siewiorek, Daniel

    1987-01-01

    Perspectives in Computing, Vol. 17: Pascal-SC: A Computer Language for Scientific Computation focuses on the application of Pascal-SC, a programming language developed as an extension of standard Pascal, in scientific computation. The publication first elaborates on the introduction to Pascal-SC, a review of standard Pascal, and real floating-point arithmetic. Discussions focus on optimal scalar product, standard functions, real expressions, program structure, simple extensions, real floating-point arithmetic, vector and matrix arithmetic, and dynamic arrays. The text then examines functions a

  3. Delivering Insight The History of the Accelerated Strategic Computing Initiative

    Energy Technology Data Exchange (ETDEWEB)

    Larzelere II, A R

    2007-01-03

    The history of the Accelerated Strategic Computing Initiative (ASCI) tells of the development of computational simulation into a third fundamental piece of the scientific method, on a par with theory and experiment. ASCI did not invent the idea, nor was it alone in bringing it to fruition. But ASCI provided the wherewithal - hardware, software, environment, funding, and, most of all, the urgency - that made it happen. On October 1, 2005, the Initiative completed its tenth year of funding. The advances made by ASCI over its first decade are truly incredible. Lawrence Livermore, Los Alamos, and Sandia National Laboratories, along with leadership provided by the Department of Energy's Defense Programs Headquarters, fundamentally changed computational simulation and how it is used to enable scientific insight. To do this, astounding advances were made in simulation applications, computing platforms, and user environments. ASCI dramatically changed existing - and forged new - relationships, both among the Laboratories and with outside partners. By its tenth anniversary, despite daunting challenges, ASCI had accomplished all of the major goals set at its beginning. The history of ASCI is about the vision, leadership, endurance, and partnerships that made these advances possible.

  4. Using the capabilities of graphics processors to accelerate scientific and technical calculations

    International Nuclear Information System (INIS)

    Dudnik, V.A.; Kudryavtsev, V.I.; Sereda, T.M.; Us, S.A.; Shestakov, M.V.

    2009-01-01

    The possibilities offered by modern graphics processors (GPUs) for accelerating scientific and technical calculations by splitting a computational task between the central processor and the GPU are described. The use of NVIDIA CUDA technology to harness the parallel computing capabilities of the GPU within programs for several computationally intensive mathematical tasks is outlined. Examples are given comparing the performance achieved when computing these tasks without a GPU and when using NVIDIA CUDA on a GeForce 8800 graphics processor.
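
    As an illustration of the CPU/GPU split described above, the sketch below offloads a dense matrix multiplication to the GPU and compares it with the CPU version. It uses the CuPy library as a modern Python stand-in for the C-level NVIDIA CUDA programming used in the original work; the array sizes are arbitrary and a CUDA-capable GPU is assumed.

```python
import time
import numpy as np

try:
    import cupy as cp              # Python interface to CUDA; requires an NVIDIA GPU
except ImportError:
    cp = None

n = 4000
a = np.random.rand(n, n).astype(np.float32)
b = np.random.rand(n, n).astype(np.float32)

t0 = time.perf_counter()
c_cpu = a @ b                      # computed on the central processor
t_cpu = time.perf_counter() - t0

if cp is not None:
    a_gpu, b_gpu = cp.asarray(a), cp.asarray(b)
    cp.cuda.Stream.null.synchronize()
    t0 = time.perf_counter()
    c_gpu = a_gpu @ b_gpu          # the same task offloaded to the graphics processor
    cp.cuda.Stream.null.synchronize()
    t_gpu = time.perf_counter() - t0
    print("CPU: %.3f s, GPU: %.3f s" % (t_cpu, t_gpu))
    print("max difference:", float(cp.max(cp.abs(cp.asarray(c_cpu) - c_gpu))))
```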

  5. High-End Scientific Computing

    Science.gov (United States)

    EPA uses high-end scientific computing, geospatial services and remote sensing/imagery analysis to support EPA's mission. The Center for Environmental Computing (CEC) assists the Agency's program offices and regions to meet staff needs in these areas.

  6. Practical scientific computing

    CERN Document Server

    Muhammad, A

    2011-01-01

    Scientific computing is about developing mathematical models, numerical methods and computer implementations to study and solve real problems in science, engineering, business and even social sciences. Mathematical modelling requires deep understanding of classical numerical methods. This essential guide provides the reader with sufficient foundations in these areas to venture into more advanced texts. The first section of the book presents numEclipse, an open source tool for numerical computing based on the notion of MATLAB®. numEclipse is implemented as a plug-in for Eclipse, a leading integ

  7. Accelerating Climate Simulations Through Hybrid Computing

    Science.gov (United States)

    Zhou, Shujia; Sinno, Scott; Cruz, Carlos; Purcell, Mark

    2009-01-01

    Unconventional multi-core processors (e.g., IBM Cell B/E and NVIDIA GPU) have emerged as accelerators in climate simulation. However, climate models typically run on parallel computers with conventional processors (e.g., Intel and AMD) using MPI. Connecting accelerators to this architecture efficiently and easily becomes a critical issue. When using MPI for connection, we identified two challenges: (1) an identical MPI implementation is required in both systems, and (2) existing MPI code must be modified to accommodate the accelerators. In response, we have extended and deployed IBM Dynamic Application Virtualization (DAV) in a hybrid computing prototype system (one blade with two Intel quad-core processors, two IBM QS22 Cell blades, connected with InfiniBand), allowing for seamlessly offloading compute-intensive functions to remote, heterogeneous accelerators in a scalable, load-balanced manner. Currently, a climate solar radiation model running with multiple MPI processes has been offloaded to multiple Cell blades with approx. 10% network overhead.

  8. Exploring HPCS languages in scientific computing

    International Nuclear Information System (INIS)

    Barrett, R F; Alam, S R; Almeida, V F d; Bernholdt, D E; Elwasif, W R; Kuehn, J A; Poole, S W; Shet, A G

    2008-01-01

    As computers scale up dramatically to tens and hundreds of thousands of cores, develop deeper computational and memory hierarchies, and become increasingly heterogeneous, developers of scientific software are increasingly challenged to express complex parallel simulations effectively and efficiently. In this paper, we explore the three languages developed under the DARPA High-Productivity Computing Systems (HPCS) program to help address these concerns: Chapel, Fortress, and X10. These languages provide a variety of features not found in currently popular HPC programming environments and make it easier to express powerful computational constructs, leading to new ways of thinking about parallel programming. Though the languages and their implementations are not yet mature enough for a comprehensive evaluation, we discuss some of the important features, and provide examples of how they can be used in scientific computing. We believe that these characteristics will be important to the future of high-performance scientific computing, whether the ultimate language of choice is one of the HPCS languages or something else.

  9. Exploring HPCS languages in scientific computing

    Science.gov (United States)

    Barrett, R. F.; Alam, S. R.; Almeida, V. F. d.; Bernholdt, D. E.; Elwasif, W. R.; Kuehn, J. A.; Poole, S. W.; Shet, A. G.

    2008-07-01

    As computers scale up dramatically to tens and hundreds of thousands of cores, develop deeper computational and memory hierarchies, and become increasingly heterogeneous, developers of scientific software are increasingly challenged to express complex parallel simulations effectively and efficiently. In this paper, we explore the three languages developed under the DARPA High-Productivity Computing Systems (HPCS) program to help address these concerns: Chapel, Fortress, and X10. These languages provide a variety of features not found in currently popular HPC programming environments and make it easier to express powerful computational constructs, leading to new ways of thinking about parallel programming. Though the languages and their implementations are not yet mature enough for a comprehensive evaluation, we discuss some of the important features, and provide examples of how they can be used in scientific computing. We believe that these characteristics will be important to the future of high-performance scientific computing, whether the ultimate language of choice is one of the HPCS languages or something else.

  10. Analogue computer display of accelerator beam optics

    International Nuclear Information System (INIS)

    Brand, K.

    1984-01-01

    Analogue computers were used years ago by several authors for the design of magnetic beam handling systems. At Bochum a small analogue/hybrid computer was combined with a particular analogue expansion and logic control unit for beam transport work. This apparatus was very successful in the design and setup of the beam handling system of the tandem accelerator. The center of the stripper canal was the object point for the calculations; instead of the high energy acceleration tube, a drift length was inserted into the program, neglecting the weak focusing action of the tube. In the course of the installation of a second injector for heavy ions it became necessary to do better calculations. A simple method was found to represent accelerating sections on the computer, and a particular way to simulate thin lenses was adopted. The analogue computer system proved its usefulness in the design and in studies of the characteristics of different accelerator installations over many years. The results of the calculations are in very good agreement with real accelerator data. The apparatus is the ideal tool to demonstrate beam optics to students and accelerator operators, since the effect of a change of any of the parameters is immediately visible on the oscilloscope.
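
    The drift sections and thin lenses mentioned above map naturally onto 2x2 transfer matrices, so a digital equivalent of the analogue setup can be sketched in a few lines. The element lengths, focal length, and initial ray below are hypothetical values chosen for illustration, not parameters of the Bochum beamline.

```python
import numpy as np

def drift(L):
    # Transfer matrix of a field-free drift of length L.
    return np.array([[1.0, L],
                     [0.0, 1.0]])

def thin_lens(f):
    # Thin-lens approximation of a focusing element with focal length f.
    return np.array([[1.0, 0.0],
                     [-1.0 / f, 1.0]])

# Hypothetical beamline: drift 2.0 m -> thin lens (f = 0.8 m) -> drift 1.5 m.
# Matrices are multiplied right to left in the order the beam traverses them.
line = drift(1.5) @ thin_lens(0.8) @ drift(2.0)

ray = np.array([2e-3, 1e-3])            # initial transverse offset (m) and angle (rad)
print(line @ ray)                       # offset and angle at the end of the line
```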

  11. Topic 14+16: High-performance and scientific applications and extreme-scale computing (Introduction)

    KAUST Repository

    Downes, Turlough P.

    2013-01-01

    As our understanding of the world around us increases it becomes more challenging to make use of what we already know, and to increase our understanding still further. Computational modeling and simulation have become critical tools in addressing this challenge. The requirements of high-resolution, accurate modeling have outstripped the ability of desktop computers and even small clusters to provide the necessary compute power. Many applications in the scientific and engineering domains now need very large amounts of compute time, while other applications, particularly in the life sciences, frequently have large data I/O requirements. There is thus a growing need for a range of high performance applications which can utilize parallel compute systems effectively, which have efficient data handling strategies and which have the capacity to utilise current and future systems. The High Performance and Scientific Applications topic aims to highlight recent progress in the use of advanced computing and algorithms to address the varied, complex and increasing challenges of modern research throughout both the "hard" and "soft" sciences. This necessitates being able to use large numbers of compute nodes, many of which are equipped with accelerators, and to deal with difficult I/O requirements. © 2013 Springer-Verlag.

  12. Computers and Computation. Readings from Scientific American.

    Science.gov (United States)

    Fenichel, Robert R.; Weizenbaum, Joseph

    A collection of articles from "Scientific American" magazine has been put together at this time because the current period in computer science is one of consolidation rather than innovation. A few years ago, computer science was moving so swiftly that even the professional journals were more archival than informative; but today it is…

  13. Scientific computing

    CERN Document Server

    Trangenstein, John A

    2017-01-01

    This is the third of three volumes providing a comprehensive presentation of the fundamentals of scientific computing. This volume discusses topics that depend more on calculus than linear algebra, in order to prepare the reader for solving differential equations. This book and its companions show how to determine the quality of computational results, and how to measure the relative efficiency of competing methods. Readers learn how to determine the maximum attainable accuracy of algorithms, and how to select the best method for computing problems. This book also discusses programming in several languages, including C++, Fortran and MATLAB. There are 90 examples, 200 exercises, 36 algorithms, 40 interactive JavaScript programs, 91 references to software programs and 1 case study. Topics are introduced with goals, literature references and links to public software. There are descriptions of the current algorithms in GSLIB and MATLAB. This book could be used for a second course in numerical methods, for either ...

  14. Proceeding on the Scientific Meeting and Presentation on Accelerator Technology and Its Applications

    International Nuclear Information System (INIS)

    Susilo Widodo; Darsono; Slamet Santosa; Sudjatmoko; Tjipto Sujitno; Pramudita Anggraita; Wahini Nurhayati

    2015-11-01

    The Scientific Meeting and Presentation on Accelerator Technology and Its Applications was held by PSTA BATAN on 30 November 2015. The meeting aims to promote accelerator technology and its applications to accelerator scientists, academics, researchers and technology users, as well as to present accelerator-based research conducted by researchers within and outside BATAN. This proceeding contains 20 papers about physics and nuclear reactors. (PPIKSN)

  15. Quantum Accelerators for High-performance Computing Systems

    Energy Technology Data Exchange (ETDEWEB)

    Humble, Travis S. [ORNL; Britt, Keith A. [ORNL; Mohiyaddin, Fahd A. [ORNL

    2017-11-01

    We define some of the programming and system-level challenges facing the application of quantum processing to high-performance computing. Alongside barriers to physical integration, prominent differences in the execution of quantum and conventional programs challenge the intersection of these computational models. Following a brief overview of the state of the art, we discuss recent advances in programming and execution models for hybrid quantum-classical computing. We discuss a novel quantum-accelerator framework that uses specialized kernels to offload select workloads while integrating with existing computing infrastructure. We elaborate on the role of the host operating system to manage these unique accelerator resources, the prospects for deploying quantum modules, and the requirements placed on the language hierarchy connecting these different system components. We draw on recent advances in the modeling and simulation of quantum computing systems with the development of architectures for hybrid high-performance computing systems and the realization of software stacks for controlling quantum devices. Finally, we present simulation results that describe the expected system-level behavior of high-performance computing systems composed from compute nodes with quantum processing units. We describe performance for these hybrid systems in terms of time-to-solution, accuracy, and energy consumption, and we use simple application examples to estimate the performance advantage of quantum acceleration.

  16. GPU-accelerated micromagnetic simulations using cloud computing

    Energy Technology Data Exchange (ETDEWEB)

    Jermain, C.L., E-mail: clj72@cornell.edu [Cornell University, Ithaca, NY 14853 (United States); Rowlands, G.E.; Buhrman, R.A. [Cornell University, Ithaca, NY 14853 (United States); Ralph, D.C. [Cornell University, Ithaca, NY 14853 (United States); Kavli Institute at Cornell, Ithaca, NY 14853 (United States)

    2016-03-01

    Highly parallel graphics processing units (GPUs) can improve the speed of micromagnetic simulations significantly as compared to conventional computing using central processing units (CPUs). We present a strategy for performing GPU-accelerated micromagnetic simulations by utilizing cost-effective GPU access offered by cloud computing services with an open-source Python-based program for running the MuMax3 micromagnetics code remotely. We analyze the scaling and cost benefits of using cloud computing for micromagnetics. - Highlights: • The benefits of cloud computing for GPU-accelerated micromagnetics are examined. • We present the MuCloud software for running simulations on cloud computing. • Simulation run times are measured to benchmark cloud computing performance. • Comparison benchmarks are analyzed between CPU and GPU based solvers.

  17. GPU-accelerated micromagnetic simulations using cloud computing

    International Nuclear Information System (INIS)

    Jermain, C.L.; Rowlands, G.E.; Buhrman, R.A.; Ralph, D.C.

    2016-01-01

    Highly parallel graphics processing units (GPUs) can improve the speed of micromagnetic simulations significantly as compared to conventional computing using central processing units (CPUs). We present a strategy for performing GPU-accelerated micromagnetic simulations by utilizing cost-effective GPU access offered by cloud computing services with an open-source Python-based program for running the MuMax3 micromagnetics code remotely. We analyze the scaling and cost benefits of using cloud computing for micromagnetics. - Highlights: • The benefits of cloud computing for GPU-accelerated micromagnetics are examined. • We present the MuCloud software for running simulations on cloud computing. • Simulation run times are measured to benchmark cloud computing performance. • Comparison benchmarks are analyzed between CPU and GPU based solvers.

  18. High-performance computing in accelerating structure design and analysis

    International Nuclear Information System (INIS)

    Li Zenghai; Folwell, Nathan; Ge Lixin; Guetz, Adam; Ivanov, Valentin; Kowalski, Marc; Lee, Lie-Quan; Ng, Cho-Kuen; Schussman, Greg; Stingelin, Lukas; Uplenchwar, Ravindra; Wolf, Michael; Xiao, Liling; Ko, Kwok

    2006-01-01

    Future high-energy accelerators such as the Next Linear Collider (NLC) will accelerate multi-bunch beams of high current and low emittance to obtain high luminosity, which places stringent requirements on the accelerating structures for efficiency and beam stability. While numerical modeling has been quite standard in accelerator R&D, designing the NLC accelerating structure required a new simulation capability because of the geometric complexity and level of accuracy involved. Under the US DOE Advanced Computing initiatives (first the Grand Challenge and now SciDAC), SLAC has developed a suite of electromagnetic codes based on unstructured grids and utilizing high-performance computing to provide an advanced tool for modeling structures at accuracies and scales previously not possible. This paper will discuss the code development and computational science research (e.g. domain decomposition, scalable eigensolvers, adaptive mesh refinement) that have enabled the large-scale simulations needed for meeting the computational challenges posed by the NLC as well as projects such as the PEP-II and RIA. Numerical results will be presented to show how high-performance computing has made a qualitative improvement in accelerator structure modeling for these accelerators, either at the component level (single cell optimization), or on the scale of an entire structure (beam heating and long-range wakefields).
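
    To make the connection between mode calculation and large-scale computing concrete, the toy sketch below sets up the discretized eigenvalue problem for the lowest modes of a 2D "cavity" cross-section (a Laplacian with Dirichlet walls) and solves it with a sparse eigensolver. The grid size and the 2D scalar model are simplifications introduced here for illustration; the SLAC codes described above solve full 3D electromagnetic problems on unstructured grids at vastly larger scale.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

n = 60                                  # interior grid points per side (toy resolution)
h = 1.0 / (n + 1)

main = 4.0 * np.ones(n * n)
horiz = -1.0 * np.ones(n * n - 1)
horiz[np.arange(1, n * n) % n == 0] = 0.0   # no coupling across row boundaries
vert = -1.0 * np.ones(n * n - n)

# 5-point finite-difference Laplacian with Dirichlet boundary conditions.
A = sp.diags([main, horiz, horiz, vert, vert], [0, 1, -1, n, -n], format="csc") / h**2

vals, vecs = eigsh(A, k=4, sigma=0.0)   # four lowest eigenvalues via shift-invert
print(np.sqrt(vals))                    # proportional to the lowest mode frequencies
```

    At realistic 3D resolutions the matrix has many millions of unknowns, which is why scalable eigensolvers and domain decomposition, mentioned in the abstract, become essential.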

  19. The application of cloud computing to scientific workflows: a study of cost and performance.

    Science.gov (United States)

    Berriman, G Bruce; Deelman, Ewa; Juve, Gideon; Rynge, Mats; Vöckler, Jens-S

    2013-01-28

    The current model of transferring data from data centres to desktops for analysis will soon be rendered impractical by the accelerating growth in the volume of science datasets. Processing will instead often take place on high-performance servers co-located with data. Evaluations of how new technologies such as cloud computing would support such a new distributed computing model are urgently needed. Cloud computing is a new way of purchasing computing and storage resources on demand through virtualization technologies. We report here the results of investigations of the applicability of commercial cloud computing to scientific computing, with an emphasis on astronomy, including investigations of what types of applications can be run cheaply and efficiently on the cloud, and an example of an application well suited to the cloud: processing a large dataset to create a new science product.

  20. Accelerating artificial intelligence with reconfigurable computing

    Science.gov (United States)

    Cieszewski, Radoslaw

    Reconfigurable computing is emerging as an important area of research in computer architectures and software systems. Many algorithms can be greatly accelerated by placing the computationally intense portions of an algorithm into reconfigurable hardware. Reconfigurable computing combines many benefits of both software and ASIC implementations. Like software, the mapped circuit is flexible, and can be changed over the lifetime of the system. Similar to an ASIC, reconfigurable systems provide a method to map circuits into hardware. Reconfigurable systems therefore have the potential to achieve far greater performance than software as a result of bypassing the fetch-decode-execute operations of traditional processors, and possibly exploiting a greater level of parallelism. One such field, in which many different algorithms can be accelerated, is artificial intelligence. This paper presents example hardware implementations of Artificial Neural Networks, Genetic Algorithms and Expert Systems.

  1. Proceeding of the Scientific Meeting and Presentation on Accelerator Technology and its Application

    International Nuclear Information System (INIS)

    Sudjatmoko; Anggraita, P.; Darsono; Sudiyanto; Kusminarto; Karyono

    1999-07-01

    The proceeding contains papers presented at the Scientific Meeting and Presentation on Accelerator Technology and Its Application, held in Yogyakarta, 16 January 1996. This proceeding contains papers on accelerator technology, especially electron beam machines. There are 11 papers indexed individually. (ID)

  2. RXY/DRXY-a postprocessing graphical system for scientific computation

    International Nuclear Information System (INIS)

    Jin Qijie

    1990-01-01

    Scientific computing requires computer graphics capabilities for the visualization of its results. The design objectives and functions of a postprocessing graphical system for scientific computation are described, and its implementation is briefly outlined.

  3. Advanced Computing Tools and Models for Accelerator Physics

    International Nuclear Information System (INIS)

    Ryne, Robert; Ryne, Robert D.

    2008-01-01

    This paper is based on a transcript of my EPAC'08 presentation on advanced computing tools for accelerator physics. Following an introduction I present several examples, provide a history of the development of beam dynamics capabilities, and conclude with thoughts on the future of large scale computing in accelerator physics

  4. 5th International Conference on High Performance Scientific Computing

    CERN Document Server

    Hoang, Xuan; Rannacher, Rolf; Schlöder, Johannes

    2014-01-01

    This proceedings volume gathers a selection of papers presented at the Fifth International Conference on High Performance Scientific Computing, which took place in Hanoi on March 5-9, 2012. The conference was organized by the Institute of Mathematics of the Vietnam Academy of Science and Technology (VAST), the Interdisciplinary Center for Scientific Computing (IWR) of Heidelberg University, Ho Chi Minh City University of Technology, and the Vietnam Institute for Advanced Study in Mathematics. The contributions cover the broad interdisciplinary spectrum of scientific computing and present recent advances in theory, development of methods, and practical applications. Subjects covered include mathematical modeling; numerical simulation; methods for optimization and control; parallel computing; software development; and applications of scientific computing in physics, mechanics and biomechanics, material science, hydrology, chemistry, biology, biotechnology, medicine, sports, psychology, transport, logistics, com...

  5. 3rd International Conference on High Performance Scientific Computing

    CERN Document Server

    Kostina, Ekaterina; Phu, Hoang; Rannacher, Rolf

    2008-01-01

    This proceedings volume contains a selection of papers presented at the Third International Conference on High Performance Scientific Computing held at the Hanoi Institute of Mathematics, Vietnamese Academy of Science and Technology (VAST), March 6-10, 2006. The conference has been organized by the Hanoi Institute of Mathematics, Interdisciplinary Center for Scientific Computing (IWR), Heidelberg, and its International PhD Program ``Complex Processes: Modeling, Simulation and Optimization'', and Ho Chi Minh City University of Technology. The contributions cover the broad interdisciplinary spectrum of scientific computing and present recent advances in theory, development of methods, and applications in practice. Subjects covered are mathematical modelling, numerical simulation, methods for optimization and control, parallel computing, software development, applications of scientific computing in physics, chemistry, biology and mechanics, environmental and hydrology problems, transport, logistics and site loca...

  6. 6th International Conference on High Performance Scientific Computing

    CERN Document Server

    Phu, Hoang; Rannacher, Rolf; Schlöder, Johannes

    2017-01-01

    This proceedings volume highlights a selection of papers presented at the Sixth International Conference on High Performance Scientific Computing, which took place in Hanoi, Vietnam on March 16-20, 2015. The conference was jointly organized by the Heidelberg Institute of Theoretical Studies (HITS), the Institute of Mathematics of the Vietnam Academy of Science and Technology (VAST), the Interdisciplinary Center for Scientific Computing (IWR) at Heidelberg University, and the Vietnam Institute for Advanced Study in Mathematics, Ministry of Education. The contributions cover a broad, interdisciplinary spectrum of scientific computing and showcase recent advances in theory, methods, and practical applications. Subjects covered include numerical simulation, methods for optimization and control, parallel computing, and software development, as well as the applications of scientific computing in physics, mechanics, biomechanics and robotics, material science, hydrology, biotechnology, medicine, transport, scheduling, and in...

  7. 'Patrimonialization' of contemporary scientific objects and their contexts of use: the Cockcroft-Walton particle accelerator

    International Nuclear Information System (INIS)

    Derolez, Severine

    2016-01-01

    Our thesis in science education explores the potential of contemporary scientific objects to become objects of heritage through a case study: the Cockcroft-Walton (CW) particle accelerator. We call this process 'patrimonialization' (the making of heritage), following Information and Communication Sciences. We assume that the valuation of contemporary scientific objects, an important stage of patrimonialization, is specific and requires particular expertise. Our work was carried out thanks to a French funding scheme that allowed us to work with Le Musee des Confluences in Lyon (France), which exhibits a CW particle accelerator in Societies: the human theater. The Institute of Nuclear Physics of Lyon (IPNL) also possesses a device of this type, which is the focus of a recognition project. Using a multidisciplinary approach, our analysis begins with a study of the historical, epistemological, social and scientific context of the first CW accelerator. We also conducted two inquiries, the first to retrace the trajectory of the CW accelerator in Lyon, the second to locate all the CW accelerators in the world. Several studies have indeed highlighted the stages of patrimonialization. Relying on these studies, we characterized the specificities of contemporary scientific heritage and looked for traces left by these characteristics in the exhibition text (museography). Our results demonstrate the lack of information allowing interpretation (reading) and appropriation of the CW accelerator by the public, and invite us to question the representational content of the object, conveyed through other contexts of use, formal or fictional. (author)

  8. Computer simulation of dynamic processes on accelerators

    International Nuclear Information System (INIS)

    Kol'ga, V.V.

    1979-01-01

    The problems considered are the computer-based numerical investigation of the motion of accelerated particles in accelerators and storage rings, the effect of different accelerator systems on this motion, and the determination of optimal characteristics of accelerated charged-particle beams. Various simulation representations describing the accelerated particle dynamics are discussed, such as the enlarged particle method, the representation in which a large number of discrete particles is substituted for a field of continuously distributed space charge, and the method based on the determination of averaged beam characteristics. The procedure of the numerical studies is described, covering the basic problems: calculation of closed orbits, establishment of stability regions, investigation of resonance propagation, determination of the phase stability region, evaluation of the space-charge effect, and the problem of beam extraction. It is shown that most of these problems reduce to the solution of a Cauchy problem on a computer. The ballistic method, applied to the solution of the boundary-value problem of beam extraction, is considered. It is shown that introducing into the equation under study additional terms with a small positive regularization parameter is the general idea behind methods for regularizing ill-posed problems. [ru]

  9. Computing tools for accelerator design calculations

    International Nuclear Information System (INIS)

    Fischler, M.; Nash, T.

    1984-01-01

    This note is intended as a brief summary guide for accelerator designers to the new generation of commercial and special processors that allow great increases in computing cost effectiveness. New thinking is required to take best advantage of these computing opportunities, in particular when moving from analytical approaches to tracking simulations. In this paper, we outline the relevant considerations.

  10. Computer codes for beam dynamics analysis of cyclotronlike accelerators

    Science.gov (United States)

    Smirnov, V.

    2017-12-01

    Computer codes suitable for the study of beam dynamics in cyclotronlike (classical and isochronous cyclotrons, synchrocyclotrons, and fixed field alternating gradient) accelerators are reviewed. Computer modeling of cyclotron segments, such as the central zone, acceleration region, and extraction system, is considered. The author does not claim to give a full and detailed description of the methods and algorithms used in the codes. Special attention is paid to codes already proven and confirmed at existing accelerator facilities. Descriptions of programs developed at well-known accelerator centers worldwide are provided. The basic features of the programs available to users and the limitations of their applicability are described.

  11. Accelerator simulation using computers

    International Nuclear Information System (INIS)

    Lee, M.; Zambre, Y.; Corbett, W.

    1992-01-01

    Every accelerator or storage ring system consists of a charged particle beam propagating through a beam line. Although a number of computer programs exist that simulate the propagation of a beam in a given beam line, only a few provide the capabilities for designing, commissioning and operating the beam line. This paper shows how a ''multi-track'' simulation and analysis code can be used for these applications.

  12. The computer-based control system of the NAC accelerator

    International Nuclear Information System (INIS)

    Burdzik, G.F.; Bouckaert, R.F.A.; Cloete, I.; Du Toit, J.S.; Kohler, I.H.; Truter, J.N.J.; Visser, K.

    1982-01-01

    The National Accelerator Centre (NAC) of the CSIR is building a two-stage accelerator which will provide charged-particle beams for use in medical and research applications. The control system for this accelerator is based on three mini-computers and a CAMAC interfacing network. Closed-loop control is being relegated to the various subsystems of the accelerators, and the computers and CAMAC network will be used in the first instance for data transfer, monitoring and servicing of the control consoles. The processing power of the computers will be utilized for automating start-up and beam-change procedures, for providing flexible and convenient information at the control consoles, for fault diagnosis and for beam-optimizing procedures. Tasks of a localized or dedicated nature are being off-loaded onto microcomputers, which are used either in front-end devices or as slaves to the mini-computers. On the control consoles only a few instruments for setting and monitoring variables are provided, but these instruments are universally linkable to any appropriate machine variable.

  13. Accelerating Climate and Weather Simulations through Hybrid Computing

    Science.gov (United States)

    Zhou, Shujia; Cruz, Carlos; Duffy, Daniel; Tucker, Robert; Purcell, Mark

    2011-01-01

    Unconventional multi- and many-core processors (e.g. IBM (R) Cell B.E.(TM) and NVIDIA (R) GPU) have emerged as effective accelerators in trial climate and weather simulations. Yet these climate and weather models typically run on parallel computers with conventional processors (e.g. Intel, AMD, and IBM) using Message Passing Interface. To address challenges involved in efficiently and easily connecting accelerators to parallel computers, we investigated using IBM's Dynamic Application Virtualization (TM) (IBM DAV) software in a prototype hybrid computing system with representative climate and weather model components. The hybrid system comprises two Intel blades and two IBM QS22 Cell B.E. blades, connected with both InfiniBand(R) (IB) and 1-Gigabit Ethernet. The system significantly accelerates a solar radiation model component by offloading compute-intensive calculations to the Cell blades. Systematic tests show that IBM DAV can seamlessly offload compute-intensive calculations from Intel blades to Cell B.E. blades in a scalable, load-balanced manner. However, noticeable communication overhead was observed, mainly due to IP over the IB protocol. Full utilization of IB Sockets Direct Protocol and the lower latency production version of IBM DAV will reduce this overhead.

  14. Highly parallel machines and future of scientific computing

    International Nuclear Information System (INIS)

    Singh, G.S.

    1992-01-01

    The computing requirements of large scale scientific computing have always been ahead of what state of the art hardware could supply in the form of the supercomputers of the day. For any single processor system, the limit to the increase in computing power was recognized some years ago. Now, with the advent of parallel computing systems, the availability of machines with the required computing power seems a reality. In this paper the author tries to visualize the future of large scale scientific computing in the penultimate decade of the present century. The author summarizes trends in parallel computers and emphasizes the need for a better programming environment and software tools for optimal performance. The paper concludes with a critique on parallel architectures, software tools and algorithms. (author). 10 refs., 2 tabs

  15. Accelerating Scientific Applications using High Performance Dense and Sparse Linear Algebra Kernels on GPUs

    KAUST Repository

    Abdelfattah, Ahmad

    2015-01-15

    High performance computing (HPC) platforms are evolving to more heterogeneous configurations to support the workloads of various applications. The current hardware landscape is composed of traditional multicore CPUs equipped with hardware accelerators that can handle high levels of parallelism. Graphical Processing Units (GPUs) are popular high performance hardware accelerators in modern supercomputers. GPU programming has a different model than that for CPUs, which means that many numerical kernels have to be redesigned and optimized specifically for this architecture. GPUs usually outperform multicore CPUs in some compute intensive and massively parallel applications that have regular processing patterns. However, most scientific applications rely on crucial memory-bound kernels and may witness bottlenecks due to the overhead of the memory bus latency. They can still take advantage of the GPU compute power capabilities, provided that an efficient architecture-aware design is achieved. This dissertation presents a uniform design strategy for optimizing critical memory-bound kernels on GPUs. Based on hierarchical register blocking, double buffering and latency hiding techniques, this strategy leverages the performance of a wide range of standard numerical kernels found in dense and sparse linear algebra libraries. The work presented here focuses on matrix-vector multiplication kernels (MVM) as representative and most important memory-bound operations in this context. Each kernel inherits the benefits of the proposed strategies. By exposing a proper set of tuning parameters, the strategy is flexible enough to suit different types of matrices, ranging from large dense matrices, to sparse matrices with dense block structures, while high performance is maintained. Furthermore, the tuning parameters are used to maintain the relative performance across different GPU architectures. Multi-GPU acceleration is proposed to scale the performance on several devices. The
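
    As a rough illustration of the blocking idea behind such memory-bound MVM kernels, the following sketch computes a matrix-vector product one block of rows at a time; the NumPy code and the block size are illustrative assumptions, not the dissertation's GPU implementation.

      import numpy as np

      def blocked_mvm(A, x, block_rows=64):
          """Matrix-vector product computed one block of rows at a time.

          On a GPU each block of rows would map to a thread block keeping its
          partial sums in registers; NumPy stands in for that logic here.
          """
          m, _ = A.shape
          y = np.empty(m, dtype=A.dtype)
          for r0 in range(0, m, block_rows):
              r1 = min(r0 + block_rows, m)
              y[r0:r1] = A[r0:r1, :] @ x   # each "thread block" owns one slice
          return y

      A = np.random.rand(1000, 800)
      x = np.random.rand(800)
      assert np.allclose(blocked_mvm(A, x), A @ x)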

  16. FPGA-accelerated simulation of computer systems

    CERN Document Server

    Angepat, Hari; Chung, Eric S; Hoe, James C; Chung, Eric S

    2014-01-01

    To date, the most common form of simulators of computer systems are software-based, running on standard computers. One promising approach to improve simulation performance is to apply hardware, specifically reconfigurable hardware in the form of field programmable gate arrays (FPGAs). This manuscript describes various approaches to using FPGAs to accelerate software-implemented simulation of computer systems and selected simulators that incorporate those techniques. More precisely, we describe a simulation architecture taxonomy that incorporates a simulation architecture specifically designed f

  17. OPENING REMARKS: SciDAC: Scientific Discovery through Advanced Computing

    Science.gov (United States)

    Strayer, Michael

    2005-01-01

    with industry and virtual prototyping. New instruments of collaboration will include institutes and centers while summer schools, workshops and outreach will invite new talent and expertise. Computational science adds new dimensions to science and its practice. Disciplines of fusion, accelerator science, and combustion are poised to blur the boundaries between pure and applied science. As we open the door into FY2006 we shall see a landscape of new scientific challenges: in biology, chemistry, materials, and astrophysics to name a few. The enabling technologies of SciDAC have been transformational as drivers of change. Planning for major new software systems assumes a base line employing Common Component Architectures and this has become a household word for new software projects. While grid algorithms and mesh refinement software have transformed applications software, data management and visualization have transformed our understanding of science from data. The Gordon Bell prize now seems to be dominated by computational science and solvers developed by TOPS ISIC. The priorities of the Office of Science in the Department of Energy are clear. The 20 year facilities plan is driven by new science. High performance computing is placed amongst the two highest priorities. Moore's law says that by the end of the next cycle of SciDAC we shall have peta-flop computers. The challenges of petascale computing are enormous. These and the associated computational science are the highest priorities for computing within the Office of Science. Our effort in Leadership Class computing is just a first step towards this goal. Clearly, computational science at this scale will face enormous challenges and possibilities. Performance evaluation and prediction will be critical to unraveling the needed software technologies. We must not lose sight of our overarching goal—that of scientific discovery. Science does not stand still and the landscape of science discovery and computing holds

  18. Compiler Technology for Parallel Scientific Computation

    Directory of Open Access Journals (Sweden)

    Can Özturan

    1994-01-01

    There is a need for compiler technology that, given the source program, will generate efficient parallel codes for different architectures with minimal user involvement. Parallel computation is becoming indispensable in solving large-scale problems in science and engineering. Yet, the use of parallel computation is limited by the high costs of developing the needed software. To overcome this difficulty we advocate a comprehensive approach to the development of scalable architecture-independent software for scientific computation based on our experience with equational programming language (EPL). Our approach is based on a program decomposition, parallel code synthesis, and run-time support for parallel scientific computation. The program decomposition is guided by the source program annotations provided by the user. The synthesis of parallel code is based on configurations that describe the overall computation as a set of interacting components. Run-time support is provided by the compiler-generated code that redistributes computation and data during object program execution. The generated parallel code is optimized using techniques of data alignment, operator placement, wavefront determination, and memory optimization. In this article we discuss annotations, configurations, parallel code generation, and run-time support suitable for parallel programs written in the functional parallel programming language EPL and in Fortran.

  19. Computer programs in accelerator physics

    International Nuclear Information System (INIS)

    Keil, E.

    1984-01-01

    Three areas of accelerator physics are discussed in which computer programs have been applied with much success: i) single-particle beam dynamics in circular machines, i.e. the design and matching of machine lattices; ii) computations of electromagnetic fields in RF cavities and similar objects, useful for the design of RF cavities and for the calculation of wake fields; iii) simulation of betatron and synchrotron oscillations in a machine with non-linear elements, e.g. sextupoles, and of bunch lengthening due to longitudinal wake fields. (orig.)
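
    As a toy illustration of item (iii), betatron oscillations observed at a fixed point can be simulated by iterating a linearized one-turn transfer matrix; the tune and beta-function values below are invented for the example and do not refer to any machine discussed in the paper.

      import numpy as np

      # Illustrative (invented) machine parameters
      tune = 0.28              # fractional betatron tune
      beta = 12.0              # beta function at the observation point [m]
      mu = 2.0 * np.pi * tune  # betatron phase advance per turn

      # One-turn transfer matrix at a point where alpha = 0
      M = np.array([[np.cos(mu),          beta * np.sin(mu)],
                    [-np.sin(mu) / beta,  np.cos(mu)]])

      state = np.array([1.0e-3, 0.0])   # initial (x [m], x' [rad])
      turn_by_turn = []
      for _ in range(100):
          state = M @ state             # advance one revolution
          turn_by_turn.append(state[0])

      # The recorded positions oscillate with the fractional tune 0.28
      print(min(turn_by_turn), max(turn_by_turn))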

  20. A multipurpose accelerator facility for Kharkov National Scientific Center

    International Nuclear Information System (INIS)

    Bulyak, E.; Dolbnya, A.; Gladkikh, P.; Karnaukhov, I.; Kononenko, S.; Kozin, V.; Lapshin, V.; Mytsykov, A.; Peev, F.; Shcherbakov, A.; Tarasenko, A.; Telegin, Yu.; Zelinsky, A.

    2000-01-01

    The project of a multifunctional accelerator storage ring complex with electron energy of up to 2 GeV is described. The lattice of the complex was chosen taking into account the existing equipment, the layout of the buildings and infrastructure of the 2 GeV electron linear accelerator, the necessity of obtaining precise parameters of photon and electron beams, and economic efficiency. The principal parameters of the storage ring are a circumference of 91 m, an energy range of 0.3-2.0 GeV, a natural beam emittance of 25 nm and a stored beam current of 0.5 A. The complex provides photon beams (6-7 beam lines at the first stage, up to 20 later on) and CW electron beams (energy region 0.3-0.5 GeV) for scientific and industrial applications.

  1. A multipurpose accelerator facility for Kharkov National Scientific Center

    Energy Technology Data Exchange (ETDEWEB)

    Bulyak, E.; Dolbnya, A.; Gladkikh, P.; Karnaukhov, I.; Kononenko, S.; Kozin, V.; Lapshin, V.; Mytsykov, A.; Peev, F.; Shcherbakov, A. E-mail: shcherbakov@kipt.kharkov.ua; Tarasenko, A.; Telegin, Yu.; Zelinsky, A

    2000-06-21

    The project of a multifunctional accelerator storage ring complex with electron energy of up to 2 GeV is described. The lattice of the complex was chosen taking into account the existing equipment, the layout of the buildings and infrastructure of the 2 GeV electron linear accelerator, the necessity of obtaining precise parameters of photon and electron beams, and economic efficiency. The principal parameters of the storage ring are a circumference of 91 m, an energy range of 0.3-2.0 GeV, a natural beam emittance of 25 nm and a stored beam current of 0.5 A. The complex provides photon beams (6-7 beam lines at the first stage, up to 20 later on) and CW electron beams (energy region 0.3-0.5 GeV) for scientific and industrial applications.

  2. A multipurpose accelerator facility for Kharkov National Scientific Center

    CERN Document Server

    Bulyak, E V; Gladkikh, P; Karnaukhov, I; Kononenko, S; Kozin, V; Lapshin, V G; Mytsykov, A; Peev, F; Shcherbakov, A; Tarasenko, A; Telegin, Yu P; Zelinsky, A

    2000-01-01

    The project of a multifunctional accelerator storage ring complex with electron energy of up to 2 GeV is described. The lattice of the complex was chosen taking into account the existing equipment, the layout of the buildings and infrastructure of the 2 GeV electron linear accelerator, the necessity of obtaining precise parameters of photon and electron beams, and economic efficiency. The principal parameters of the storage ring are a circumference of 91 m, an energy range of 0.3-2.0 GeV, a natural beam emittance of 25 nm and a stored beam current of 0.5 A. The complex provides photon beams (6-7 beam lines at the first stage, up to 20 later on) and CW electron beams (energy region 0.3-0.5 GeV) for scientific and industrial applications.

  3. Scientific computing an introduction using Maple and Matlab

    CERN Document Server

    Gander, Walter; Kwok, Felix

    2014-01-01

    Scientific computing is the study of how to use computers effectively to solve problems that arise from the mathematical modeling of phenomena in science and engineering. It is based on mathematics, numerical and symbolic/algebraic computations and visualization. This book serves as an introduction to both the theory and practice of scientific computing, with each chapter presenting the basic algorithms that serve as the workhorses of many scientific codes; we explain both the theory behind these algorithms and how they must be implemented in order to work reliably in finite-precision arithmetic. The book includes many programs written in Matlab and Maple – Maple is often used to derive numerical algorithms, whereas Matlab is used to implement them. The theory is developed in such a way that students can learn by themselves as they work through the text. Each chapter contains numerous examples and problems to help readers understand the material “hands-on”.

  4. Introduction to the LaRC central scientific computing complex

    Science.gov (United States)

    Shoosmith, John N.

    1993-01-01

    The computers and associated equipment that make up the Central Scientific Computing Complex of the Langley Research Center are briefly described. The electronic networks that provide access to the various components of the complex, and a number of areas that can be used by Langley and contractor staff for special applications (scientific visualization, image processing, software engineering, and grid generation), are also described. Flight simulation facilities that use the central computers are described. Management of the complex, procedures for its use, and available services and resources are discussed. This document is intended for new users of the complex, for current users who wish to keep apprised of changes, and for visitors who need to understand the role of central scientific computers at Langley.

  5. Proceeding on the scientific meeting and presentation on accelerator technology and its applications: physics, nuclear reactor

    International Nuclear Information System (INIS)

    Pramudita Anggraita; Sudjatmoko; Darsono; Tri Marji Atmono; Tjipto Sujitno; Wahini Nurhayati

    2012-01-01

    The scientific meeting and presentation on accelerator technology and its applications was held by PTAPB BATAN on 13 December 2011. The meeting aimed to promote accelerator technology and its applications to accelerator scientists, academics, researchers and technology users, as well as to present accelerator-based research that has been conducted by researchers inside and outside BATAN. This proceeding contains 23 papers on physics and nuclear reactors. (PPIKSN)

  6. Computer simulations of compact toroid formation and acceleration

    International Nuclear Information System (INIS)

    Peterkin, R.E. Jr.; Sovinec, C.R.

    1990-01-01

    Experiments to form, accelerate, and focus compact toroid plasmas will be performed on the 9.4 MJ SHIVA STAR fast capacitor bank at the Air Force Weapons Laboratory during 1990. The MARAUDER (magnetically accelerated rings to achieve ultrahigh directed energy and radiation) program is a research effort to accelerate magnetized plasma rings with masses between 0.1 and 1.0 mg to velocities above 10^8 cm/sec and energies above 1 MJ. Research on these high-velocity compact toroids may lead to the development of very fast opening switches, high-power microwave sources, and an alternative path to inertial confinement fusion. Design of a compact toroid accelerator experiment on the SHIVA STAR capacitor bank is underway, and computer simulations with the 2 1/2-dimensional magnetohydrodynamics code, MACH2, have been performed to guide this endeavor. The compact toroids are produced in a magnetized coaxial plasma gun, and the acceleration will occur in a configuration similar to a coaxial railgun. Detailed calculations of the formation and equilibration of a low beta magnetic force-free configuration (curl B = kB) have been performed with MACH2. In this paper, the authors discuss computer simulations of the focusing and acceleration of the toroid.

  7. High-performance scientific computing in the cloud

    Science.gov (United States)

    Jorissen, Kevin; Vila, Fernando; Rehr, John

    2011-03-01

    Cloud computing has the potential to open up high-performance computational science to a much broader class of researchers, owing to its ability to provide on-demand, virtualized computational resources. However, before such approaches can become commonplace, user-friendly tools must be developed that hide the unfamiliar cloud environment and streamline the management of cloud resources for many scientific applications. We have recently shown that high-performance cloud computing is feasible for parallelized x-ray spectroscopy calculations. We now present benchmark results for a wider selection of scientific applications focusing on electronic structure and spectroscopic simulation software in condensed matter physics. These applications are driven by an improved portable interface that can manage virtual clusters and run various applications in the cloud. We also describe a next generation of cluster tools, aimed at improved performance and a more robust cluster deployment. Supported by NSF grant OCI-1048052.

  8. A high performance scientific cloud computing environment for materials simulations

    OpenAIRE

    Jorissen, Kevin; Vila, Fernando D.; Rehr, John J.

    2011-01-01

    We describe the development of a scientific cloud computing (SCC) platform that offers high performance computation capability. The platform consists of a scientific virtual machine prototype containing a UNIX operating system and several materials science codes, together with essential interface tools (an SCC toolset) that offers functionality comparable to local compute clusters. In particular, our SCC toolset provides automatic creation of virtual clusters for parallel computing, including...

  9. Scientific applications of symbolic computation

    International Nuclear Information System (INIS)

    Hearn, A.C.

    1976-02-01

    The use of symbolic computation systems for problem solving in scientific research is reviewed. The nature of the field is described, and particular examples are considered from celestial mechanics, quantum electrodynamics and general relativity. Symbolic integration and some more recent applications of algebra systems are also discussed. [fr]
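
    A small, hedged illustration of the kind of symbolic integration such systems perform, written here with the modern SymPy library rather than the systems surveyed in the report:

      import sympy as sp

      x, a = sp.symbols('x a', positive=True)

      # Symbolic indefinite and definite integration of a damped oscillation
      expr = sp.sin(x) * sp.exp(-a * x)
      indefinite = sp.integrate(expr, x)
      definite = sp.integrate(expr, (x, 0, sp.oo))

      print(sp.simplify(indefinite))
      print(sp.simplify(definite))   # closed form: 1/(a**2 + 1)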

  10. US DOE Grand Challenge in Computational Accelerator Physics

    International Nuclear Information System (INIS)

    Ryne, R.; Habib, S.; Qiang, J.; Ko, K.; Li, Z.; McCandless, B.; Mi, W.; Ng, C.; Saparov, M.; Srinivas, V.; Sun, Y.; Zhan, X.; Decyk, V.; Golub, G.

    1998-01-01

    Particle accelerators are playing an increasingly important role in basic and applied science, and are enabling new accelerator-driven technologies. But the design of next-generation accelerators, such as linear colliders and high intensity linacs, will require a major advance in numerical modeling capability due to extremely stringent beam control and beam loss requirements, and the presence of highly complex three-dimensional accelerator components. To address this situation, the U.S. Department of Energy has approved a ''Grand Challenge'' in Computational Accelerator Physics, whose primary goal is to develop a parallel modeling capability that will enable high performance, large scale simulations for the design, optimization, and numerical validation of next-generation accelerators. In this paper we report on the status of the Grand Challenge

  11. FPS scientific and supercomputers computers in chemistry

    International Nuclear Information System (INIS)

    Curington, I.J.

    1987-01-01

    FPS Array Processors, scientific computers, and highly parallel supercomputers are used in nearly all aspects of compute-intensive computational chemistry. A survey is made of work utilizing this equipment, both published and current research. The relationship of the computer architecture to computational chemistry is discussed, with specific reference to Molecular Dynamics, Quantum Monte Carlo simulations, and Molecular Graphics applications. Recent installations of the FPS T-Series are highlighted, and examples of Molecular Graphics programs running on the FPS-5000 are shown

  12. HPCToolkit: performance tools for scientific computing

    Energy Technology Data Exchange (ETDEWEB)

    Tallent, N; Mellor-Crummey, J; Adhianto, L; Fagan, M; Krentel, M [Department of Computer Science, Rice University, Houston, TX 77005 (United States)

    2008-07-15

    As part of the U.S. Department of Energy's Scientific Discovery through Advanced Computing (SciDAC) program, science teams are tackling problems that require simulation and modeling on petascale computers. As part of activities associated with the SciDAC Center for Scalable Application Development Software (CScADS) and the Performance Engineering Research Institute (PERI), Rice University is building software tools for performance analysis of scientific applications on the leadership-class platforms. In this poster abstract, we briefly describe the HPCToolkit performance tools and how they can be used to pinpoint bottlenecks in SPMD and multi-threaded parallel codes. We demonstrate HPCToolkit's utility by applying it to two SciDAC applications: the S3D code for simulation of turbulent combustion and the MFDn code for ab initio calculations of microscopic structure of nuclei.

  13. HPCToolkit: performance tools for scientific computing

    International Nuclear Information System (INIS)

    Tallent, N; Mellor-Crummey, J; Adhianto, L; Fagan, M; Krentel, M

    2008-01-01

    As part of the U.S. Department of Energy's Scientific Discovery through Advanced Computing (SciDAC) program, science teams are tackling problems that require simulation and modeling on petascale computers. As part of activities associated with the SciDAC Center for Scalable Application Development Software (CScADS) and the Performance Engineering Research Institute (PERI), Rice University is building software tools for performance analysis of scientific applications on the leadership-class platforms. In this poster abstract, we briefly describe the HPCToolkit performance tools and how they can be used to pinpoint bottlenecks in SPMD and multi-threaded parallel codes. We demonstrate HPCToolkit's utility by applying it to two SciDAC applications: the S3D code for simulation of turbulent combustion and the MFDn code for ab initio calculations of microscopic structure of nuclei

  14. National Energy Research Scientific Computing Center (NERSC): Advancing the frontiers of computational science and technology

    Energy Technology Data Exchange (ETDEWEB)

    Hules, J. [ed.

    1996-11-01

    National Energy Research Scientific Computing Center (NERSC) provides researchers with high-performance computing tools to tackle science's biggest and most challenging problems. Founded in 1974 by DOE/ER, the Controlled Thermonuclear Research Computer Center was the first unclassified supercomputer center and was the model for those that followed. Over the years the center's name was changed to the National Magnetic Fusion Energy Computer Center and then to NERSC; it was relocated to LBNL. NERSC, one of the largest unclassified scientific computing resources in the world, is the principal provider of general-purpose computing services to DOE/ER programs: Magnetic Fusion Energy, High Energy and Nuclear Physics, Basic Energy Sciences, Health and Environmental Research, and the Office of Computational and Technology Research. NERSC users are a diverse community located throughout the US and in several foreign countries. This brochure describes: the NERSC advantage, its computational resources and services, future technologies, scientific resources, and computational science of scale (interdisciplinary research over a decade or longer; examples: combustion in engines, waste management chemistry, global climate change modeling).

  15. A high performance scientific cloud computing environment for materials simulations

    Science.gov (United States)

    Jorissen, K.; Vila, F. D.; Rehr, J. J.

    2012-09-01

    We describe the development of a scientific cloud computing (SCC) platform that offers high performance computation capability. The platform consists of a scientific virtual machine prototype containing a UNIX operating system and several materials science codes, together with essential interface tools (an SCC toolset) that offers functionality comparable to local compute clusters. In particular, our SCC toolset provides automatic creation of virtual clusters for parallel computing, including tools for execution and monitoring performance, as well as efficient I/O utilities that enable seamless connections to and from the cloud. Our SCC platform is optimized for the Amazon Elastic Compute Cloud (EC2). We present benchmarks for prototypical scientific applications and demonstrate performance comparable to local compute clusters. To facilitate code execution and provide user-friendly access, we have also integrated cloud computing capability in a JAVA-based GUI. Our SCC platform may be an alternative to traditional HPC resources for materials science or quantum chemistry applications.

  16. Top scientific research center deploys Zambeel Aztera (TM) network storage system in high performance environment

    CERN Multimedia

    2002-01-01

    " The National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory has implemented a Zambeel Aztera storage system and software to accelerate the productivity of scientists running high performance scientific simulations and computations" (1 page).

  17. Scientific Computing in Electrical Engineering

    CERN Document Server

    Amrhein, Wolfgang; Zulehner, Walter

    2018-01-01

    This collection of selected papers presented at the 11th International Conference on Scientific Computing in Electrical Engineering (SCEE), held in St. Wolfgang, Austria, in 2016, showcases the state of the art in SCEE. The aim of the SCEE 2016 conference was to bring together scientists from academia and industry, mathematicians, electrical engineers, computer scientists, and physicists, and to promote intensive discussions on industrially relevant mathematical problems, with an emphasis on the modeling and numerical simulation of electronic circuits and devices, electromagnetic fields, and coupled problems. The focus in methodology was on model order reduction and uncertainty quantification. This extensive reference work is divided into six parts: Computational Electromagnetics, Circuit and Device Modeling and Simulation, Coupled Problems and Multi‐Scale Approaches in Space and Time, Mathematical and Computational Methods Including Uncertainty Quantification, Model Order Reduction, and Industrial Applicat...

  18. Symbolic mathematical computing: orbital dynamics and application to accelerators

    International Nuclear Information System (INIS)

    Fateman, R.

    1986-01-01

    Computer-assisted symbolic mathematical computation has become increasingly useful in applied mathematics. A brief introduction to such capabilities and some examples related to orbital dynamics and accelerator physics are presented. (author)

  19. Computer-based training for particle accelerator personnel

    International Nuclear Information System (INIS)

    Silbar, R.R.

    1999-01-01

    A continuing problem at many laboratories is the training of new operators in the arcane technology of particle accelerators. Presently most of this training occurs on the job, under a mentor. Such training is expensive, and while it provides operational experience, it is frequently lax in providing the physics background needed to truly understand accelerator systems. Using computers in a self-paced, interactive environment can be more effective in meeting this training need. copyright 1999 American Institute of Physics

  20. Advanced scientific computational methods and their applications to nuclear technologies. (4) Overview of scientific computational methods, introduction of continuum simulation methods and their applications (4)

    International Nuclear Information System (INIS)

    Sekimura, Naoto; Okita, Taira

    2006-01-01

    Scientific computational methods have advanced remarkably with the progress of nuclear development. They have played the role of a weft connecting the various realms of nuclear engineering, and an introductory course on advanced scientific computational methods and their applications to nuclear technologies has therefore been prepared in serial form. This is the fourth issue, giving an overview of scientific computational methods with an introduction to continuum simulation methods and their applications. Simulation methods for physical radiation effects on materials are reviewed, covering processes such as the binary collision approximation, molecular dynamics, the kinetic Monte Carlo method, the reaction rate method and dislocation dynamics. (T. Tanaka)

  1. A method to mine workflows from provenance for assisting scientific workflow composition

    NARCIS (Netherlands)

    Zeng, R.; He, X.; Aalst, van der W.M.P.

    2011-01-01

    Scientific workflows have recently emerged as a new paradigm for representing and managing complex distributed scientific computations and are used to accelerate the pace of scientific discovery. In many disciplines, individual workflows are large and complicated due to the large quantities of data

  2. A heterogeneous computing accelerated SCE-UA global optimization method using OpenMP, OpenCL, CUDA, and OpenACC.

    Science.gov (United States)

    Kan, Guangyuan; He, Xiaoyan; Ding, Liuqian; Li, Jiren; Liang, Ke; Hong, Yang

    2017-10-01

    The shuffled complex evolution optimization developed at the University of Arizona (SCE-UA) has been successfully applied for many years in various kinds of scientific and engineering optimization applications, such as hydrological model parameter calibration. The algorithm possesses good global optimality, convergence stability and robustness. However, benchmark and real-world applications reveal the poor computational efficiency of the SCE-UA. This research aims at the parallelization and acceleration of the SCE-UA method based on powerful heterogeneous computing technology. The parallel SCE-UA is implemented on an Intel Xeon multi-core CPU (by using OpenMP and OpenCL) and an NVIDIA Tesla many-core GPU (by using OpenCL, CUDA, and OpenACC). The serial and parallel SCE-UA were tested on the Griewank benchmark function. Comparison results indicate that the parallel SCE-UA significantly improves computational efficiency compared to the original serial version. The OpenCL implementation obtains the best overall acceleration results, however at the cost of the most complex source code. The parallel SCE-UA has bright prospects for application to real-world problems.
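
    The Griewank benchmark used in the study is easy to reproduce; the sketch below defines it and evaluates a population of candidate points concurrently, with Python's multiprocessing standing in for the OpenMP/OpenCL/CUDA loops of the paper (the population size and search bounds are assumptions for illustration).

      import numpy as np
      from multiprocessing import Pool

      def griewank(x):
          """Griewank benchmark function; global minimum 0 at x = 0."""
          x = np.asarray(x, dtype=float)
          i = np.arange(1, x.size + 1)
          return np.sum(x ** 2) / 4000.0 - np.prod(np.cos(x / np.sqrt(i))) + 1.0

      def evaluate_population(points, workers=4):
          """Evaluate candidate points in parallel, mimicking the concurrent
          objective-function evaluations inside an accelerated SCE-UA."""
          with Pool(workers) as pool:
              return pool.map(griewank, points)

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          population = [rng.uniform(-600.0, 600.0, size=10) for _ in range(32)]
          print(min(evaluate_population(population)))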

  3. Computational needs for the RIA accelerator systems

    International Nuclear Information System (INIS)

    Ostroumov, P.N.; Nolen, J.A.; Mustapha, B.

    2006-01-01

    This paper discusses the computational needs for the full design and simulation of the RIA accelerator systems. Beam dynamics simulations are essential, first to define and optimize the architectural design of both the driver linac and the post-accelerator. They are also important for studying different design options and various off-normal modes in order to decide on the best-performing and most cost-effective design. Due to the high-intensity primary beams, the beam-stripper interaction is a source of both radioactivation and beam contamination and should be carefully investigated and simulated for proper beam collimation and shielding. The targets and fragment separators area also needs very special attention in order to reduce any radiological hazards through careful shielding design. For all these simulations parallel computing is an absolute necessity.

  4. Component-based software for high-performance scientific computing

    Energy Technology Data Exchange (ETDEWEB)

    Alexeev, Yuri; Allan, Benjamin A; Armstrong, Robert C; Bernholdt, David E; Dahlgren, Tamara L; Gannon, Dennis; Janssen, Curtis L; Kenny, Joseph P; Krishnan, Manojkumar; Kohl, James A; Kumfert, Gary; McInnes, Lois Curfman; Nieplocha, Jarek; Parker, Steven G; Rasmussen, Craig; Windus, Theresa L

    2005-01-01

    Recent advances in both computational hardware and multidisciplinary science have given rise to an unprecedented level of complexity in scientific simulation software. This paper describes an ongoing grass roots effort aimed at addressing complexity in high-performance computing through the use of Component-Based Software Engineering (CBSE). Highlights of the benefits and accomplishments of the Common Component Architecture (CCA) Forum and SciDAC ISIC are given, followed by an illustrative example of how the CCA has been applied to drive scientific discovery in quantum chemistry. Thrusts for future research are also described briefly.

  5. Component-based software for high-performance scientific computing

    International Nuclear Information System (INIS)

    Alexeev, Yuri; Allan, Benjamin A; Armstrong, Robert C; Bernholdt, David E; Dahlgren, Tamara L; Gannon, Dennis; Janssen, Curtis L; Kenny, Joseph P; Krishnan, Manojkumar; Kohl, James A; Kumfert, Gary; McInnes, Lois Curfman; Nieplocha, Jarek; Parker, Steven G; Rasmussen, Craig; Windus, Theresa L

    2005-01-01

    Recent advances in both computational hardware and multidisciplinary science have given rise to an unprecedented level of complexity in scientific simulation software. This paper describes an ongoing grass roots effort aimed at addressing complexity in high-performance computing through the use of Component-Based Software Engineering (CBSE). Highlights of the benefits and accomplishments of the Common Component Architecture (CCA) Forum and SciDAC ISIC are given, followed by an illustrative example of how the CCA has been applied to drive scientific discovery in quantum chemistry. Thrusts for future research are also described briefly

  6. Scientific Challenges for Understanding the Quantum Universe

    Energy Technology Data Exchange (ETDEWEB)

    Khaleel, Mohammad A.

    2009-10-16

    A workshop titled "Scientific Challenges for Understanding the Quantum Universe" was held December 9-11, 2008, at the Kavli Institute for Particle Astrophysics and Cosmology at the Stanford Linear Accelerator Center-National Accelerator Laboratory. The primary purpose of the meeting was to examine how computing at the extreme scale can contribute to meeting forefront scientific challenges in particle physics, particle astrophysics and cosmology. The workshop was organized around five research areas with associated panels. Three of these, "High Energy Theoretical Physics," "Accelerator Simulation," and "Experimental Particle Physics," addressed research of the Office of High Energy Physics’ Energy and Intensity Frontiers, while the "Cosmology and Astrophysics Simulation" and "Astrophysics Data Handling, Archiving, and Mining" panels were associated with the Cosmic Frontier.

  7. A method to build and analyze scientific workflows from provenance through process mining

    NARCIS (Netherlands)

    Zeng, R.; He, X.; Li, Jiafei; Liu, Zheng; Aalst, van der W.M.P.

    2011-01-01

    Scientific workflows have recently emerged as a new paradigm for representing and managing complex distributed scientific computations and are used to accelerate the pace of scientific discovery. In many disciplines, individual workflows are large due to the large quantities of data used. As

  8. Software Defects, Scientific Computation and the Scientific Method

    CERN Multimedia

    CERN. Geneva

    2011-01-01

    Computation has rapidly grown in the last 50 years so that in many scientific areas it is the dominant partner in the practice of science. Unfortunately, unlike the experimental sciences, it does not adhere well to the principles of the scientific method as espoused by, for example, the philosopher Karl Popper. Such principles are built around the notions of deniability and reproducibility. Although much research effort has been spent on measuring the density of software defects, much less has been spent on the more difficult problem of measuring their effect on the output of a program. This talk explores these issues with numerous examples suggesting how this situation might be improved to match the demands of modern science. Finally it develops a theoretical model based on an amalgam of statistical mechanics and Hartley/Shannon information theory which suggests that software systems have strong implementation independent behaviour and supports the widely observed phenomenon that defects clust...

  9. Software Engineering for Scientific Computer Simulations

    Science.gov (United States)

    Post, Douglass E.; Henderson, Dale B.; Kendall, Richard P.; Whitney, Earl M.

    2004-11-01

    Computer simulation is becoming a very powerful tool for analyzing and predicting the performance of fusion experiments. Simulation efforts are evolving from including only a few effects to many effects, from small teams with a few people to large teams, and from workstations and small processor count parallel computers to massively parallel platforms. Successfully making this transition requires attention to software engineering issues. We report on the conclusions drawn from a number of case studies of large scale scientific computing projects within DOE, academia and the DoD. The major lessons learned include attention to sound project management including setting reasonable and achievable requirements, building a good code team, enforcing customer focus, carrying out verification and validation and selecting the optimum computational mathematics approaches.

  10. Computing on Knights and Kepler Architectures

    International Nuclear Information System (INIS)

    Bortolotti, G; Caberletti, M; Ferraro, A; Giacomini, F; Manzali, M; Maron, G; Salomoni, D; Crimi, G; Zanella, M

    2014-01-01

    A recent trend in scientific computing is the increasingly important role of co-processors, originally built to accelerate graphics rendering, and now used for general high-performance computing. The INFN Computing On Knights and Kepler Architectures (COKA) project focuses on assessing the suitability of co-processor boards for scientific computing in a wide range of physics applications, and on studying the best programming methodologies for these systems. Here we present, in a comparative way, our results in porting a Lattice Boltzmann code to two state-of-the-art accelerators: the NVIDIA K20X and the Intel Xeon-Phi. We describe our implementations, analyze results and compare them with a baseline architecture adopting Intel Sandy Bridge CPUs.

  11. GPU-accelerated computation of electron transfer.

    Science.gov (United States)

    Höfinger, Siegfried; Acocella, Angela; Pop, Sergiu C; Narumi, Tetsu; Yasuoka, Kenji; Beu, Titus; Zerbetto, Francesco

    2012-11-05

    Electron transfer is a fundamental process that can be studied with the help of computer simulation. The underlying quantum mechanical description renders the problem a computationally intensive application. In this study, we probe the graphics processing unit (GPU) for suitability to this type of problem. Time-critical components are identified via profiling of an existing implementation and several different variants are tested involving the GPU at increasing levels of abstraction. A publicly available library supporting basic linear algebra operations on the GPU turns out to accelerate the computation approximately 50-fold with minor dependence on actual problem size. The performance gain does not compromise numerical accuracy and is of significant value for practical purposes. Copyright © 2012 Wiley Periodicals, Inc.
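
    The roughly 50-fold speed-up reported in the study came from delegating basic linear algebra to a GPU library; a minimal sketch of that offloading pattern is shown below, using CuPy as a stand-in for the library actually used in the paper (matrix sizes are illustrative).

      import numpy as np
      import cupy as cp  # assumes a CUDA-capable GPU with CuPy installed

      # A dense linear-algebra bottleneck on the host, e.g. a large matrix
      # product of the kind arising inside an electron-transfer calculation
      A = np.random.rand(4096, 4096)
      B = np.random.rand(4096, 4096)

      # Offload the BLAS-level operation to the GPU and copy the result back
      A_gpu, B_gpu = cp.asarray(A), cp.asarray(B)
      C_gpu = A_gpu @ B_gpu        # executed by the GPU BLAS on the device
      C = cp.asnumpy(C_gpu)

      print(C.shape)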

  12. A Computing Environment to Support Repeatable Scientific Big Data Experimentation of World-Wide Scientific Literature

    Energy Technology Data Exchange (ETDEWEB)

    Schlicher, Bob G [ORNL; Kulesz, James J [ORNL; Abercrombie, Robert K [ORNL; Kruse, Kara L [ORNL

    2015-01-01

    A principal tenet of the scientific method is that experiments must be repeatable and rely on ceteris paribus (i.e., all other things being equal). As a scientific community involved in data sciences, we must investigate ways to establish an environment where experiments can be repeated. We can no longer allude to where the data comes from; we must add rigor to the data collection and management process from which our analysis is conducted. This paper describes a computing environment to support repeatable scientific big data experimentation of world-wide scientific literature, and recommends a system that is housed at the Oak Ridge National Laboratory in order to provide value to investigators from government agencies, academic institutions, and industry entities. The described computing environment also adheres to the recently instituted digital data management plan mandated by multiple US government agencies, which involves all stages of the digital data life cycle including capture, analysis, sharing, and preservation. It particularly focuses on the sharing and preservation of digital research data. The details of this computing environment are explained within the context of cloud services by the three-layer classification of Software as a Service, Platform as a Service, and Infrastructure as a Service.

  13. Quantum computing accelerator I/O : LDRD 52750 final report

    International Nuclear Information System (INIS)

    Schroeppel, Richard Crabtree; Modine, Normand Arthur; Ganti, Anand; Pierson, Lyndon George; Tigges, Christopher P.

    2003-01-01

    In a superposition of quantum states, a bit can be in both the states '0' and '1' at the same time. This feature of the quantum bit or qubit has no parallel in classical systems. Currently, quantum computers consisting of 4 to 7 qubits in a 'quantum computing register' have been built. Innovative algorithms suited to quantum computing are now beginning to emerge, applicable to sorting and cryptanalysis, and other applications. A framework for overcoming slightly inaccurate quantum gate interactions and for causing quantum states to survive interactions with surrounding environment is emerging, called quantum error correction. Thus there is the potential for rapid advances in this field. Although quantum information processing can be applied to secure communication links (quantum cryptography) and to crack conventional cryptosystems, the first few computing applications will likely involve a 'quantum computing accelerator' similar to a 'floating point arithmetic accelerator' interfaced to a conventional Von Neumann computer architecture. This research is to develop a roadmap for applying Sandia's capabilities to the solution of some of the problems associated with maintaining quantum information, and with getting data into and out of such a 'quantum computing accelerator'. We propose to focus this work on 'quantum I/O technologies' by applying quantum optics on semiconductor nanostructures to leverage Sandia's expertise in semiconductor microelectronic/photonic fabrication techniques, as well as its expertise in information theory, processing, and algorithms. The work will be guided by understanding of practical requirements of computing and communication architectures. This effort will incorporate ongoing collaboration between 9000, 6000 and 1000 and between junior and senior personnel. Follow-on work to fabricate and evaluate appropriate experimental nano/microstructures will be proposed as a result of this work

  14. Mathematical model of accelerator output characteristics and their calculation on a computer

    International Nuclear Information System (INIS)

    Mishulina, O.A.; Ul'yanina, M.N.; Kornilova, T.V.

    1975-01-01

    A mathematical model of the output characteristics of a linear accelerator is described. The model is a system of differential equations. The presence of phase limitations is a specific feature of the problem formulation; it makes it possible to ensure higher simulation accuracy and to determine a capture coefficient. An algorithm for computing the output characteristics based on the suggested mathematical model is elaborated. The output characteristics of the accelerator are the capture coefficient, the coordinate expectation characterizing the average phase of the beam particles, the coordinate expectation characterizing the average reverse relative velocity of the beam particles, and the dispersions of these coordinates. Calculation methods for the accelerator output characteristics are described in detail. The computations have been performed on the BESM-6 computer, the computing time for the characteristics being 2 min 20 sec. The relative error of the parameter computation averages 10^-2
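
    A hedged sketch of how such ensemble output characteristics (capture coefficient, mean phase, dispersion) can be estimated is given below; the toy pendulum-like longitudinal equations and the crude acceptance test are illustrative assumptions, not the paper's model.

      import numpy as np
      from scipy.integrate import solve_ivp

      def longitudinal_toy(t, y):
          """Toy longitudinal motion: phase phi and relative velocity delta."""
          phi, delta = y
          return [delta, -np.sin(phi)]

      rng = np.random.default_rng(1)
      n_particles = 500
      phi0 = rng.uniform(-np.pi, np.pi, n_particles)
      delta0 = rng.normal(0.0, 0.6, n_particles)

      captured_phases = []
      for p, d in zip(phi0, delta0):
          sol = solve_ivp(longitudinal_toy, (0.0, 30.0), [p, d], rtol=1e-8)
          phi_end = sol.y[0, -1]
          if abs(phi_end) < np.pi:          # crude phase-acceptance test
              captured_phases.append(phi_end)

      capture_coefficient = len(captured_phases) / n_particles
      mean_phase = float(np.mean(captured_phases))
      phase_dispersion = float(np.var(captured_phases))
      print(capture_coefficient, mean_phase, phase_dispersion)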

  15. Building a High Performance Computing Infrastructure for Novosibirsk Scientific Center

    International Nuclear Information System (INIS)

    Adakin, A; Chubarov, D; Nikultsev, V; Belov, S; Kaplin, V; Sukharev, A; Zaytsev, A; Kalyuzhny, V; Kuchin, N; Lomakin, S

    2011-01-01

    Novosibirsk Scientific Center (NSC), also known worldwide as Akademgorodok, is one of the largest Russian scientific centers, hosting Novosibirsk State University (NSU) and more than 35 research organizations of the Siberian Branch of the Russian Academy of Sciences, including the Budker Institute of Nuclear Physics (BINP), the Institute of Computational Technologies (ICT), and the Institute of Computational Mathematics and Mathematical Geophysics (ICM and MG). Since each institute has specific requirements on the architecture of the computing farms involved in its research field, there are currently several computing facilities hosted by NSC institutes, each optimized for a particular set of tasks; the largest of these are the NSU Supercomputer Center, the Siberian Supercomputer Center (ICM and MG), and the Grid Computing Facility of BINP. Recently a dedicated optical network with an initial bandwidth of 10 Gbps connecting these three facilities was built in order to make it possible to share the computing resources among the research communities of the participating institutes, thus providing a common platform for building the computing infrastructure for various scientific projects. Unification of the computing infrastructure is achieved by extensive use of virtualization technologies based on the XEN and KVM platforms. The solution implemented was tested thoroughly within the computing environment of the KEDR detector experiment, which is being carried out at BINP, and is foreseen to be applied to the use cases of other HEP experiments in the near future.

  16. ASCR Cybersecurity for Scientific Computing Integrity - Research Pathways and Ideas Workshop

    Energy Technology Data Exchange (ETDEWEB)

    Peisert, Sean [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Univ. of California, Davis, CA (United States); Potok, Thomas E. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Jones, Todd [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-06-03

    At the request of the U.S. Department of Energy's (DOE) Office of Science (SC) Advanced Scientific Computing Research (ASCR) program office, a workshop was held June 2-3, 2015, in Gaithersburg, MD, to identify potential long term (10 to +20 year) cybersecurity fundamental basic research and development challenges, strategies and roadmap facing future high performance computing (HPC), networks, data centers, and extreme-scale scientific user facilities. This workshop was a follow-on to the workshop held January 7-9, 2015, in Rockville, MD, that examined higher level ideas about scientific computing integrity specific to the mission of the DOE Office of Science. Issues included research computation and simulation that takes place on ASCR computing facilities and networks, as well as network-connected scientific instruments, such as those run by various DOE Office of Science programs. Workshop participants included researchers and operational staff from DOE national laboratories, as well as academic researchers and industry experts. Participants were selected based on the submission of abstracts relating to the topics discussed in the previous workshop report [1] and also from other ASCR reports, including "Abstract Machine Models and Proxy Architectures for Exascale Computing" [27], the DOE "Preliminary Conceptual Design for an Exascale Computing Initiative" [28], and the January 2015 machine learning workshop [29]. The workshop was also attended by several observers from DOE and other government agencies. The workshop was divided into three topic areas: (1) Trustworthy Supercomputing, (2) Extreme-Scale Data, Knowledge, and Analytics for Understanding and Improving Cybersecurity, and (3) Trust within High-end Networking and Data Centers. Participants were divided into three corresponding teams based on the category of their abstracts. The workshop began with a series of talks from the program manager and workshop chair, followed by the leaders for each of the

  17. The Extrapolation-Accelerated Multilevel Aggregation Method in PageRank Computation

    Directory of Open Access Journals (Sweden)

    Bing-Yuan Pu

    2013-01-01

    An accelerated multilevel aggregation method is presented for calculating the stationary probability vector of an irreducible stochastic matrix in PageRank computation, where the vector extrapolation method is its accelerator. We show how to periodically combine the extrapolation method with the multilevel aggregation method on the finest level to speed up the PageRank computation. Detailed numerical results are given to illustrate the behavior of this method, and comparisons with typical methods are also made.
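
    A hedged sketch of the simpler ingredient of such schemes is shown below: power iteration for PageRank with a periodic componentwise Aitken (Delta-squared) extrapolation step. The link matrix, damping factor and extrapolation period are illustrative assumptions, and the multilevel aggregation part of the published method is not reproduced here.

      import numpy as np

      def pagerank_extrapolated(P, alpha=0.85, tol=1e-10, extrap_every=10):
          """Power iteration with a periodic Aitken extrapolation step.

          P is a row-stochastic link matrix; the stationary vector solves
          x = alpha * P^T x + (1 - alpha) * v.
          """
          n = P.shape[0]
          v = np.full(n, 1.0 / n)
          x = v.copy()
          history = []
          for k in range(1, 1000):
              x_new = alpha * (P.T @ x) + (1.0 - alpha) * v
              x_new /= x_new.sum()
              history.append(x_new)
              if len(history) >= 3 and k % extrap_every == 0:
                  x0, x1, x2 = history[-3], history[-2], history[-1]
                  denom = x2 - 2.0 * x1 + x0
                  safe = np.abs(denom) > 1e-14   # avoid division by ~zero
                  x_new = x2.copy()
                  x_new[safe] -= (x2[safe] - x1[safe]) ** 2 / denom[safe]
                  x_new = np.abs(x_new)
                  x_new /= x_new.sum()           # re-normalize after the jump
              if np.linalg.norm(x_new - x, 1) < tol:
                  return x_new
              x = x_new
          return x

      # Tiny row-stochastic link matrix for demonstration
      P = np.array([[0.0, 0.5, 0.5],
                    [1.0, 0.0, 0.0],
                    [0.5, 0.5, 0.0]])
      print(pagerank_extrapolated(P))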

  18. DOE Advanced Scientific Computing Advisory Committee (ASCAC) Subcommittee Report on Scientific and Technical Information

    Energy Technology Data Exchange (ETDEWEB)

    Hey, Tony [eScience Institute, University of Washington; Agarwal, Deborah [Lawrence Berkeley National Laboratory; Borgman, Christine [University of California, Los Angeles; Cartaro, Concetta [SLAC National Accelerator Laboratory; Crivelli, Silvia [Lawrence Berkeley National Laboratory; Van Dam, Kerstin Kleese [Pacific Northwest National Laboratory; Luce, Richard [University of Oklahoma; Arjun, Shankar [CADES, Oak Ridge National Laboratory; Trefethen, Anne [University of Oxford; Wade, Alex [Microsoft Research, Microsoft Corporation; Williams, Dean [Lawrence Livermore National Laboratory

    2015-09-04

    The Advanced Scientific Computing Advisory Committee (ASCAC) was charged to form a standing subcommittee to review the Department of Energy’s Office of Scientific and Technical Information (OSTI) and to begin by assessing the quality and effectiveness of OSTI’s recent and current products and services and to comment on its mission and future directions in the rapidly changing environment for scientific publication and data. The Committee met with OSTI staff and reviewed available products, services and other materials. This report summarizes their initial findings and recommendations.

  19. The Potential of the Cell Processor for Scientific Computing

    Energy Technology Data Exchange (ETDEWEB)

    Williams, Samuel; Shalf, John; Oliker, Leonid; Husbands, Parry; Kamil, Shoaib; Yelick, Katherine

    2005-10-14

    The slowing pace of commodity microprocessor performance improvements combined with ever-increasing chip power demands has become of utmost concern to computational scientists. As a result, the high performance computing community is examining alternative architectures that address the limitations of modern cache-based designs. In this work, we examine the potential of using the forthcoming STI Cell processor as a building block for future high-end computing systems. Our work contains several novel contributions. We are the first to present quantitative Cell performance data on scientific kernels and show direct comparisons against leading superscalar (AMD Opteron), VLIW (Intel Itanium2), and vector (Cray X1) architectures. Since neither Cell hardware nor cycle-accurate simulators are currently publicly available, we develop both analytical models and simulators to predict kernel performance. Our work also explores the complexity of mapping several important scientific algorithms onto the Cell's unique architecture. Additionally, we propose modest microarchitectural modifications that could significantly increase the efficiency of double-precision calculations. Overall results demonstrate the tremendous potential of the Cell architecture for scientific computations in terms of both raw performance and power efficiency.
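
    As a rough illustration of the kind of analytical performance model mentioned above, a roofline-style bound predicts kernel runtime as the slower of a compute-bound and a bandwidth-bound estimate. This is a simplified stand-in, not the paper's Cell model, and the peak figures in the example are placeholders.

        def roofline_time(flops, bytes_moved, peak_gflops, peak_bw_gbs):
            """Predicted runtime (seconds): the kernel cannot finish faster than
            either its floating-point work or its memory traffic allows."""
            t_compute = flops / (peak_gflops * 1e9)
            t_memory = bytes_moved / (peak_bw_gbs * 1e9)
            return max(t_compute, t_memory)

        # Example: a sparse matrix-vector multiply with 5 million nonzeros,
        # assuming roughly 2 flops and 12 bytes of memory traffic per nonzero.
        nnz = 5_000_000
        print(roofline_time(flops=2 * nnz, bytes_moved=12 * nnz,
                            peak_gflops=25.0, peak_bw_gbs=25.0))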

  20. Efficient Machine Learning Approach for Optimizing Scientific Computing Applications on Emerging HPC Architectures

    Energy Technology Data Exchange (ETDEWEB)

    Arumugam, Kamesh [Old Dominion Univ., Norfolk, VA (United States)

    2017-05-01

    Developing efficient parallel implementations of scientific applications on multi-core CPUs with accelerators such as GPUs and Xeon Phis is challenging. It requires exploiting the data parallel architecture of the accelerator along with the vector pipelines of modern x86 CPU architectures, load balancing, and efficient memory transfer between different devices. It is relatively easy to meet these requirements for highly structured scientific applications. In contrast, a number of scientific and engineering applications are unstructured. Getting performance on accelerators for these applications is extremely challenging because many of them employ irregular algorithms which exhibit data-dependent control flow and irregular memory accesses. Furthermore, these applications are often iterative with dependencies between steps, making it hard to parallelize across steps. As a result, parallelism in these applications is often limited to a single step. Numerical simulation of charged particle beam dynamics is one such application, where the distribution of work and the memory access pattern at each time step are irregular. Applications with these properties tend to present significant branch and memory divergence, load imbalance between different processor cores, and poor compute and memory utilization. Prior research on parallelizing such irregular applications has focused on optimizing the irregular, data-dependent memory accesses and control flow during a single step of the application independently of the other steps, with the assumption that these patterns are completely unpredictable. We observed that the structure of computation leading to control-flow divergence and irregular memory accesses in one step is similar to that in the next step. It is possible to predict this structure in the current step by observing the computation structure of previous steps. In this dissertation, we present novel machine learning based optimization techniques to address
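
    The observation that one step's irregular structure predicts the next can be illustrated with a toy load balancer: estimate per-item work from the last few steps and partition the items across workers before the next step runs. This is a hypothetical sketch of the idea, not the dissertation's machine learning technique; all names and parameters below are made up.

        import numpy as np

        def predict_work(history, decay=0.5):
            """Exponentially weighted average of per-item work counts from previous steps."""
            weights = np.array([decay ** i for i in range(len(history) - 1, -1, -1)])
            weights /= weights.sum()
            return np.tensordot(weights, np.asarray(history, dtype=float), axes=1)

        def partition_by_prediction(predicted_work, n_workers):
            """Greedy assignment of items to workers so predicted load is balanced."""
            order = np.argsort(predicted_work)[::-1]
            loads = np.zeros(n_workers)
            owner = np.empty(len(predicted_work), dtype=int)
            for item in order:
                w = int(np.argmin(loads))
                owner[item] = w
                loads[w] += predicted_work[item]
            return owner, loads

        # Example: work counts observed for 8 items over the last 3 steps.
        history = [[5, 1, 9, 2, 7, 3, 4, 6],
                   [6, 1, 8, 2, 7, 3, 5, 6],
                   [6, 2, 8, 1, 8, 3, 5, 7]]
        owner, loads = partition_by_prediction(predict_work(history), n_workers=2)
        print(owner, loads)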

  1. The Y2K program for scientific-analysis computer programs at AECL

    International Nuclear Information System (INIS)

    Popovic, J.; Gaver, C.; Chapman, D.

    1999-01-01

    The evaluation of scientific-analysis computer programs for year-2000 compliance is part of AECL's year-2000 (Y2K) initiative, which addresses both the infrastructure systems at AECL and AECL's products and services. This paper describes the Y2K-compliance program for scientific-analysis computer codes. This program involves the integrated evaluation of the computer hardware, middleware, and third-party software in addition to the scientific codes developed in-house. The project involves several steps: the assessment of the scientific computer programs for Y2K compliance, performing any required corrective actions, porting the programs to Y2K-compliant platforms, and verification of the programs after porting. Some programs or program versions, deemed no longer required in the year 2000 and beyond, will be retired and archived. (author)

  2. The Y2K program for scientific-analysis computer programs at AECL

    International Nuclear Information System (INIS)

    Popovic, J.; Gaver, C.; Chapman, D.

    1999-01-01

    The evaluation of scientific analysis computer programs for year-2000 compliance is part of AECL's year-2000 (Y2K) initiative, which addresses both the infrastructure systems at AECL and AECL's products and services. This paper describes the Y2K-compliance program for scientific-analysis computer codes. This program involves the integrated evaluation of the computer hardware, middleware, and third-party software in addition to the scientific codes developed in-house. The project involves several steps: the assessment of the scientific computer programs for Y2K compliance, performing any required corrective actions, porting the programs to Y2K-compliant platforms, and verification of the programs after porting. Some programs or program versions, deemed no longer required in the year 2000 and beyond, will be retired and archived. (author)

  3. Advanced Computing for 21st Century Accelerator Science and Technology

    International Nuclear Information System (INIS)

    Dragt, Alex J.

    2004-01-01

    Dr. Dragt of the University of Maryland is one of the Institutional Principal Investigators for the SciDAC Accelerator Modeling Project "Advanced Computing for 21st Century Accelerator Science and Technology," whose principal investigators are Dr. Kwok Ko (Stanford Linear Accelerator Center) and Dr. Robert Ryne (Lawrence Berkeley National Laboratory). This report covers the activities of Dr. Dragt while at Berkeley during spring 2002 and at Maryland during fall 2003.

  4. The 12-th INS scientific computational programs

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-06-01

    This issue is a collection of the papers on INS scientific computational programs. Separate abstracts were presented for 3 of the papers in this report. The remaining 5 were considered outside the subject scope of INIS. (J.P.N.)

  5. Educational NASA Computational and Scientific Studies (enCOMPASS)

    Science.gov (United States)

    Memarsadeghi, Nargess

    2013-01-01

    Educational NASA Computational and Scientific Studies (enCOMPASS) is an educational project of NASA Goddard Space Flight Center aimed at bridging the gap between the computational objectives and needs of NASA's scientific research, missions, and projects, and academia's latest advances in applied mathematics and computer science. enCOMPASS achieves this goal via bidirectional collaboration and communication between NASA and academia. Using the developed NASA Computational Case Studies in university computer science/engineering and applied mathematics classes is a way of addressing NASA's goals of contributing to the Science, Technology, Engineering, and Mathematics (STEM) National Objective. The enCOMPASS Web site at http://encompass.gsfc.nasa.gov provides additional information. There are currently nine enCOMPASS case studies developed in areas of earth sciences, planetary sciences, and astrophysics. Some of these case studies have been published in AIP and IEEE's Computing in Science and Engineering magazines. A few university professors have used enCOMPASS case studies in their computational classes and contributed their findings to NASA scientists. In these case studies, after introducing the science area, the specific problem, and related NASA missions, students are first asked to solve a known problem using NASA data and previously used approaches, often published in a scientific or research paper. Then, after learning about the NASA application and related computational tools and approaches for solving the proposed problem, students are given a harder problem as a challenge for them to research and develop solutions for. This project provides a model for NASA scientists and engineers on one side, and university students, faculty, and researchers in computer science and applied mathematics on the other side, to learn from each other's areas of work, computational needs and solutions, and the latest advances in research and development. This innovation takes NASA science and

  6. Scientific Computing and Apple's Intel Transition

    CERN Document Server

    CERN. Geneva

    2006-01-01

    Intel's published processor roadmap and how it may affect the future of personal and scientific computing. About the speaker: Eric Albert is Senior Software Engineer in Apple's Core Technologies group. During Mac OS X's transition to Intel processors he has worked on almost every part of the operating system, from the OS kernel and compiler tools to appli...

  7. Advanced scientific computational methods and their applications of nuclear technologies. (1) Overview of scientific computational methods, introduction of continuum simulation methods and their applications (1)

    International Nuclear Information System (INIS)

    Oka, Yoshiaki; Okuda, Hiroshi

    2006-01-01

    Scientific computational methods have advanced remarkably with the progress of nuclear development. They have played the role of a weft connecting the various realms of nuclear engineering, and an introductory course on advanced scientific computational methods and their applications to nuclear technologies has therefore been prepared in serial form. This is the first issue, giving an overview of the series and an introduction to continuum simulation methods. The finite element method, as one of their applications, is also reviewed. (T. Tanaka)

  8. Scientific Computing Kernels on the Cell Processor

    Energy Technology Data Exchange (ETDEWEB)

    Williams, Samuel W.; Shalf, John; Oliker, Leonid; Kamil, Shoaib; Husbands, Parry; Yelick, Katherine

    2007-04-04

    The slowing pace of commodity microprocessor performance improvements combined with ever-increasing chip power demands has become of utmost concern to computational scientists. As a result, the high performance computing community is examining alternative architectures that address the limitations of modern cache-based designs. In this work, we examine the potential of using the recently-released STI Cell processor as a building block for future high-end computing systems. Our work contains several novel contributions. First, we introduce a performance model for Cell and apply it to several key scientific computing kernels: dense matrix multiply, sparse matrix vector multiply, stencil computations, and 1D/2D FFTs. The difficulty of programming Cell, which requires assembly level intrinsics for the best performance, makes this model useful as an initial step in algorithm design and evaluation. Next, we validate the accuracy of our model by comparing results against published hardware results, as well as our own implementations on a 3.2GHz Cell blade. Additionally, we compare Cell performance to benchmarks run on leading superscalar (AMD Opteron), VLIW (Intel Itanium2), and vector (Cray X1E) architectures. Our work also explores several different mappings of the kernels and demonstrates a simple and effective programming model for Cell's unique architecture. Finally, we propose modest microarchitectural modifications that could significantly increase the efficiency of double-precision calculations. Overall results demonstrate the tremendous potential of the Cell architecture for scientific computations in terms of both raw performance and power efficiency.
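
    Of the kernels listed, the stencil computation is the simplest to sketch. The NumPy fragment below shows a 5-point Jacobi-style sweep for a 2D Poisson problem; it illustrates the access pattern being benchmarked, not the Cell implementation described in the paper.

        import numpy as np

        def jacobi_step(u, f, h):
            """One 5-point stencil sweep for -laplace(u) = f on a uniform grid;
            boundary values of u are held fixed."""
            u_new = u.copy()
            u_new[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                                        u[1:-1, :-2] + u[1:-1, 2:] +
                                        h * h * f[1:-1, 1:-1])
            return u_new

        n = 64
        u = np.zeros((n, n))
        f = np.ones((n, n))
        h = 1.0 / (n - 1)
        for _ in range(100):
            u = jacobi_step(u, f, h)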

  9. Present SLAC accelerator computer control system features

    International Nuclear Information System (INIS)

    Davidson, V.; Johnson, R.

    1981-02-01

    The current functional organization and state of software development of the computer control system of the Stanford Linear Accelerator are described. Included is a discussion of the distribution of functions throughout the system, the local controller features, and currently implemented features of the touch panel portion of the system. The functional use of our triplex of PDP11-34 computers sharing common memory is described. Also included is a description of the use of pseudopanel tables as data tables for closed loop control functions.

  10. Learning SciPy for numerical and scientific computing

    CERN Document Server

    Silva

    2013-01-01

    A step-by-step practical tutorial with plenty of examples on research-based problems from various areas of science, demonstrating how simple, yet effective, it is to provide solutions based on SciPy. This book is targeted at anyone with basic knowledge of Python, a somewhat advanced command of mathematics/physics, and an interest in engineering or scientific applications---this is broadly what we refer to as scientific computing. This book will be of critical importance to programmers and scientists who have basic Python knowledge and would like to be able to do scientific and numerical computations.
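
    As a flavour of the kind of problems such a tutorial addresses, a few lines of SciPy suffice to find a root, evaluate an integral, and solve a small linear system. This is a generic illustration, not an example taken from the book.

        import numpy as np
        from scipy import optimize, integrate, linalg

        # Root of a nonlinear equation: cos(x) = x on the interval [0, 1]
        root = optimize.brentq(lambda x: np.cos(x) - x, 0.0, 1.0)

        # Numerical quadrature of exp(-x^2) over [0, infinity)
        integral, _ = integrate.quad(lambda x: np.exp(-x * x), 0.0, np.inf)

        # Solution of a small dense linear system A x = b
        A = np.array([[3.0, 1.0], [1.0, 2.0]])
        b = np.array([9.0, 8.0])
        x = linalg.solve(A, b)

        print(root, integral, x)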

  11. Scientific Computing in the CH Programming Language

    Directory of Open Access Journals (Sweden)

    Harry H. Cheng

    1993-01-01

    Full Text Available We have developed a general-purpose block-structured interpretive programming language. The syntax and semantics of this language, called CH, are similar to C. CH retains most features of C from the scientific computing point of view. In this paper, the extension of C to CH for numerical computation of real numbers will be described. Metanumbers of −0.0, 0.0, Inf, −Inf, and NaN are introduced in CH. Through these metanumbers, the power of the IEEE 754 arithmetic standard is easily available to the programmer. These metanumbers are extended to commonly used mathematical functions in the spirit of the IEEE 754 standard and ANSI C. The rules for manipulation of these metanumbers in I/O; arithmetic, relational, and logic operations; and built-in polymorphic mathematical functions are defined. The capabilities of bitwise, assignment, address and indirection, increment and decrement, as well as type conversion operations in ANSI C are extended in CH. In this paper, mainly new linguistic features of CH in comparison to C will be described. Example programs programmed in CH with metanumbers and polymorphic mathematical functions will demonstrate capabilities of CH in scientific computing.
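
    The metanumbers described here map onto the IEEE 754 special values that most modern languages expose. The behaviour can be observed directly, for example, in Python (shown here for consistency with the other examples in this collection; CH's own syntax differs):

        import math

        inf, ninf, nan, nzero = float('inf'), float('-inf'), float('nan'), -0.0

        print(1.0 / inf)                  # 0.0
        print(inf + inf)                  # inf
        print(inf - inf)                  # nan: undefined under IEEE 754
        print(math.atan(inf))             # pi/2: library functions extended to metanumbers
        print(nzero == 0.0)               # True: -0.0 compares equal to 0.0
        print(math.copysign(1.0, nzero))  # -1.0: but the sign bit is preserved
        print(nan == nan)                 # False: NaN is unordered, even against itself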

  12. Application of High-performance Visual Analysis Methods to Laser Wakefield Particle Acceleration Data

    International Nuclear Information System (INIS)

    Rubel, Oliver; Prabhat, Mr.; Wu, Kesheng; Childs, Hank; Meredith, Jeremy; Geddes, Cameron G.R.; Cormier-Michel, Estelle; Ahern, Sean; Weber, Gunther H.; Messmer, Peter; Hagen, Hans; Hamann, Bernd; Bethel, E. Wes

    2008-01-01

    Our work combines and extends techniques from high-performance scientific data management and visualization to enable scientific researchers to gain insight from extremely large, complex, time-varying laser wakefield particle accelerator simulation data. We extend histogram-based parallel coordinates for use in visual information display as well as an interface for guiding and performing data mining operations, which are based upon multi-dimensional and temporal thresholding and data subsetting operations. To achieve very high performance on parallel computing platforms, we leverage FastBit, a state-of-the-art index/query technology, to accelerate data mining and multi-dimensional histogram computation. We show how these techniques are used in practice by scientific researchers to identify, visualize and analyze a particle beam in a large, time-varying dataset
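
    The multi-dimensional thresholding and histogram computation described here can be pictured with a small NumPy sketch; the FastBit indexing and the parallel implementation are, of course, not reproduced, and the data below are synthetic.

        import numpy as np

        # Synthetic particle data: columns are (x position, momentum px, energy).
        rng = np.random.default_rng(0)
        particles = rng.normal(size=(1_000_000, 3))

        # Multi-dimensional threshold query: select the high-momentum, high-energy subset.
        mask = (particles[:, 1] > 2.0) & (particles[:, 2] > 1.0)
        beam = particles[mask]

        # 2D histogram of the selected subset, the building block of
        # histogram-based parallel coordinates.
        hist, xedges, yedges = np.histogram2d(beam[:, 0], beam[:, 1], bins=64)
        print(beam.shape, hist.sum())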

  13. GPU acceleration of Dock6's Amber scoring computation.

    Science.gov (United States)

    Yang, Hailong; Zhou, Qiongqiong; Li, Bo; Wang, Yongjian; Luan, Zhongzhi; Qian, Depei; Li, Hanlu

    2010-01-01

    Addressing the problem of virtual screening is a long-term goal in the drug discovery field, which, if properly solved, can significantly shorten new drugs' R&D cycle. The scoring functionality that evaluates the fitness of the docking result is one of the major challenges in virtual screening. In general, scoring functionality in docking requires a large amount of floating-point calculations, which usually takes several weeks or even months to be finished. This time-consuming procedure is unacceptable, especially when highly fatal and infectious viruses such as SARS and H1N1 arise, which forces the scoring task to be done in a limited time. This paper presents how to leverage the computational power of the GPU to accelerate Dock6's (http://dock.compbio.ucsf.edu/DOCK_6/) Amber (J. Comput. Chem. 25: 1157-1174, 2004) scoring with the NVIDIA CUDA (NVIDIA Corporation Technical Staff, Compute Unified Device Architecture - Programming Guide, NVIDIA Corporation, 2008) (Compute Unified Device Architecture) platform. We also discuss many factors that greatly influence the performance after porting the Amber scoring to the GPU, including thread management, data transfer, and divergence hiding. Our experiments show that the GPU-accelerated Amber scoring achieves a 6.5× speedup with respect to the original version running on an AMD dual-core CPU for the same problem size. This acceleration makes the Amber scoring more competitive and efficient for large-scale virtual screening problems.
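
    As a rough illustration of why scoring is dominated by floating-point work, and of the data-parallel structure that maps well to a GPU, here is a vectorized (CPU-side NumPy) sketch of a simplified pairwise non-bonded energy. It is not the Amber scoring function used by DOCK6; the functional form and all parameters are illustrative.

        import numpy as np

        def nonbonded_energy(lig_xyz, lig_q, rec_xyz, rec_q, eps=0.1, sigma=3.5):
            """Simplified electrostatic + Lennard-Jones energy between a ligand and a
            rigid receptor.  Every ligand-receptor pair is evaluated, which is the kind
            of embarrassingly parallel floating-point work that GPUs accelerate well."""
            d = lig_xyz[:, None, :] - rec_xyz[None, :, :]      # (n_lig, n_rec, 3)
            r = np.sqrt((d * d).sum(axis=-1)) + 1e-12
            elec = (lig_q[:, None] * rec_q[None, :]) / r
            sr6 = (sigma / r) ** 6
            lj = 4.0 * eps * (sr6 ** 2 - sr6)
            return elec.sum() + lj.sum()

        rng = np.random.default_rng(1)
        print(nonbonded_energy(rng.normal(size=(50, 3)), rng.normal(size=50),
                               rng.normal(size=(2000, 3)) * 10, rng.normal(size=2000)))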

  14. Research initiatives for plug-and-play scientific computing

    International Nuclear Information System (INIS)

    McInnes, Lois Curfman; Dahlgren, Tamara; Nieplocha, Jarek; Bernholdt, David; Allan, Ben; Armstrong, Rob; Chavarria, Daniel; Elwasif, Wael; Gorton, Ian; Kenny, Joe; Krishan, Manoj; Malony, Allen; Norris, Boyana; Ray, Jaideep; Shende, Sameer

    2007-01-01

    This paper introduces three component technology initiatives within the SciDAC Center for Technology for Advanced Scientific Component Software (TASCS) that address ever-increasing productivity challenges in creating, managing, and applying simulation software to scientific discovery. By leveraging the Common Component Architecture (CCA), a new component standard for high-performance scientific computing, these initiatives tackle difficulties at different but related levels in the development of component-based scientific software: (1) deploying applications on massively parallel and heterogeneous architectures, (2) investigating new approaches to the runtime enforcement of behavioral semantics, and (3) developing tools to facilitate dynamic composition, substitution, and reconfiguration of component implementations and parameters, so that application scientists can explore tradeoffs among factors such as accuracy, reliability, and performance

  15. XVis: Visualization for the Extreme-Scale Scientific-Computation Ecosystem: Year-end report FY17.

    Energy Technology Data Exchange (ETDEWEB)

    Moreland, Kenneth D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Pugmire, David [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Rogers, David [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Childs, Hank [Univ. of Oregon, Eugene, OR (United States); Ma, Kwan-Liu [Univ. of California, Davis, CA (United States); Geveci, Berk [Kitware, Inc., Clifton Park, NY (United States)

    2017-10-01

    The XVis project brings together the key elements of research to enable scientific discovery at extreme scale. Scientific computing will no longer be purely about how fast computations can be performed. Energy constraints, processor changes, and I/O limitations necessitate significant changes in both the software applications used in scientific computation and the ways in which scientists use them. Components for modeling, simulation, analysis, and visualization must work together in a computational ecosystem, rather than working independently as they have in the past. This project provides the necessary research and infrastructure for scientific discovery in this new computational ecosystem by addressing four interlocking challenges: emerging processor technology, in situ integration, usability, and proxy analysis.

  16. Computer networks in future accelerator control systems

    International Nuclear Information System (INIS)

    Dimmler, D.G.

    1977-03-01

    Some findings of a study concerning a computer based control and monitoring system for the proposed ISABELLE Intersecting Storage Accelerator are presented. Requirements for development and implementation of such a system are discussed. An architecture is proposed where the system components are partitioned along functional lines. Implementation of some conceptually significant components is reviewed

  17. Performance evaluation of scientific programs on advanced architecture computers

    International Nuclear Information System (INIS)

    Walker, D.W.; Messina, P.; Baille, C.F.

    1988-01-01

    Recently a number of advanced architecture machines have become commercially available. These new machines promise better cost-performance than traditional computers, and some of them have the potential of competing with current supercomputers, such as the Cray X/MP, in terms of maximum performance. This paper describes an on-going project to evaluate a broad range of advanced architecture computers using a number of complete scientific application programs. The computers to be evaluated include distributed-memory machines such as the NCUBE, INTEL and Caltech/JPL hypercubes and the MEIKO computing surface; shared-memory, bus-architecture machines such as the Sequent Balance and the Alliant; very long instruction word machines such as the Multiflow Trace 7/200 computer; traditional supercomputers such as the Cray X/MP and Cray-2; and SIMD machines such as the Connection Machine. Currently 11 application codes from a number of scientific disciplines have been selected, although it is not intended to run all codes on all machines. Results are presented for two of the codes (QCD and missile tracking), and future work is proposed.

  18. "Accelerators and Beams," multimedia computer-based training in accelerator physics

    International Nuclear Information System (INIS)

    Silbar, R. R.; Browman, A. A.; Mead, W. C.; Williams, R. A.

    1999-01-01

    We are developing a set of computer-based tutorials on accelerators and charged-particle beams under an SBIR grant from the DOE. These self-paced, interactive tutorials, available for Macintosh and Windows platforms, use multimedia techniques to enhance the user's rate of learning and length of retention of the material. They integrate interactive "On-Screen Laboratories," hypertext, line drawings, photographs, two- and three-dimensional animations, video, and sound. They target a broad audience, from undergraduates or technicians to professionals. Presently, three modules have been published (Vectors, Forces, and Motion), a fourth (Dipole Magnets) has been submitted for review, and three more exist in prototype form (Quadrupoles, Matrix Transport, and Properties of Charged-Particle Beams). Participants in the poster session will have the opportunity to try out these modules on a laptop computer.

  19. Computational Materials Science and Chemistry: Accelerating Discovery and Innovation through Simulation-Based Engineering and Science

    Energy Technology Data Exchange (ETDEWEB)

    Crabtree, George [Argonne National Lab. (ANL), Argonne, IL (United States); Glotzer, Sharon [University of Michigan; McCurdy, Bill [University of California Davis; Roberto, Jim [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2010-07-26

    abating, has enabled the development of computer simulations and models of unprecedented fidelity. We are at the threshold of a new era where the integrated synthesis, characterization, and modeling of complex materials and chemical processes will transform our ability to understand and design new materials and chemistries with predictive power. In turn, this predictive capability will transform technological innovation by accelerating the development and deployment of new materials and processes in products and manufacturing. Harnessing the potential of computational science and engineering for the discovery and development of materials and chemical processes is essential to maintaining leadership in these foundational fields that underpin energy technologies and industrial competitiveness. Capitalizing on the opportunities presented by simulation-based engineering and science in materials and chemistry will require an integration of experimental capabilities with theoretical and computational modeling; the development of a robust and sustainable infrastructure to support the development and deployment of advanced computational models; and the assembly of a community of scientists and engineers to implement this integration and infrastructure. This community must extend to industry, where incorporating predictive materials science and chemistry into design tools can accelerate the product development cycle and drive economic competitiveness. The confluence of new theories, new materials synthesis capabilities, and new computer platforms has created an unprecedented opportunity to implement a "materials-by-design" paradigm with wide-ranging benefits in technological innovation and scientific discovery. The Workshop on Computational Materials Science and Chemistry for Innovation was convened in Bethesda, Maryland, on July 26-27, 2010. Sponsored by the Department of Energy (DOE) Offices of Advanced Scientific Computing Research and Basic Energy Sciences, the workshop

  20. Multi-GPU Jacobian accelerated computing for soft-field tomography

    International Nuclear Information System (INIS)

    Borsic, A; Attardo, E A; Halter, R J

    2012-01-01

    Image reconstruction in soft-field tomography is based on an inverse problem formulation, where a forward model is fitted to the data. In medical applications, where the anatomy presents complex shapes, it is common to use finite element models (FEMs) to represent the volume of interest and solve a partial differential equation that models the physics of the system. Over the last decade, there has been a shifting interest from 2D modeling to 3D modeling, as the underlying physics of most problems are 3D. Although the increased computational power of modern computers allows working with much larger FEM models, the computational time required to reconstruct 3D images on a fine 3D FEM model can be significant, on the order of hours. For example, in electrical impedance tomography (EIT) applications using a dense 3D FEM mesh with half a million elements, a single reconstruction iteration takes approximately 15–20 min with optimized routines running on a modern multi-core PC. It is desirable to accelerate image reconstruction to enable researchers to more easily and rapidly explore data and reconstruction parameters. Furthermore, providing high-speed reconstructions is essential for some promising clinical application of EIT. For 3D problems, 70% of the computing time is spent building the Jacobian matrix, and 25% of the time in forward solving. In this work, we focus on accelerating the Jacobian computation by using single and multiple GPUs. First, we discuss an optimized implementation on a modern multi-core PC architecture and show how computing time is bounded by the CPU-to-memory bandwidth; this factor limits the rate at which data can be fetched by the CPU. Gains associated with the use of multiple CPU cores are minimal, since data operands cannot be fetched fast enough to saturate the processing power of even a single CPU core. GPUs have much faster memory bandwidths compared to CPUs and better parallelism. We are able to obtain acceleration factors of 20 times

  1. Multi-GPU Jacobian accelerated computing for soft-field tomography.

    Science.gov (United States)

    Borsic, A; Attardo, E A; Halter, R J

    2012-10-01

    Image reconstruction in soft-field tomography is based on an inverse problem formulation, where a forward model is fitted to the data. In medical applications, where the anatomy presents complex shapes, it is common to use finite element models (FEMs) to represent the volume of interest and solve a partial differential equation that models the physics of the system. Over the last decade, there has been a shifting interest from 2D modeling to 3D modeling, as the underlying physics of most problems are 3D. Although the increased computational power of modern computers allows working with much larger FEM models, the computational time required to reconstruct 3D images on a fine 3D FEM model can be significant, on the order of hours. For example, in electrical impedance tomography (EIT) applications using a dense 3D FEM mesh with half a million elements, a single reconstruction iteration takes approximately 15-20 min with optimized routines running on a modern multi-core PC. It is desirable to accelerate image reconstruction to enable researchers to more easily and rapidly explore data and reconstruction parameters. Furthermore, providing high-speed reconstructions is essential for some promising clinical application of EIT. For 3D problems, 70% of the computing time is spent building the Jacobian matrix, and 25% of the time in forward solving. In this work, we focus on accelerating the Jacobian computation by using single and multiple GPUs. First, we discuss an optimized implementation on a modern multi-core PC architecture and show how computing time is bounded by the CPU-to-memory bandwidth; this factor limits the rate at which data can be fetched by the CPU. Gains associated with the use of multiple CPU cores are minimal, since data operands cannot be fetched fast enough to saturate the processing power of even a single CPU core. GPUs have much faster memory bandwidths compared to CPUs and better parallelism. We are able to obtain acceleration factors of 20
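
    A rough picture of why the Jacobian build dominates: for every drive/measurement pair, the sensitivity of each finite element is an inner product of the drive-field and measurement-field gradients on that element, so the work is a very large batch of small dense products. The NumPy sketch below illustrates that structure; it is not the authors' GPU implementation, and the array shapes and names are assumptions.

        import numpy as np

        def eit_jacobian(grad_u, grad_w, volumes):
            """Adjoint-style Jacobian assembly for an EIT-like problem.
            grad_u: (n_drive, n_elem, 3) per-element gradients of the drive fields
            grad_w: (n_meas,  n_elem, 3) per-element gradients of the measurement fields
            volumes: (n_elem,) element volumes
            Returns J of shape (n_drive * n_meas, n_elem), where row (i, j) holds
            -vol_e * grad_u_i . grad_w_j for every element e."""
            n_drive, n_elem, _ = grad_u.shape
            n_meas = grad_w.shape[0]
            # einsum performs the per-element dot products for all pairs at once
            J = -np.einsum('iek,jek,e->ije', grad_u, grad_w, volumes)
            return J.reshape(n_drive * n_meas, n_elem)

        rng = np.random.default_rng(2)
        J = eit_jacobian(rng.normal(size=(16, 5000, 3)),
                         rng.normal(size=(16, 5000, 3)),
                         rng.random(5000))
        print(J.shape)   # (256, 5000)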

  2. Multidimensional Environmental Data Resource Brokering on Computational Grids and Scientific Clouds

    Science.gov (United States)

    Montella, Raffaele; Giunta, Giulio; Laccetti, Giuliano

    Grid computing has evolved considerably over the past years, and its capabilities have found their way even into business products and are no longer relegated to scientific applications. Today, grid computing technology is not restricted to a set of specific open-source or industrial grid products; rather, it comprises a set of capabilities available within virtually any kind of software to create shared and highly collaborative production environments. These environments are focused on computational (workload) capabilities and the integration of information (data) into those computational capabilities. An active application field of grid computing is the full virtualization of scientific instruments in order to increase their availability and decrease operational and maintenance costs. Computational and information grids allow real-world objects to be managed in a service-oriented way using widespread industrial standards.

  3. Computer network access to scientific information systems for minority universities

    Science.gov (United States)

    Thomas, Valerie L.; Wakim, Nagi T.

    1993-08-01

    The evolution of computer networking technology has led to the establishment of a massive networking infrastructure which interconnects various types of computing resources at many government, academic, and corporate institutions. A large segment of this infrastructure has been developed to facilitate information exchange and resource sharing within the scientific community. The National Aeronautics and Space Administration (NASA) supports both the development and the application of computer networks which provide its community with access to many valuable multi-disciplinary scientific information systems and on-line databases. Recognizing the need to extend the benefits of this advanced networking technology to the under-represented community, the National Space Science Data Center (NSSDC) in the Space Data and Computing Division at the Goddard Space Flight Center has developed the Minority University-Space Interdisciplinary Network (MU-SPIN) Program: a major networking and education initiative for Historically Black Colleges and Universities (HBCUs) and Minority Universities (MUs). In this paper, we will briefly explain the various components of the MU-SPIN Program while highlighting how, by providing access to scientific information systems and on-line data, it promotes a higher level of collaboration among faculty and students and NASA scientists.

  4. Computer codes used in particle accelerator design: First edition

    International Nuclear Information System (INIS)

    1987-01-01

    This paper contains a listing of more than 150 programs that have been used in the design and analysis of accelerators. Given for each citation are the person to contact, the classification of the computer code, publications describing the code, the computer and language it runs on, and a short description of the code. Codes are indexed by subject, person to contact, and code acronym.

  5. JINR CLOUD SERVICE FOR SCIENTIFIC AND ENGINEERING COMPUTATIONS

    Directory of Open Access Journals (Sweden)

    Nikita A. Balashov

    2018-03-01

    Full Text Available Quite often, small research groups do not have access to computational resources powerful enough for their research work to be productive. Global computational infrastructures used by large scientific collaborations can be challenging for small research teams because of the bureaucratic overhead as well as the usage complexity of the underlying tools. Some researchers buy a set of powerful servers to cover their own needs in computational resources. A drawback of this approach is the necessity to provide a proper hosting environment and maintenance for the hardware, which requires a certain level of expertise. Moreover, such resources may often be underutilized, because a researcher needs to spend a certain amount of time preparing computations and analyzing results, and does not always need all the resources of modern multi-core CPU servers. The JINR cloud team developed a service which provides access for scientists of small research groups from JINR and its Member State organizations to computational resources via a problem-oriented (i.e., application-specific) web interface. It allows a scientist to focus on his research domain by interacting with the service in a convenient way via the browser, abstracting away from the underlying infrastructure and its maintenance. A user simply sets the required values for his job via the web interface and specifies a location for uploading the result. The computational workloads are run on virtual machines deployed in the JINR cloud infrastructure.

  6. Ontology-Driven Discovery of Scientific Computational Entities

    Science.gov (United States)

    Brazier, Pearl W.

    2010-01-01

    Many geoscientists use modern computational resources, such as software applications, Web services, scientific workflows and datasets that are readily available on the Internet, to support their research and many common tasks. These resources are often shared via human contact and sometimes stored in data portals; however, they are not necessarily…

  7. Initial explorations of ARM processors for scientific computing

    International Nuclear Information System (INIS)

    Abdurachmanov, David; Elmer, Peter; Eulisse, Giulio; Muzaffar, Shahzad

    2014-01-01

    Power efficiency is becoming an ever more important metric for both high performance and high throughput computing. Over the course of the next decade it is expected that flops/watt will be a major driver for the evolution of computer architecture. Servers with large numbers of ARM processors, already ubiquitous in mobile computing, are a promising alternative to traditional x86-64 computing. We present the results of our initial investigations into the use of ARM processors for scientific computing applications. In particular we report the results from our work with a current generation ARMv7 development board to explore ARM-specific issues regarding the software development environment, operating system, performance benchmarks and issues for porting High Energy Physics software.

  8. Accelerators and Beams, multimedia computer-based training in accelerator physics

    International Nuclear Information System (INIS)

    Silbar, R.R.; Browman, A.A.; Mead, W.C.; Williams, R.A.

    1999-01-01

    We are developing a set of computer-based tutorials on accelerators and charged-particle beams under an SBIR grant from the DOE. These self-paced, interactive tutorials, available for Macintosh and Windows platforms, use multimedia techniques to enhance the user's rate of learning and length of retention of the material. They integrate interactive On-Screen Laboratories, hypertext, line drawings, photographs, two- and three-dimensional animations, video, and sound. They target a broad audience, from undergraduates or technicians to professionals. Presently, three modules have been published (Vectors, Forces, and Motion), a fourth (Dipole Magnets) has been submitted for review, and three more exist in prototype form (Quadrupoles, Matrix Transport, and Properties of Charged-Particle Beams). Participants in the poster session will have the opportunity to try out these modules on a laptop computer. copyright 1999 American Institute of Physics

  9. Computing requirements for S.S.C. accelerator design and studies

    International Nuclear Information System (INIS)

    Dragt, A.; Talman, R.; Siemann, R.; Dell, G.F.; Leemann, B.; Leemann, C.; Nauenberg, U.; Peggs, S.; Douglas, D.

    1984-01-01

    We estimate the computational hardware resources that will be required for accelerator physics studies during the design of the Superconducting SuperCollider. It is found that both Class IV and Class VI facilities (1) will be necessary. We describe a user environment for these facilities that is desirable within the context of accelerator studies. An acquisition scenario for these facilities is presented

  10. Python for Scientific Computing Education: Modeling of Queueing Systems

    Directory of Open Access Journals (Sweden)

    Vladimiras Dolgopolovas

    2014-01-01

    Full Text Available In this paper, we present the methodology for the introduction to scientific computing based on model-centered learning. We propose multiphase queueing systems as a basis for learning objects. We use Python and parallel programming for implementing the models and present the computer code and results of stochastic simulations.
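
    A minimal example in the spirit of the learning objects described here: a stochastic simulation of a single-phase M/M/1 queue. The paper's models are multiphase and parallelized, which this sketch does not attempt.

        import random

        def mm1_mean_sojourn(arrival_rate, service_rate, n_customers=100_000, seed=1):
            """Event-driven simulation of an M/M/1 queue; returns the mean time a
            customer spends in the system (waiting plus service)."""
            rng = random.Random(seed)
            t_arrival = 0.0
            server_free_at = 0.0
            total_sojourn = 0.0
            for _ in range(n_customers):
                t_arrival += rng.expovariate(arrival_rate)
                start = max(t_arrival, server_free_at)
                service = rng.expovariate(service_rate)
                server_free_at = start + service
                total_sojourn += server_free_at - t_arrival
            return total_sojourn / n_customers

        # For lambda = 0.8 and mu = 1.0, theory gives a mean sojourn time of
        # 1 / (mu - lambda) = 5; the simulation should come close to this value.
        print(mm1_mean_sojourn(0.8, 1.0))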

  11. Personal computer control system for small size tandem accelerator

    Energy Technology Data Exchange (ETDEWEB)

    Takayama, Hiroshi; Kawano, Kazuhiro; Shinozaki, Masataka [Nissin - High Voltage Co. Ltd., Kyoto (Japan)

    1996-12-01

    Because an analysis apparatus based on a tandem accelerator has many control parameters, the number of control components on a conventional control panel becomes large, making the panel complex and degrading its operability. To remedy these faults, a control system using a personal computer was designed and developed for a control panel previously constituted mainly by conventional hardware parts. Its predominant characteristics are as follows: (1) the control panel becomes simpler and more compact, because the hardware on the panel surface is reduced to the minimum required by using a personal computer for the man-machine interface; (2) control speed becomes faster, because sequence control is closed within each block by dividing the accelerator system into blocks and installing a local station of the sequencer network at each block; (3) expandability becomes larger, because adding a new beamline requires little modification of the present hardware, only connecting a sequencer local station to the network and updating the computer's display; and (4) the control system becomes cheaper, because of the lower investment and easier programming afforded by using the personal computer. (G.K.)

  12. A contribution to the computation of the impedance in acceleration resonators

    International Nuclear Information System (INIS)

    Liu, Cong

    2016-05-01

    This thesis focuses on the numerical computation of the impedance in acceleration resonators and corresponding components. For this purpose, a dedicated solver based on the Finite Element Method (FEM) has been developed to compute the broadband impedance in accelerating components. In addition, various numerical approaches have been used to calculate the narrow-band impedance in superconducting radio frequency (RF) cavities. An overview of the calculated results, as well as comparisons between the applied numerical approaches, is provided. During the design phase of superconducting RF accelerating cavities and components, a challenging and difficult task is the determination of the impedance inside the accelerators with the help of proper computer simulations. Impedance describes the electromagnetic interaction between the particle beam and the accelerator, and it can affect the stability of the particle beam. For a superconducting RF accelerating cavity with waveguides (beam pipes and couplers), the narrow-band impedance, also called shunt impedance, corresponds to the eigenmodes of the cavity. It depends on the eigenfrequencies and the electromagnetic field distributions of the eigenmodes inside the cavity. On the other hand, the broadband impedance describes the interaction of the particle beam in the waveguides with its environment at arbitrary frequency and beam velocity. Together, the narrow-band and broadband impedances give a complete description of the impedance of an accelerator. In order to calculate the broadband longitudinal space charge impedance for acceleration components, a three-dimensional (3D) solver based on the FEM in the frequency domain has been developed. To calculate the narrow-band impedance for superconducting RF cavities, we used various numerical approaches. Firstly, the eigenmode solver based on the Finite Integration Technique (FIT) and a parallel real-valued FEM (CEM3Dr) eigenmode solver based on

  13. A contribution to the computation of the impedance in acceleration resonators

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Cong

    2016-05-15

    This thesis focuses on the numerical computation of the impedance in acceleration resonators and corresponding components. For this purpose, a dedicated solver based on the Finite Element Method (FEM) has been developed to compute the broadband impedance in accelerating components. In addition, various numerical approaches have been used to calculate the narrow-band impedance in superconducting radio frequency (RF) cavities. An overview of the calculated results, as well as comparisons between the applied numerical approaches, is provided. During the design phase of superconducting RF accelerating cavities and components, a challenging and difficult task is the determination of the impedance inside the accelerators with the help of proper computer simulations. Impedance describes the electromagnetic interaction between the particle beam and the accelerator, and it can affect the stability of the particle beam. For a superconducting RF accelerating cavity with waveguides (beam pipes and couplers), the narrow-band impedance, also called shunt impedance, corresponds to the eigenmodes of the cavity. It depends on the eigenfrequencies and the electromagnetic field distributions of the eigenmodes inside the cavity. On the other hand, the broadband impedance describes the interaction of the particle beam in the waveguides with its environment at arbitrary frequency and beam velocity. Together, the narrow-band and broadband impedances give a complete description of the impedance of an accelerator. In order to calculate the broadband longitudinal space charge impedance for acceleration components, a three-dimensional (3D) solver based on the FEM in the frequency domain has been developed. To calculate the narrow-band impedance for superconducting RF cavities, we used various numerical approaches. Firstly, the eigenmode solver based on the Finite Integration Technique (FIT) and a parallel real-valued FEM (CEM3Dr) eigenmode solver based on

  14. Scientific computing with MATLAB and Octave

    CERN Document Server

    Quarteroni, Alfio; Gervasio, Paola

    2014-01-01

    This textbook is an introduction to Scientific Computing, in which several numerical methods for the computer-based solution of certain classes of mathematical problems are illustrated. The authors show how to compute the zeros, the extrema, and the integrals of continuous functions, solve linear systems, approximate functions using polynomials and construct accurate approximations for the solution of ordinary and partial differential equations. To make the format concrete and appealing, the programming environments Matlab and Octave are adopted as faithful companions. The book contains the solutions to several problems posed in exercises and examples, often originating from important applications. At the end of each chapter, a specific section is devoted to subjects which were not addressed in the book and contains bibliographical references for a more comprehensive treatment of the material. From the review: ".... This carefully written textbook, the third English edition, contains substantial new developme...

  15. Particle accelerators and scientific culture

    International Nuclear Information System (INIS)

    Amaldi, U.

    1979-01-01

    A historical review of fifty years of physics around particle accelerators, from the first nuclear reactions produced by beams of artificially accelerated particles to the large multinational projects now under discussion. The aim is to show how the description of natural phenomena has been shaped by advances in theoretical understanding, the development of new techniques, and the characters of men. Large use has been made of quotations from many of the scientists involved. (Auth.)

  16. Particle accelerators and scientific culture

    International Nuclear Information System (INIS)

    Amaldi, U.

    1979-01-01

    A historical review of fifty years of physics around particle accelerators, from the first nuclear reactions produced by beams of artificially accelerated particles to the large multinational projects now under discussion. The aim is to show how our description of natural phenomena has been shaped by advances in theoretical understanding, the development of new techniques, and the characters of men. Large use has been made of quotations from many of the scientists involved. (Auth.)

  17. RAPPORT: running scientific high-performance computing applications on the cloud.

    Science.gov (United States)

    Cohen, Jeremy; Filippis, Ioannis; Woodbridge, Mark; Bauer, Daniela; Hong, Neil Chue; Jackson, Mike; Butcher, Sarah; Colling, David; Darlington, John; Fuchs, Brian; Harvey, Matt

    2013-01-28

    Cloud computing infrastructure is now widely used in many domains, but one area where there has been more limited adoption is research computing, in particular for running scientific high-performance computing (HPC) software. The Robust Application Porting for HPC in the Cloud (RAPPORT) project took advantage of existing links between computing researchers and application scientists in the fields of bioinformatics, high-energy physics (HEP) and digital humanities, to investigate running a set of scientific HPC applications from these domains on cloud infrastructure. In this paper, we focus on the bioinformatics and HEP domains, describing the applications and target cloud platforms. We conclude that, while there are many factors that need consideration, there is no fundamental impediment to the use of cloud infrastructure for running many types of HPC applications and, in some cases, there is potential for researchers to benefit significantly from the flexibility offered by cloud platforms.

  18. Cloud Computing and Validated Learning for Accelerating Innovation in IoT

    Science.gov (United States)

    Suciu, George; Todoran, Gyorgy; Vulpe, Alexandru; Suciu, Victor; Bulca, Cristina; Cheveresan, Romulus

    2015-01-01

    Innovation in the Internet of Things (IoT) requires more than just the creation of technology and the use of cloud computing or big data platforms. It requires accelerated commercialization, or what are aptly called go-to-market processes. To successfully accelerate, companies need a new type of product development, the so-called validated learning process.…

  19. Acceleration of FDTD mode solver by high-performance computing techniques.

    Science.gov (United States)

    Han, Lin; Xi, Yanping; Huang, Wei-Ping

    2010-06-21

    A two-dimensional (2D) compact finite-difference time-domain (FDTD) mode solver is developed based on a wave equation formalism in combination with the matrix pencil method (MPM). The method is validated for calculation of both real guided and complex leaky modes of typical optical waveguides against the benchmark finite-difference (FD) eigenmode solver. By taking advantage of the inherent parallel nature of the FDTD algorithm, the mode solver is implemented on graphics processing units (GPUs) using the compute unified device architecture (CUDA). It is demonstrated that the high-performance computing technique leads to significant acceleration of the FDTD mode solver, with more than 30 times improvement in computational efficiency in comparison with the conventional FDTD mode solver running on the CPU of a standard desktop computer. The computational efficiency of the accelerated FDTD method is of the same order of magnitude as that of the standard finite-difference eigenmode solver and yet requires much less memory (e.g., less than 10%). Therefore, the new method may serve as an efficient, accurate and robust tool for mode calculation of optical waveguides even when conventional eigenvalue mode solvers are no longer applicable due to memory limitations.
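
    The FDTD core that such a mode solver time-steps is itself short. A one-dimensional free-space update loop in the Yee scheme is sketched below; it is only an illustration of the method's structure, not the paper's 2D compact wave-equation formulation or its CUDA port.

        import numpy as np

        def fdtd_1d(n_cells=400, n_steps=1000, src_cell=200):
            """1D FDTD (Yee) update in normalized units with a soft Gaussian source.
            The Courant number is 0.5, so the scheme is stable."""
            ez = np.zeros(n_cells)
            hy = np.zeros(n_cells - 1)
            for t in range(n_steps):
                hy += 0.5 * (ez[1:] - ez[:-1])                     # magnetic-field update
                ez[1:-1] += 0.5 * (hy[1:] - hy[:-1])               # electric-field update
                ez[src_cell] += np.exp(-((t - 30.0) / 10.0) ** 2)  # Gaussian source
            return ez

        field = fdtd_1d()
        print(field.max(), field.min())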

  20. Technologies for Large Data Management in Scientific Computing

    CERN Document Server

    Pace, A

    2014-01-01

    In recent years, intense usage of computing has been the main strategy of investigations in several scientific research projects. The progress in computing technology has opened unprecedented opportunities for systematic collection of experimental data and the associated analysis that were considered impossible only a few years ago. This paper focusses on the strategies in use: it reviews the various components that are necessary for an effective solution that ensures the storage, the long term preservation, and the worldwide distribution of large quantities of data that are necessary in a large scientific research project. The paper also mentions several examples of data management solutions used in High Energy Physics for the CERN Large Hadron Collider (LHC) experiments in Geneva, Switzerland, which generate more than 30,000 terabytes of data every year that need to be preserved, analyzed, and made available to a community of several tens of thousands of scientists worldwide.

  1. Achievements in ISICs/SAPP collaborations for electromagnetic modeling of accelerators

    International Nuclear Information System (INIS)

    Lee Liequan; Ge Lixin; Li Zenghai; Ng, Cho; Schussman, Greg; Ko, Kwok

    2005-01-01

    SciDAC provides the unique opportunity and the resources for the Electromagnetic System Simulations (ESS) component of High Energy Physics (HEP)'s Accelerator Science and Technology (AST) project to work with researchers in the Integrated Software Infrastructure Centres (ISICs) and Scientific Application Pilot Program (SAPP) to overcome challenging barriers in computer science and applied mathematics in order to perform the large-scale simulations required to support the ongoing R and D efforts on accelerators across the Office of Science. This paper presents the resultant achievements made under SciDAC in important areas of computational science relevant to electromagnetic modelling of accelerators which include nonlinear eigensolvers, shape optimization, adaptive mesh refinement, parallel meshing, and visualization

  2. Effects of different computer typing speeds on acceleration and peak contact pressure of the fingertips during computer typing.

    Science.gov (United States)

    Yoo, Won-Gyu

    2015-01-01

    [Purpose] This study examined the effects of different computer typing speeds on acceleration and peak contact pressure of the fingertips during computer typing. [Subjects] Twenty-one male computer workers voluntarily consented to participate in this study. They consisted of 7 workers who could type 200-300 characters/minute, 7 workers who could type 300-400 characters/minute, and 7 workers who could type 400-500 characters/minute. [Methods] The acceleration and peak contact pressure of the fingertips were measured for the different typing speed groups using an accelerometer and the CONFORMat system. [Results] The fingertip contact pressure was increased in the high typing speed group compared with the low and medium typing speed groups. The fingertip acceleration was increased in the high typing speed group compared with the low and medium typing speed groups. [Conclusion] The results of the present study indicate that a fast typing speed causes continuous pressure stress to be applied to the fingers, thereby creating pain in the fingers.

  3. Scholarly literature and the press: scientific impact and social perception of physics computing

    CERN Document Server

    Pia, Maria Grazia; Bell, Zane W; Dressendorfer, Paul V

    2014-01-01

    The broad coverage of the search for the Higgs boson in the mainstream media is a relative novelty for high energy physics (HEP) research, whose achievements have traditionally been limited to scholarly literature. This paper illustrates the results of a scientometric analysis of HEP computing in scientific literature, institutional media and the press, and a comparative overview of similar metrics concerning representative particle physics measurements. The picture emerging from these scientometric data documents the scientific impact and social perception of HEP computing. The results of this analysis suggest that improved communication of the scientific and social role of HEP computing would be beneficial to the high energy physics community.

  4. Final Report for 'Center for Technology for Advanced Scientific Component Software'

    International Nuclear Information System (INIS)

    Shasharina, Svetlana

    2010-01-01

    The goal of the Center for Technology for Advanced Scientific Component Software is to fundamentally change the way scientific software is developed and used by bringing component-based software development technologies to high-performance scientific and engineering computing. The role of Tech-X's work in the TASCS project is to provide outreach to accelerator physics and fusion applications by introducing TASCS tools into those applications, testing the tools in the applications, and modifying the tools to be more usable.

  5. Scientific computing in electrical engineering SCEE 2010

    Energy Technology Data Exchange (ETDEWEB)

    Michielsen, Bastiaan [Office National d' Etudes et de Recherches Aerospatiales (ONERA), 31 - Toulouse (France); Poirier, Jean-Rene (eds.) [LAPLACE-ENSEEIHT, Toulouse (France)

    2012-07-01

    Selected from papers presented at the 8th Scientific Computation in Electrical Engineering conference in Toulouse in 2010, the contributions to this volume cover every angle of numerically modelling electronic and electrical systems, including computational electromagnetics, circuit theory and simulation and device modelling. On computational electromagnetics, the chapters examine cutting-edge material ranging from low-frequency electrical machine modelling problems to issues in high-frequency scattering. Regarding circuit theory and simulation, the book details the most advanced techniques for modelling networks with many thousands of components. Modelling devices at microscopic levels is covered by a number of fundamental mathematical physics papers, while numerous papers on model order reduction help engineers and systems designers to bring their modelling of industrial-scale systems within the reach of present-day computational power. Complementing these more specific papers, the volume also contains a selection of mathematical methods which can be used in any application domain. (orig.)

  6. Computer codes for designing proton linear accelerators

    International Nuclear Information System (INIS)

    Kato, Takao

    1992-01-01

    Computer codes for designing proton linear accelerators are discussed from the viewpoint of not only design but also construction and operation of the linac. The codes are divided into three categories according to their purposes: 1) design codes, 2) generation and simulation codes, and 3) electric and magnetic field calculation codes. The role of each category is discussed on the basis of experience at KEK (the design of the 40-MeV proton linac and its construction and operation, and the design of the 1-GeV proton linac). We introduce our recent work relevant to three-dimensional calculation and supercomputer calculation: 1) tuning of MAFIA (a three-dimensional electric and magnetic field calculation code) for supercomputers, 2) examples of three-dimensional calculation of accelerating structures by MAFIA, and 3) development of a beam transport code including space charge effects. (author)
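
    As a rough illustration of the kind of calculation a beam transport code performs (a generic sketch, not taken from the KEK codes, and with space charge ignored), the following C program propagates a particle's transverse phase-space coordinates through a drift-quadrupole-drift sequence using 2x2 transfer matrices; the drift length, focal length, and initial offset are assumed values.

      /* Minimal sketch (not from the cited codes): transverse beam transport
         through a drift - thin quadrupole - drift sequence using 2x2 transfer
         matrices, with space charge ignored. All element parameters are assumed. */
      #include <stdio.h>

      typedef struct { double m[2][2]; } Mat2;
      typedef struct { double x, xp; } State;    /* position [m], angle [rad] */

      static State apply(Mat2 M, State s) {
          State r = { M.m[0][0] * s.x + M.m[0][1] * s.xp,
                      M.m[1][0] * s.x + M.m[1][1] * s.xp };
          return r;
      }

      int main(void) {
          double L = 0.5;                        /* drift length [m], assumed */
          double f = 2.0;                        /* quadrupole focal length [m], assumed */
          Mat2 drift = { { { 1.0, L }, { 0.0, 1.0 } } };
          Mat2 quad  = { { { 1.0, 0.0 }, { -1.0 / f, 1.0 } } };

          State s = { 1e-3, 0.0 };               /* 1 mm offset, zero angle */
          s = apply(drift, s);                   /* drift */
          s = apply(quad, s);                    /* thin focusing quadrupole */
          s = apply(drift, s);                   /* drift */
          printf("x = %g m, x' = %g rad\n", s.x, s.xp);
          return 0;
      }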

  7. XVis: Visualization for the Extreme-Scale Scientific-Computation Ecosystem: Year-end report FY15 Q4.

    Energy Technology Data Exchange (ETDEWEB)

    Moreland, Kenneth D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Sewell, Christopher [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Childs, Hank [Univ. of Oregon, Eugene, OR (United States); Ma, Kwan-Liu [Univ. of California, Davis, CA (United States); Geveci, Berk [Kitware, Inc., Clifton Park, NY (United States); Meredith, Jeremy [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2015-12-01

    The XVis project brings together the key elements of research to enable scientific discovery at extreme scale. Scientific computing will no longer be purely about how fast computations can be performed. Energy constraints, processor changes, and I/O limitations necessitate significant changes in both the software applications used in scientific computation and the ways in which scientists use them. Components for modeling, simulation, analysis, and visualization must work together in a computational ecosystem, rather than working independently as they have in the past. This project provides the necessary research and infrastructure for scientific discovery in this new computational ecosystem by addressing four interlocking challenges: emerging processor technology, in situ integration, usability, and proxy analysis.

  8. XVis: Visualization for the Extreme-Scale Scientific-Computation Ecosystem: Mid-year report FY17 Q2

    Energy Technology Data Exchange (ETDEWEB)

    Moreland, Kenneth D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Pugmire, David [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Rogers, David [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Childs, Hank [Univ. of Oregon, Eugene, OR (United States); Ma, Kwan-Liu [Univ. of California, Davis, CA (United States); Geveci, Berk [Kitware Inc., Clifton Park, NY (United States)

    2017-05-01

    The XVis project brings together the key elements of research to enable scientific discovery at extreme scale. Scientific computing will no longer be purely about how fast computations can be performed. Energy constraints, processor changes, and I/O limitations necessitate significant changes in both the software applications used in scientific computation and the ways in which scientists use them. Components for modeling, simulation, analysis, and visualization must work together in a computational ecosystem, rather than working independently as they have in the past. This project provides the necessary research and infrastructure for scientific discovery in this new computational ecosystem by addressing four interlocking challenges: emerging processor technology, in situ integration, usability, and proxy analysis.

  9. XVis: Visualization for the Extreme-Scale Scientific-Computation Ecosystem. Mid-year report FY16 Q2

    Energy Technology Data Exchange (ETDEWEB)

    Moreland, Kenneth D.; Sewell, Christopher (LANL); Childs, Hank (U of Oregon); Ma, Kwan-Liu (UC Davis); Geveci, Berk (Kitware); Meredith, Jeremy (ORNL)

    2016-05-01

    The XVis project brings together the key elements of research to enable scientific discovery at extreme scale. Scientific computing will no longer be purely about how fast computations can be performed. Energy constraints, processor changes, and I/O limitations necessitate significant changes in both the software applications used in scientific computation and the ways in which scientists use them. Components for modeling, simulation, analysis, and visualization must work together in a computational ecosystem, rather than working independently as they have in the past. This project provides the necessary research and infrastructure for scientific discovery in this new computational ecosystem by addressing four interlocking challenges: emerging processor technology, in situ integration, usability, and proxy analysis.

  10. Emerging Nanophotonic Applications Explored with Advanced Scientific Parallel Computing

    Science.gov (United States)

    Meng, Xiang

    The domain of nanoscale optical science and technology is a combination of the classical world of electromagnetics and the quantum mechanical regime of atoms and molecules. Recent advancements in fabrication technology allow optical structures to be scaled down to nanoscale or even atomic size, far smaller than the wavelength they are designed for. These nanostructures can have unique, controllable, and tunable optical properties, and their interactions with quantum materials can produce important near-field and far-field optical responses. These optical properties have many important applications, ranging from efficient and tunable light sources, detectors, filters, modulators, and high-speed all-optical switches to next-generation classical and quantum computation and biophotonic medical sensors. This emerging research area, known as nanophotonics, is a highly interdisciplinary field requiring expertise in materials science, physics, electrical engineering, and scientific computing, modeling, and simulation. It has also become an important research field for investigating the science and engineering of light-matter interactions that take place on wavelength and subwavelength scales, where the nature of the nanostructured matter controls the interactions. In addition, fast advancements in computing capabilities, such as parallel computing, have become a critical element for investigating advanced nanophotonic devices. This role has taken on even greater urgency with the scale-down of device dimensions, as the design of these devices requires extensive memory and extremely long core hours. Thus distributed computing platforms associated with parallel computing are required for faster design processes. Scientific parallel computing constructs mathematical models and quantitative analysis techniques, and uses computing machines to analyze and solve otherwise intractable scientific challenges. In

  11. Scientific computing and algorithms in industrial simulations projects and products of Fraunhofer SCAI

    CERN Document Server

    Schüller, Anton; Schweitzer, Marc

    2017-01-01

    The contributions gathered here provide an overview of current research projects and selected software products of the Fraunhofer Institute for Algorithms and Scientific Computing SCAI. They show the wide range of challenges that scientific computing currently faces, the solutions it offers, and its important role in developing applications for industry. Given the exciting field of applied collaborative research and development it discusses, the book will appeal to scientists, practitioners, and students alike. The Fraunhofer Institute for Algorithms and Scientific Computing SCAI combines excellent research and application-oriented development to provide added value for our partners. SCAI develops numerical techniques, parallel algorithms and specialized software tools to support and optimize industrial simulations. Moreover, it implements custom software solutions for production and logistics, and offers calculations on high-performance computers. Its services and products are based on state-of-the-art metho...

  12. Domain analysis of computational science - Fifty years of a scientific computing group

    Energy Technology Data Exchange (ETDEWEB)

    Tanaka, M.

    2010-02-23

    I employed bibliometric and historical methods to study the domain of the Scientific Computing group at Brookhaven National Laboratory (BNL) over an extended period of fifty years, from 1958 to 2007. I noted and confirmed the growing emergence of interdisciplinarity within the group. I also identified a strong, consistent mathematics and physics orientation within it.

  13. New tools to aid in scientific computing and visualization

    International Nuclear Information System (INIS)

    Wallace, M.G.; Christian-Frear, T.L.

    1992-01-01

    In this paper, two computer programs are described which aid in the pre- and post-processing of computer-generated data. CoMeT (Computational Mechanics Toolkit) is a customizable, interactive, graphical, menu-driven program that provides the analyst with a consistent, user-friendly interface to analysis codes. Trans Vol (Transparent Volume Visualization) is a specialized tool for the scientific three-dimensional visualization of complex solids by the technique of volume rendering. Both tools are described in basic detail, along with an application example concerning the simulation of contaminant migration from an underground nuclear repository.
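
    To make the volume rendering technique mentioned above concrete, the following C sketch (not the Trans Vol implementation) composites scalar samples along a single ray from front to back; the samples and the toy transfer function are assumed values.

      /* Minimal sketch (not the cited tool): front-to-back alpha compositing of
         scalar samples along one ray, the core operation of volume rendering. */
      #include <stdio.h>

      int main(void) {
          double samples[] = { 0.1, 0.4, 0.8, 0.3, 0.0 };   /* scalars along one ray, assumed */
          int n = (int)(sizeof samples / sizeof samples[0]);

          double color = 0.0, alpha = 0.0;        /* accumulated intensity / opacity */
          for (int i = 0; i < n && alpha < 0.99; ++i) {     /* early ray termination */
              double c = samples[i];              /* toy transfer function: emission */
              double a = 0.5 * samples[i];        /* toy transfer function: opacity */
              color += (1.0 - alpha) * a * c;     /* front-to-back compositing */
              alpha += (1.0 - alpha) * a;
          }
          printf("pixel value %.3f (opacity %.3f)\n", color, alpha);
          return 0;
      }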

  14. Accelerating Science to Action: NGOs Catalyzing Scientific Research using Philanthropic/Corporate Funding

    Science.gov (United States)

    Hamburg, S.

    2017-12-01

    While government funding of scientific research has been the bedrock of scientific advances in the US, it is seldom quick or directly responsive to societal needs. If we are to respond effectively to the increasingly urgent need for new science to address the environmental and social challenges faced by humanity and the environment, we need to deploy new scientific models to augment government-centric approaches. The Environmental Defense Fund has developed an approach that accelerates the development and uptake of new science in pursuit of science-based policy, filling the gap while government research efforts are initiated. We utilized this approach in developing the data necessary to quantify methane emissions from the oil and gas supply chain. This effort was based on five key principles: studies led by academic researchers; deployment of multiple methods whenever possible (e.g. top-down and bottom-up); all data made public (identity, but not location, masked when possible); external scientific review; and results released in peer-reviewed scientific journals. The research to quantify methane emissions involved > 150 scientists from 40 institutions, resulting in 35 papers published over four years. In addition to the research community, companies operating along the oil and gas value chain participated by providing access to sites/vehicles and funding for a portion of the academic research. The bulk of funding came from philanthropic sources. Overall, the use of this alternative research/funding model allowed the more rapid development of a robust body of policy-relevant knowledge that addressed an issue of high societal interest and value.

  15. Scholarly literature and the press: scientific impact and social perception of physics computing

    International Nuclear Information System (INIS)

    Pia, M G; Basaglia, T; Bell, Z W; Dressendorfer, P V

    2014-01-01

    The broad coverage of the search for the Higgs boson in the mainstream media is a relative novelty for high energy physics (HEP) research, whose achievements have traditionally been limited to scholarly literature. This paper illustrates the results of a scientometric analysis of HEP computing in scientific literature, institutional media and the press, and a comparative overview of similar metrics concerning representative particle physics measurements. The picture emerging from these scientometric data documents the relationship between the scientific impact and the social perception of HEP physics research versus that of HEP computing. The results of this analysis suggest that improved communication of the scientific and social role of HEP computing via press releases from the major HEP laboratories would be beneficial to the high energy physics community.

  16. Data-flow oriented visual programming libraries for scientific computing

    NARCIS (Netherlands)

    Maubach, J.M.L.; Drenth, W.D.; Sloot, P.M.A.

    2002-01-01

    The growing release of scientific computational software does not seem to aid the implementation of complex numerical algorithms. Released libraries lack a common standard interface with regard to, for instance, finite element, difference or volume discretizations. And, libraries written in standard

  17. National Energy Research Scientific Computing Center 2007 Annual Report

    Energy Technology Data Exchange (ETDEWEB)

    Hules, John A.; Bashor, Jon; Wang, Ucilia; Yarris, Lynn; Preuss, Paul

    2008-10-23

    This report presents highlights of the research conducted on NERSC computers in a variety of scientific disciplines during the year 2007. It also reports on changes and upgrades to NERSC's systems and services as well as activities of NERSC staff.

  18. FINAL REPORT DE-FG02-04ER41317 Advanced Computation and Chaotic Dynamics for Beams and Accelerators

    Energy Technology Data Exchange (ETDEWEB)

    Cary, John R [U. Colorado

    2014-09-08

    During the year ending in August 2013, we continued to investigate the potential of photonic crystal (PhC) materials for acceleration purposes. We worked to characterize the acceleration ability of simple PhC accelerator structures, as well as to characterize PhC materials to determine whether current fabrication techniques can meet the needs of future accelerating structures. We have also continued to design and optimize PhC accelerator structures, with the ultimate goal of finding a new kind of accelerator structure that could offer significant advantages over current RF acceleration technology. The design and optimization of these structures requires high-performance computation, and we continue to work on methods to make such computation faster and more efficient.

  19. Accelerating Neuroimage Registration through Parallel Computation of Similarity Metric.

    Directory of Open Access Journals (Sweden)

    Yun-Gang Luo

    Full Text Available Neuroimage registration is crucial for brain morphometric analysis and treatment efficacy evaluation. However, existing advanced registration algorithms such as FLIRT and ANTs are not efficient enough for clinical use. In this paper, a GPU implementation of FLIRT with the correlation ratio (CR) as the similarity metric and a GPU-accelerated correlation coefficient (CC) calculation for the symmetric diffeomorphic registration of ANTs have been developed. Comparison with the corresponding original tools shows that our accelerated algorithms greatly outperform the originals in terms of computational efficiency. This paper demonstrates the great potential of applying these registration tools in clinical applications.
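
    For reference, the correlation coefficient (CC) similarity metric reduces to a handful of sums over voxel intensities; the C sketch below is a plain CPU version (not the authors' GPU code) of the computation that such GPU kernels parallelize, using toy intensity arrays.

      /* Minimal sketch (CPU only, not the cited GPU code): the correlation
         coefficient (CC) similarity metric between two images stored as flat
         intensity arrays. GPU versions parallelize exactly these reductions. */
      #include <math.h>
      #include <stdio.h>

      double correlation_coefficient(const double *a, const double *b, int n) {
          double ma = 0.0, mb = 0.0;
          for (int i = 0; i < n; ++i) { ma += a[i]; mb += b[i]; }
          ma /= n; mb /= n;

          double cov = 0.0, va = 0.0, vb = 0.0;
          for (int i = 0; i < n; ++i) {
              cov += (a[i] - ma) * (b[i] - mb);
              va  += (a[i] - ma) * (a[i] - ma);
              vb  += (b[i] - mb) * (b[i] - mb);
          }
          return cov / sqrt(va * vb);             /* in [-1, 1]; 1 = identical up to scale */
      }

      int main(void) {
          double fixed_img[]  = { 1, 2, 3, 4, 5, 6 };   /* assumed toy intensities */
          double moving_img[] = { 2, 4, 5, 4, 6, 8 };
          printf("CC = %.4f\n", correlation_coefficient(fixed_img, moving_img, 6));
          return 0;
      }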

  20. DOE Advanced Scientific Computing Advisory Committee (ASCAC) Report: Exascale Computing Initiative Review

    Energy Technology Data Exchange (ETDEWEB)

    Reed, Daniel [University of Iowa]; Berzins, Martin [University of Utah]; Pennington, Robert; Sarkar, Vivek [Rice University]; Taylor, Valerie [Texas A&M University]

    2015-08-01

    On November 19, 2014, the Advanced Scientific Computing Advisory Committee (ASCAC) was charged with reviewing the Department of Energy’s conceptual design for the Exascale Computing Initiative (ECI). In particular, this included assessing whether there are significant gaps in the ECI plan or areas that need to be given priority or extra management attention. Given the breadth and depth of previous reviews of the technical challenges inherent in exascale system design and deployment, the subcommittee focused its assessment on organizational and management issues, considering technical issues only as they informed organizational or management priorities and structures. This report presents the observations and recommendations of the subcommittee.

  1. Scientific computing vol II - eigenvalues and optimization

    CERN Document Server

    Trangenstein, John A

    2017-01-01

    This is the second of three volumes providing a comprehensive presentation of the fundamentals of scientific computing. This volume discusses more advanced topics than volume one, and is largely not a prerequisite for volume three. This book and its companions show how to determine the quality of computational results, and how to measure the relative efficiency of competing methods. Readers learn how to determine the maximum attainable accuracy of algorithms, and how to select the best method for computing problems. This book also discusses programming in several languages, including C++, Fortran and MATLAB. There are 49 examples, 110 exercises, 66 algorithms, 24 interactive JavaScript programs, 77 references to software programs and 1 case study. Topics are introduced with goals, literature references and links to public software. There are descriptions of the current algorithms in LAPACK, GSLIB and MATLAB. This book could be used for a second course in numerical methods, for either upper level undergraduate...

  2. Scientific computing vol III - approximation and integration

    CERN Document Server

    Trangenstein, John A

    2017-01-01

    This is the third of three volumes providing a comprehensive presentation of the fundamentals of scientific computing. This volume discusses topics that depend more on calculus than linear algebra, in order to prepare the reader for solving differential equations. This book and its companions show how to determine the quality of computational results, and how to measure the relative efficiency of competing methods. Readers learn how to determine the maximum attainable accuracy of algorithms, and how to select the best method for computing problems. This book also discusses programming in several languages, including C++, Fortran and MATLAB. There are 90 examples, 200 exercises, 36 algorithms, 40 interactive JavaScript programs, 91 references to software programs and 1 case study. Topics are introduced with goals, literature references and links to public software. There are descriptions of the current algorithms in GSLIB and MATLAB. This book could be used for a second course in numerical methods, for either ...

  3. UNEDF: Advanced Scientific Computing Collaboration Transforms the Low-Energy Nuclear Many-Body Problem

    International Nuclear Information System (INIS)

    Nam, H; Stoitsov, M; Nazarewicz, W; Hagen, G; Kortelainen, M; Pei, J C; Bulgac, A; Maris, P; Vary, J P; Roche, K J; Schunck, N; Thompson, I; Wild, S M

    2012-01-01

    The demands of cutting-edge science are driving the need for larger and faster computing resources. With the rapidly growing scale of computing systems and the prospect of technologically disruptive architectures to meet these needs, scientists face the challenge of effectively using complex computational resources to advance scientific discovery. Multi-disciplinary collaborating networks of researchers with diverse scientific backgrounds are needed to address these complex challenges. The UNEDF SciDAC collaboration of nuclear theorists, applied mathematicians, and computer scientists is developing a comprehensive description of nuclei and their reactions that delivers maximum predictive power with quantified uncertainties. This paper describes UNEDF and identifies attributes that classify it as a successful computational collaboration. We illustrate significant milestones accomplished by UNEDF through integrative solutions using the most reliable theoretical approaches, most advanced algorithms, and leadership-class computational resources.

  4. Theoretical and technological building blocks for an innovation accelerator

    Science.gov (United States)

    van Harmelen, F.; Kampis, G.; Börner, K.; van den Besselaar, P.; Schultes, E.; Goble, C.; Groth, P.; Mons, B.; Anderson, S.; Decker, S.; Hayes, C.; Buecheler, T.; Helbing, D.

    2012-11-01

    Modern science is a main driver of technological innovation. The efficiency of the scientific system is of key importance to ensure the competitiveness of a nation or region. However, the scientific system that we use today was devised centuries ago and is inadequate for our current ICT-based society: the peer review system encourages conservatism, journal publications are monolithic and slow, data is often not available to other scientists, and the independent validation of results is limited. The resulting scientific process is hence slow and sloppy. Building on the Innovation Accelerator paper by Helbing and Balietti [1], this paper takes the initial global vision and reviews the theoretical and technological building blocks that can be used for implementing an innovation (in first place: science) accelerator platform driven by re-imagining the science system. The envisioned platform would rest on four pillars: (i) Redesign the incentive scheme to reduce behavior such as conservatism, herding and hyping; (ii) Advance scientific publications by breaking up the monolithic paper unit and introducing other building blocks such as data, tools, experiment workflows, resources; (iii) Use machine readable semantics for publications, debate structures, provenance etc. in order to include the computer as a partner in the scientific process, and (iv) Build an online platform for collaboration, including a network of trust and reputation among the different types of stakeholders in the scientific system: scientists, educators, funding agencies, policy makers, students and industrial innovators among others. Any such improvements to the scientific system must support the entire scientific process (unlike current tools that chop up the scientific process into disconnected pieces), must facilitate and encourage collaboration and interdisciplinarity (again unlike current tools), must facilitate the inclusion of intelligent computing in the scientific process, must facilitate

  5. Scientific Grand Challenges: Crosscutting Technologies for Computing at the Exascale - February 2-4, 2010, Washington, D.C.

    Energy Technology Data Exchange (ETDEWEB)

    Khaleel, Mohammad A.

    2011-02-06

    The goal of the "Scientific Grand Challenges - Crosscutting Technologies for Computing at the Exascale" workshop in February 2010, jointly sponsored by the U.S. Department of Energy’s Office of Advanced Scientific Computing Research and the National Nuclear Security Administration, was to identify the elements of a research and development agenda that will address the crosscutting technology challenges of exascale computing and create a comprehensive exascale computing environment. This exascale computing environment will enable the science applications identified in the eight previously held workshops of the Scientific Grand Challenges Workshop Series.

  6. FPGA Compute Acceleration for High-Throughput Data Processing in High-Energy Physics Experiments

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    The upgrades of the four large experiments of the LHC at CERN in the coming years will result in a huge increase in data bandwidth for each experiment, which needs to be processed very efficiently. For example, the LHCb experiment will upgrade its detector in 2019/2020 to a 'triggerless' readout scheme, where all of the readout electronics and several sub-detector parts will be replaced. The new readout electronics will be able to read out the detector at 40 MHz. This increases the data bandwidth from the detector down to the event filter farm to 40 Tbit/s, which must be processed to select the interesting proton-proton collisions for later storage. Designing the architecture of such a computing farm, which can process this amount of data as efficiently as possible, is a challenging task, and several compute accelerator technologies are being considered. In the high performance computing sector more and more FPGA compute accelerators are being used to improve the compute performance and reduce the...

  7. Basic research in the East and West: a comparison of the scientific performance of high-energy physics accelerators

    International Nuclear Information System (INIS)

    Irvine, J.; Martin, B.R.

    1985-01-01

    This paper presents the results of a study comparing the past scientific performance of high-energy physics accelerators in the Eastern bloc with that of their main Western counterparts. Output-evaluation indicators are used. After carefully examining the extent to which the output indicators used may be biased against science in the Eastern bloc, various conclusions are drawn about the relative contributions to science made by these accelerators. Where significant differences in performance are apparent, an attempt is made to identify the main factors responsible. (author)

  8. High-integrity software, computation and the scientific method

    International Nuclear Information System (INIS)

    Hatton, L.

    2012-01-01

    Computation rightly occupies a central role in modern science. Datasets are enormous and the processing implications of some algorithms are equally staggering. With the continuing difficulties in quantifying the results of complex computations, it is of increasing importance to understand its role in the essentially Popperian scientific method. In this paper, some of the problems with computation, for example the long-term unquantifiable presence of undiscovered defects, problems with programming languages and process issues, will be explored with numerous examples. One of the aims of the paper is to understand the implications of trying to produce high-integrity software and the limitations which still exist. Unfortunately, Computer Science itself suffers from an inability to be suitably critical of its practices and has operated in a largely measurement-free vacuum since its earliest days. Within computer science itself, this has not been so damaging, in that it simply leads to unconstrained creativity and a rapid turnover of new technologies. In the applied sciences, however, which have to depend on computational results, such unquantifiability significantly undermines trust. It is time this particular demon was put to rest. (author)

  9. International Symposium on Scientific Computing, Computer Arithmetic and Validated Numerics

    CERN Document Server

    DEVELOPMENTS IN RELIABLE COMPUTING

    1999-01-01

    The SCAN conference, the International Symposium on Scientific Computing, Computer Arithmetic and Validated Numerics, takes place biannually under the joint auspices of GAMM (Gesellschaft für Angewandte Mathematik und Mechanik) and IMACS (International Association for Mathematics and Computers in Simulation). SCAN-98 attracted more than 100 participants from 21 countries all over the world. During the four days from September 22 to 25, nine highlighted plenary lectures and over 70 contributed talks were given. These figures indicate a large participation, which was partly caused by the attraction of the organizing country, Hungary, but the effective support system also contributed to the success. The conference was substantially supported by the Hungarian Research Fund OTKA, GAMM, the National Technology Development Board OMFB, and by the József Attila University. Due to this funding, it was possible to subsidize the participation of over 20 scientists, mainly from Eastern European countries. I...

  10. Computer simulations and the changing face of scientific experimentation

    CERN Document Server

    Duran, Juan M

    2013-01-01

    Computer simulations have become a central tool for scientific practice. Their use has replaced, in many cases, standard experimental procedures. This is not to mention cases where the target system is empirical but there are no techniques for direct manipulation of the system, such as astronomical observation. In such cases, computer simulations have proved to be of central importance. The question about their use and implementation, therefore, is not only a technical one but represents a challenge for the humanities as well. In this volume, scientists, historians, and philosophers joi

  11. PARA'04, State-of-the-art in scientific computing

    DEFF Research Database (Denmark)

    Madsen, Kaj; Wasniewski, Jerzy

    This meeting in the series, the PARA'04 Workshop, with the title "State of the Art in Scientific Computing", was held in Lyngby, Denmark, June 20-23, 2004. The PARA'04 Workshop was organized by Jack Dongarra from the University of Tennessee and Oak Ridge National Laboratory, and Kaj Madsen and J...

  12. Elastic Scheduling of Scientific Workflows under Deadline Constraints in Cloud Computing Environments

    Directory of Open Access Journals (Sweden)

    Nazia Anwar

    2018-01-01

    Full Text Available Scientific workflow applications are collections of several structured activities and fine-grained computational tasks. Scientific workflow scheduling in cloud computing is a challenging research topic due to its distinctive features. In cloud environments, it has become critical to perform efficient task scheduling resulting in reduced scheduling overhead, minimized cost and maximized resource utilization while still meeting the user-specified overall deadline. This paper proposes a strategy, Dynamic Scheduling of Bag of Tasks based workflows (DSB), for scheduling scientific workflows with the aim of minimizing the financial cost of leasing Virtual Machines (VMs) under a user-defined deadline constraint. The proposed model groups the workflow into Bags of Tasks (BoTs) based on data dependency and priority constraints and thereafter optimizes the allocation and scheduling of BoTs on elastic, heterogeneous and dynamically provisioned cloud resources called VMs in order to attain the proposed method’s objectives. The proposed approach considers pay-as-you-go Infrastructure as a Service (IaaS) clouds having inherent features such as elasticity, abundance, heterogeneity and VM provisioning delays. A trace-based simulation using benchmark scientific workflows representing real-world applications demonstrates a significant reduction in workflow computation cost while the workflow deadline is met. The results validate that the proposed model produces better success rates in meeting deadlines and better cost efficiency in comparison to adapted state-of-the-art algorithms for similar problems.
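
    As a hedged sketch of the general idea rather than the DSB algorithm itself, the following C program distributes a user deadline across Bags of Tasks in proportion to their estimated work and then selects the cheapest VM type that still meets each sub-deadline; all task sizes, VM speeds, and prices are invented for illustration.

      /* Minimal sketch (not the cited algorithm): proportional deadline
         distribution over Bags of Tasks, then cheapest-feasible VM selection. */
      #include <stdio.h>

      typedef struct { const char *name; double speed; double price_per_min; } VmType;

      int main(void) {
          VmType vms[] = {                        /* sorted from cheapest to fastest, assumed */
              { "small",  1.0, 0.10 },
              { "medium", 2.0, 0.25 },
              { "large",  4.0, 0.60 }
          };
          double work[] = { 120.0, 60.0, 180.0 }; /* estimated work per bag [core-min], assumed */
          int nbags = 3, nvms = 3;
          double deadline = 120.0, total_work = 0.0, total_cost = 0.0;

          for (int i = 0; i < nbags; ++i) total_work += work[i];

          for (int i = 0; i < nbags; ++i) {
              double sub_deadline = deadline * work[i] / total_work;
              int chosen = nvms - 1;              /* fall back to the fastest type */
              for (int v = 0; v < nvms; ++v)
                  if (work[i] / vms[v].speed <= sub_deadline) { chosen = v; break; }
              double runtime = work[i] / vms[chosen].speed;
              total_cost += runtime * vms[chosen].price_per_min;
              printf("bag %d -> %-6s runtime %6.1f min (sub-deadline %6.1f min)\n",
                     i, vms[chosen].name, runtime, sub_deadline);
          }
          printf("estimated VM cost: %.2f\n", total_cost);
          return 0;
      }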

  13. Predictive Performance Tuning of OpenACC Accelerated Applications

    KAUST Repository

    Siddiqui, Shahzeb

    2014-05-04

    Graphics Processing Units (GPUs) are gradually becoming mainstream in supercomputing, as their capabilities to significantly accelerate a large spectrum of scientific applications have been clearly identified and proven. Moreover, with the introduction of high-level programming models such as OpenACC [1] and OpenMP 4.0 [2], these devices are becoming more accessible and practical to use by a larger scientific community. However, performance optimization of OpenACC-accelerated applications usually requires an in-depth knowledge of the hardware and software specifications. We suggest a prediction-based performance tuning mechanism [3] to quickly tune OpenACC parameters for a given application and dynamically adapt to the execution environment on a given system. This approach is applied to a finite difference kernel to tune the OpenACC gang and vector clauses for mapping the compute kernels onto the underlying accelerator architecture. Our experiments show a significant performance improvement over the default compiler parameters and tuning that is an order of magnitude faster than brute-force search.
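
    To show what the tuned parameters look like in source code (a generic example, not the kernel from the paper), the following C function annotates a one-dimensional finite difference stencil with OpenACC; the gang/vector clauses and the num_gangs and vector_length values are placeholders for the parameters a prediction-based tuner would select, not recommended settings.

      /* Generic sketch (not the cited kernel): a 1-D finite difference stencil
         with OpenACC clauses as the tunable mapping parameters. */
      #include <stdlib.h>

      void stencil(const float *in, float *out, int n) {
          #pragma acc parallel loop gang vector num_gangs(1024) vector_length(128) \
                      copyin(in[0:n]) copyout(out[0:n])
          for (int i = 1; i < n - 1; ++i)
              out[i] = 0.5f * (in[i + 1] - in[i - 1]);   /* central difference */
      }

      int main(void) {
          int n = 1 << 20;
          float *in  = malloc((size_t)n * sizeof *in);
          float *out = malloc((size_t)n * sizeof *out);
          for (int i = 0; i < n; ++i) in[i] = (float)i;
          stencil(in, out, n);
          out[0] = out[n - 1] = 0.0f;            /* boundary points not touched by the stencil */
          free(in);
          free(out);
          return 0;
      }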

  14. ScalaLab and GroovyLab: Comparing Scala and Groovy for Scientific Computing

    Directory of Open Access Journals (Sweden)

    Stergios Papadimitriou

    2015-01-01

    Full Text Available ScalaLab and GroovyLab are both MATLAB-like environments for the Java Virtual Machine. ScalaLab is based on the Scala programming language and GroovyLab is based on the Groovy programming language. They present similar user interfaces and functionality to the user. They also share the same set of Java scientific libraries and native code libraries. From the programmer's point of view, though, they have significant differences. This paper compares some aspects of the two environments and highlights some of the strengths and weaknesses of Scala versus Groovy for scientific computing. The discussion also examines some aspects of the dilemma of using dynamic typing versus static typing for scientific programming. The performance of the Java platform continues to improve at a fast pace. Today Java can effectively support demanding high-performance computing and scales well on multicore platforms. Thus, both systems can challenge the performance of traditional C/C++/Fortran scientific code with an easier-to-use and more productive programming environment.

  15. Predicting future discoveries from current scientific literature.

    Science.gov (United States)

    Petrič, Ingrid; Cestnik, Bojan

    2014-01-01

    Knowledge discovery in biomedicine is a time-consuming process, starting from basic research, through preclinical testing, towards possible clinical applications. Crossing conceptual boundaries is often needed for groundbreaking biomedical research that generates highly inventive discoveries. We demonstrate the ability of a creative literature mining method to advance valuable new discoveries based on rare ideas from existing literature. When emerging ideas from scientific literature are put together as fragments of knowledge in a systematic way, they may lead to original, sometimes surprising, research findings. If enough scientific evidence has already been published for the association of such findings, they can be considered scientific hypotheses. In this chapter, we describe a method for the computer-aided generation of such hypotheses based on the existing scientific literature. Our literature-based discovery of NF-kappaB and its possible connections to autism was recently validated by the scientific community, which confirms the ability of our literature mining methodology to accelerate future discoveries based on rare ideas from existing literature.

  16. The Observation of Bahasa Indonesia Official Computer Terms Implementation in Scientific Publication

    Science.gov (United States)

    Gunawan, D.; Amalia, A.; Lydia, M. S.; Muthaqin, M. I.

    2018-03-01

    The government of the Republic of Indonesia has issued a regulation to replace the foreign-language computer terms used earlier with official computer terms in Bahasa Indonesia. This regulation was stipulated in Presidential Decree No. 2 of 2001 concerning the introduction of official computer terms in Bahasa Indonesia (known as Senarai Padanan Istilah/SPI). After sixteen years, the people of Indonesia, particularly academics, should have implemented the official computer terms in their official publications. This observation was conducted to discover the extent to which official computer terms are used in scientific publications written in Bahasa Indonesia. The data sources used in this observation are publications by academics, particularly in the computer science field. The method used in the observation is divided into four stages. The first stage is metadata harvesting using the Open Archives Initiative - Protocol for Metadata Harvesting (OAI-PMH). The second stage converts the harvested documents (in PDF format) to plain text. The third stage is text preprocessing in preparation for string matching. The final stage searches for the official computer terms, based on the 629 SPI terms, using the Boyer-Moore algorithm. We observed 240,781 foreign computer terms in 1,156 scientific publications from six universities. This result shows that foreign computer terms are still widely used by academics.
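
    For context, the Boyer-Moore family of algorithms skips ahead through the text using a precomputed shift table; the C sketch below (not the authors' code) implements the simplified Boyer-Moore-Horspool variant, which applies only the bad-character rule, of the kind that could scan publication text for each SPI term. The sample strings are invented.

      /* Minimal sketch (not the cited implementation): Boyer-Moore-Horspool
         substring search using only the bad-character shift rule. */
      #include <stdio.h>
      #include <string.h>

      /* Returns the index of the first occurrence of pat in text, or -1. */
      long bmh_search(const char *text, const char *pat) {
          long n = (long)strlen(text), m = (long)strlen(pat);
          if (m == 0) return 0;
          if (m > n)  return -1;

          long shift[256];
          for (int c = 0; c < 256; ++c) shift[c] = m;      /* default: skip whole pattern */
          for (long i = 0; i < m - 1; ++i)
              shift[(unsigned char)pat[i]] = m - 1 - i;    /* bad-character rule */

          long pos = 0;
          while (pos <= n - m) {
              long j = m - 1;
              while (j >= 0 && pat[j] == text[pos + j]) --j;
              if (j < 0) return pos;                       /* full match */
              pos += shift[(unsigned char)text[pos + m - 1]];
          }
          return -1;
      }

      int main(void) {
          /* Invented sentence; "tetikus" is the Indonesian term for a computer mouse. */
          printf("%ld\n", bmh_search("perangkat keras dan tetikus komputer", "tetikus"));
          return 0;
      }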

  17. From Mars to Minerva: The origins of scientific computing in the AEC labs

    Energy Technology Data Exchange (ETDEWEB)

    Seidel, R.W. [ERA Land Grant Professor of the History of Technology; Charles Babbage Institute, University of Minnesota, Minneapolis, Minnesota (United States)]

    1996-10-01

    Although the AEC laboratories are renowned for the development of nuclear weapons, their largess in promoting scientific computing also had a profound effect on scientific and technological development in the second half of the 20th century. © 1996 American Institute of Physics.

  18. OPENING REMARKS: Scientific Discovery through Advanced Computing

    Science.gov (United States)

    Strayer, Michael

    2006-01-01

    Good morning. Welcome to SciDAC 2006 and Denver. I share greetings from the new Undersecretary for Energy, Ray Orbach. Five years ago SciDAC was launched as an experiment in computational science. The goal was to form partnerships among science applications, computer scientists, and applied mathematicians to take advantage of the potential of emerging terascale computers. This experiment has been a resounding success. SciDAC has emerged as a powerful concept for addressing some of the biggest challenges facing our world. As significant as these successes were, I believe there is also significance in the teams that achieved them. In addition to their scientific aims these teams have advanced the overall field of computational science and set the stage for even larger accomplishments as we look ahead to SciDAC-2. I am sure that many of you are expecting to hear about the results of our current solicitation for SciDAC-2. I’m afraid we are not quite ready to make that announcement. Decisions are still being made and we will announce the results later this summer. Nearly 250 unique proposals were received and evaluated, involving literally thousands of researchers, postdocs, and students. These collectively requested more than five times our expected budget. This response is a testament to the success of SciDAC in the community. In SciDAC-2 our budget has been increased to about 70 million for FY 2007 and our partnerships have expanded to include the Environment and National Security missions of the Department. The National Science Foundation has also joined as a partner. These new partnerships are expected to expand the application space of SciDAC, and broaden the impact and visibility of the program. We have, with our recent solicitation, expanded to turbulence, computational biology, and groundwater reactive modeling and simulation. We are currently talking with the Department’s applied energy programs about risk assessment, optimization of complex systems - such

  19. X-BAND LINEAR COLLIDER R and D IN ACCELERATING STRUCTURES THROUGH ADVANCED COMPUTING

    International Nuclear Information System (INIS)

    Li, Z

    2004-01-01

    This paper describes a major computational effort that addresses key design issues in the high gradient accelerating structures for the proposed X-band linear collider, GLC/NLC. Supported by the US DOE's Accelerator Simulation Project, SLAC is developing a suite of parallel electromagnetic codes based on unstructured grids for modeling RF structures with higher accuracy and on a scale previously not possible. The new simulation tools have played an important role in the R and D of X-band accelerating structures, in cell design, wakefield analysis and dark current studies.

  20. Large-scale computation at PSI scientific achievements and future requirements

    International Nuclear Information System (INIS)

    Adelmann, A.; Markushin, V.

    2008-11-01

    Computational modelling and simulation are among the disciplines that have seen the most dramatic growth in capabilities in the 20th Century. Within the past two decades, scientific computing has become an important contributor to all scientific research programs. Computational modelling and simulation are particularly indispensable for solving research problems that are unsolvable by traditional theoretical and experimental approaches, hazardous to study, or time consuming or expensive to solve by traditional means. Many such research areas are found in PSI's research portfolio. Advances in computing technologies (including hardware and software) during the past decade have set the stage for a major step forward in modelling and simulation. We have now arrived at a situation where we have a number of otherwise unsolvable problems, where simulations are as complex as the systems under study. In 2008 the High-Performance Computing (HPC) community entered the petascale era with the heterogeneous Opteron/Cell machine called Roadrunner, built by IBM for the Los Alamos National Laboratory. We are on the brink of a time where the availability of many hundreds of thousands of cores will open up new challenging possibilities in physics, algorithms (numerical mathematics) and computer science. However, to deliver on this promise, it is not enough to provide 'peak' performance in terms of petaflops, the maximum theoretical speed a computer can attain. Most important, this must be translated into a corresponding increase in the capabilities of scientific codes. This is a daunting problem that can only be solved by increasing investment in hardware, in the accompanying system software that enables the reliable use of high-end computers, in scientific competence, i.e. the mathematical (parallel) algorithms that are the basis of the codes, and in education. In the case of Switzerland, the white paper 'Swiss National Strategic Plan for High Performance Computing and Networking

  1. Large-scale computation at PSI scientific achievements and future requirements

    Energy Technology Data Exchange (ETDEWEB)

    Adelmann, A.; Markushin, V

    2008-11-15

    Computational modelling and simulation are among the disciplines that have seen the most dramatic growth in capabilities in the 20th Century. Within the past two decades, scientific computing has become an important contributor to all scientific research programs. Computational modelling and simulation are particularly indispensable for solving research problems that are unsolvable by traditional theoretical and experimental approaches, hazardous to study, or time consuming or expensive to solve by traditional means. Many such research areas are found in PSI's research portfolio. Advances in computing technologies (including hardware and software) during the past decade have set the stage for a major step forward in modelling and simulation. We have now arrived at a situation where we have a number of otherwise unsolvable problems, where simulations are as complex as the systems under study. In 2008 the High-Performance Computing (HPC) community entered the petascale era with the heterogeneous Opteron/Cell machine called Roadrunner, built by IBM for the Los Alamos National Laboratory. We are on the brink of a time where the availability of many hundreds of thousands of cores will open up new challenging possibilities in physics, algorithms (numerical mathematics) and computer science. However, to deliver on this promise, it is not enough to provide 'peak' performance in terms of petaflops, the maximum theoretical speed a computer can attain. Most important, this must be translated into a corresponding increase in the capabilities of scientific codes. This is a daunting problem that can only be solved by increasing investment in hardware, in the accompanying system software that enables the reliable use of high-end computers, in scientific competence, i.e. the mathematical (parallel) algorithms that are the basis of the codes, and in education. In the case of Switzerland, the white paper 'Swiss National Strategic Plan for High Performance Computing

  2. Accelerating Development of EV Batteries Through Computer-Aided Engineering (Presentation)

    Energy Technology Data Exchange (ETDEWEB)

    Pesaran, A.; Kim, G. H.; Smith, K.; Santhanagopalan, S.

    2012-12-01

    The Department of Energy's Vehicle Technology Program has launched the Computer-Aided Engineering for Automotive Batteries (CAEBAT) project to work with national labs, industry and software vendors to develop sophisticated software. As coordinator, NREL has teamed with a number of companies to help improve and accelerate battery design and production. This presentation provides an overview of CAEBAT, including its predictive computer simulation of Li-ion batteries, known as the Multi-Scale Multi-Dimensional (MSMD) model framework. MSMD's modular, flexible architecture connects the physics of battery charge/discharge processes, thermal control, safety and reliability in a computationally efficient manner. This allows independent development of submodels at the cell and pack levels.

  3. National facility for advanced computational science: A sustainable path to scientific discovery

    Energy Technology Data Exchange (ETDEWEB)

    Simon, Horst; Kramer, William; Saphir, William; Shalf, John; Bailey, David; Oliker, Leonid; Banda, Michael; McCurdy, C. William; Hules, John; Canning, Andrew; Day, Marc; Colella, Philip; Serafini, David; Wehner, Michael; Nugent, Peter

    2004-04-02

    Lawrence Berkeley National Laboratory (Berkeley Lab) proposes to create a National Facility for Advanced Computational Science (NFACS) and to establish a new partnership between the American computer industry and a national consortium of laboratories, universities, and computing facilities. NFACS will provide leadership-class scientific computing capability to scientists and engineers nationwide, independent of their institutional affiliation or source of funding. This partnership will bring into existence a new class of computational capability in the United States that is optimal for science and will create a sustainable path towards petaflops performance.

  4. Technical Challenges and Scientific Payoffs of Muon Beam Accelerators for Particle Physics

    International Nuclear Information System (INIS)

    Zisman, Michael S.

    2007-01-01

    Historically, progress in particle physics has largely been determined by development of more capable particle accelerators. This trend continues today with the recent advent of high-luminosity electron-positron colliders at KEK and SLAC operating as 'B factories', the imminent commissioning of the Large Hadron Collider at CERN, and the worldwide development effort toward the International Linear Collider. Looking to the future, one of the most promising approaches is the development of muon-beam accelerators. Such machines have very high scientific potential, and would substantially advance the state-of-the-art in accelerator design. A 20-50 GeV muon storage ring could serve as a copious source of well-characterized electron neutrinos or antineutrinos (a Neutrino Factory), providing beams aimed at detectors located 3000-7500 km from the ring. Such long baseline experiments are expected to be able to observe and characterize the phenomenon of charge-conjugation-parity (CP) violation in the lepton sector, and thus provide an answer to one of the most fundamental questions in science, namely, why the matter-dominated universe in which we reside exists at all. By accelerating muons to even higher energies of several TeV, we can envision a Muon Collider. In contrast with composite particles like protons, muons are point particles. This means that the full collision energy is available to create new particles. A Muon Collider has roughly ten times the energy reach of a proton collider at the same collision energy, and has a much smaller footprint. Indeed, an energy frontier Muon Collider could fit on the site of an existing laboratory, such as Fermilab or BNL. The challenges of muon-beam accelerators are related to the facts that (1) muons are produced as a tertiary beam, with very large 6D phase space, and (2) muons are unstable, with a lifetime at rest of only 2 microseconds. How these challenges are accommodated in the accelerator design will be described. Both a

  5. Cray XT4: An Early Evaluation for Petascale Scientific Simulation

    International Nuclear Information System (INIS)

    Alam, Sadaf R.; Barrett, Richard F.; Fahey, Mark R.; Kuehn, Jeffery A.; Sankaran, Ramanan; Worley, Patrick H.; Larkin, Jeffrey M.

    2007-01-01

    The scientific simulation capabilities of next generation high-end computing technology will depend on striking a balance among memory, processor, I/O, and local and global network performance across the breadth of the scientific simulation space. The Cray XT4 combines commodity AMD dual core Opteron processor technology with the second generation of Cray's custom communication accelerator in a system design whose balance is claimed to be driven by the demands of scientific simulation. This paper presents an evaluation of the Cray XT4 using microbenchmarks to develop a controlled understanding of individual system components, providing the context for analyzing and comprehending the performance of several petascale-ready applications. Results gathered from several strategic application domains are compared with observations on the previous generation Cray XT3 and other high-end computing systems, demonstrating performance improvements across a wide variety of application benchmark problems.

  6. 3-D computations and measurements of accelerator magnets for the APS

    International Nuclear Information System (INIS)

    Turner, L.R.; Kim, S.H.; Kim, K.

    1993-01-01

    The Advanced Photon Source (APS), now under construction at Argonne National Laboratory (ANL), requires dipole, quadrupole, sextupole, and corrector magnets for each of its circular accelerator systems. Three-dimensional (3-D) field computations are needed to eliminate unwanted multipole fields from the ends of long quadrupole and dipole magnets and to guarantee that the flux levels in the poles of short magnets will not cause saturation. Measurements of the magnets show good agreement with the computations.
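
    As a rough illustration of how unwanted harmonics are identified in such field computations and measurements (a generic sketch, not the codes used for the APS magnets), the following C program extracts normal and skew multipole coefficients from the radial field sampled on a reference circle inside the magnet bore via a discrete Fourier sum; the field samples are synthetic.

      /* Minimal sketch (not the cited analysis): multipole coefficients from a
         radial field sampled on a circle, here a quadrupole plus a small sextupole. */
      #include <math.h>
      #include <stdio.h>

      int main(void) {
          const double PI = 3.14159265358979323846;
          enum { N = 64 };                        /* samples around the circle */
          double Br[N];
          for (int k = 0; k < N; ++k) {
              double th = 2.0 * PI * k / N;
              Br[k] = 1.0 * sin(2.0 * th) + 0.01 * sin(3.0 * th);   /* synthetic field */
          }
          for (int n = 1; n <= 4; ++n) {          /* 1 = dipole, 2 = quadrupole, ... */
              double normal = 0.0, skew = 0.0;
              for (int k = 0; k < N; ++k) {
                  double th = 2.0 * PI * k / N;
                  normal += Br[k] * sin(n * th);
                  skew   += Br[k] * cos(n * th);
              }
              printf("n=%d  normal %.4f  skew %.4f\n", n, 2.0 * normal / N, 2.0 * skew / N);
          }
          return 0;
      }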

  7. Topics in numerical partial differential equations and scientific computing

    CERN Document Server

    2016-01-01

    Numerical partial differential equations (PDEs) are an important part of numerical simulation, the third component of the modern methodology for science and engineering, besides the traditional theory and experiment. This volume contains papers that originated with the collaborative research of the teams that participated in the IMA Workshop for Women in Applied Mathematics: Numerical Partial Differential Equations and Scientific Computing in August 2014.

  8. Advanced Scientific Computing Research Exascale Requirements Review. An Office of Science review sponsored by Advanced Scientific Computing Research, September 27-29, 2016, Rockville, Maryland

    Energy Technology Data Exchange (ETDEWEB)

    Almgren, Ann [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); DeMar, Phil [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); Vetter, Jeffrey [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Riley, Katherine [Argonne Leadership Computing Facility, Argonne, IL (United States); Antypas, Katie [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Bard, Deborah [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). National Energy Research Scientific Computing Center (NERSC); Coffey, Richard [Argonne National Lab. (ANL), Argonne, IL (United States); Dart, Eli [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Energy Science Network; Dosanjh, Sudip [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Gerber, Richard [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Hack, James [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Monga, Inder [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Energy Science Network; Papka, Michael E. [Argonne National Lab. (ANL), Argonne, IL (United States); Rotman, Lauren [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Energy Science Network; Straatsma, Tjerk [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Wells, Jack [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Bernholdt, David E. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Bethel, Wes [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Bosilca, George [Univ. of Tennessee, Knoxville, TN (United States); Cappello, Frank [Argonne National Lab. (ANL), Argonne, IL (United States); Gamblin, Todd [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Habib, Salman [Argonne National Lab. (ANL), Argonne, IL (United States); Hill, Judy [Oak Ridge Leadership Computing Facility, Oak Ridge, TN (United States); Hollingsworth, Jeffrey K. [Univ. of Maryland, College Park, MD (United States); McInnes, Lois Curfman [Argonne National Lab. (ANL), Argonne, IL (United States); Mohror, Kathryn [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Moore, Shirley [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Moreland, Ken [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Roser, Rob [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); Shende, Sameer [Univ. of Oregon, Eugene, OR (United States); Shipman, Galen [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Williams, Samuel [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2017-06-20

    The widespread use of computing in the American economy would not be possible without a thoughtful, exploratory research and development (R&D) community pushing the performance edge of operating systems, computer languages, and software libraries. These are the tools and building blocks — the hammers, chisels, bricks, and mortar — of the smartphone, the cloud, and the computing services on which we rely. Engineers and scientists need ever-more specialized computing tools to discover new material properties for manufacturing, make energy generation safer and more efficient, and provide insight into the fundamentals of the universe, for example. The research division of the U.S. Department of Energy’s (DOE’s) Office of Advanced Scientific Computing and Research (ASCR Research) ensures that these tools and building blocks are being developed and honed to meet the extreme needs of modern science. See also http://exascaleage.org/ascr/ for additional information.

  9. The rise of HPC accelerators: towards a common vision for a petascale future

    CERN Multimedia

    CERN. Geneva

    2011-01-01

    Nowadays, exciting new scientific discoveries are mainly driven by large, challenging simulations. An analysis of trends in High Performance Computing clearly shows that we have hit several barriers (CPU frequency, power consumption, technological limits, limitations of the present paradigms) that we cannot easily overcome. In this context, accelerators have become the concrete alternative for increasing the compute capabilities of the HPC clusters deployed in universities and research centers across Europe. Within the EC-funded "Partnership for Advanced Computing in Europe" (PRACE) project, several actions have been taken and will be taken to enable community codes to exploit accelerators in modern HPC architectures. In this talk, the vision and strategy adopted by the PRACE project will be presented, focusing on new HPC programming models and paradigms. Accelerators are a fundamental piece of innovation in this direction, from both the hardware and the software point of view. This work started dur...

  10. Trend Analysis of the Brazilian Scientific Production in Computer Science

    Directory of Open Access Journals (Sweden)

    TRUCOLO, C. C.

    2014-12-01

    Full Text Available The growth in the volume and diversity of scientific information brings new challenges in understanding the reasons, the process, and the real essence that propel this growth. This information can be used as the basis for developing strategies and public policies to improve education and innovation services. Trend analysis is one step in this direction. In this work, a trend analysis of the Brazilian scientific production of graduate programs in the computer science area is carried out to identify the main subjects studied by these programs, both collectively and individually.

  11. Implementation of Scientific Computing Applications on the Cell Broadband Engine

    Directory of Open Access Journals (Sweden)

    Guochun Shi

    2009-01-01

    Full Text Available The Cell Broadband Engine architecture is a revolutionary processor architecture well suited for many scientific codes. This paper reports on an effort to implement several traditional high-performance scientific computing applications on the Cell Broadband Engine processor, including molecular dynamics, quantum chromodynamics and quantum chemistry codes. The paper discusses data and code restructuring strategies necessary to adapt the applications to the intrinsic properties of the Cell processor and demonstrates performance improvements achieved on the Cell architecture. It concludes with the lessons learned and provides practical recommendations on optimization techniques that are believed to be most appropriate.
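
    The record does not spell out the restructuring strategies themselves; one common transformation on SIMD-oriented hardware such as the Cell's synergistic processing elements is converting an array of structures into a structure of arrays so that like components are contiguous for vector loads and DMA transfers. Below is a minimal, hypothetical C++ sketch of that idea; the particle layout and names are illustrative and are not taken from the paper.

        // Hypothetical example: restructuring an array of structures (AoS)
        // into a structure of arrays (SoA) so that like components become
        // contiguous for SIMD loads and DMA transfers.
        #include <cstddef>
        #include <vector>

        struct ParticleAoS { double x, y, z; };   // interleaved layout

        struct ParticlesSoA {                     // vector-friendly layout
            std::vector<double> x, y, z;
        };

        ParticlesSoA to_soa(const std::vector<ParticleAoS>& in) {
            ParticlesSoA out;
            out.x.reserve(in.size());
            out.y.reserve(in.size());
            out.z.reserve(in.size());
            for (const ParticleAoS& p : in) {
                out.x.push_back(p.x);
                out.y.push_back(p.y);
                out.z.push_back(p.z);
            }
            return out;
        }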

  12. Singularity hypotheses a scientific and philosophical assessment

    CERN Document Server

    Moor, James; Søraker, Johnny; Steinhart, Eric

    2012-01-01

    Singularity Hypotheses: A Scientific and Philosophical Assessment offers authoritative, jargon-free essays and critical commentaries on accelerating technological progress and the notion of technological singularity. It focuses on conjectures about the intelligence explosion, transhumanism, and whole brain emulation. Recent years have seen a plethora of forecasts about the profound, disruptive impact that is likely to result from further progress in these areas. Many commentators, however, doubt the scientific rigor of these forecasts, rejecting them as speculative and unfounded. We therefore invited prominent computer scientists, physicists, philosophers, biologists, economists and other thinkers to assess the singularity hypotheses. Their contributions go beyond speculation, providing deep insights into the main issues and a balanced picture of the debate.

  13. Molecular Science Computing Facility Scientific Challenges: Linking Across Scales

    Energy Technology Data Exchange (ETDEWEB)

    De Jong, Wibe A.; Windus, Theresa L.

    2005-07-01

    The purpose of this document is to define the evolving science drivers for performing environmental molecular research at the William R. Wiley Environmental Molecular Sciences Laboratory (EMSL) and to provide guidance associated with the next-generation high-performance computing center that must be developed at EMSL's Molecular Science Computing Facility (MSCF) in order to address this critical research. The MSCF is the pre-eminent computing facility, supported by the U.S. Department of Energy's (DOE's) Office of Biological and Environmental Research (BER), tailored to provide the fastest time-to-solution for current computational challenges in chemistry and biology, as well as providing the means for broad research in the molecular and environmental sciences. The MSCF provides integral resources and expertise to emerging EMSL Scientific Grand Challenges and Collaborative Access Teams that are designed to leverage the multiple integrated research capabilities of EMSL, thereby creating a synergy between computation and experiment to address environmental molecular science challenges critical to DOE and the nation.

  14. Scientific Computing Strategic Plan for the Idaho National Laboratory

    International Nuclear Information System (INIS)

    Whiting, Eric Todd

    2015-01-01

    Scientific computing is a critical foundation of modern science. Without innovations in the field of computational science, the essential missions of the Department of Energy (DOE) would go unrealized. Taking a leadership role in such innovations is Idaho National Laboratory's (INL's) challenge and charge, and is central to INL's ongoing success. Computing is an essential part of INL's future. DOE science and technology missions rely firmly on computing capabilities in various forms. Modeling and simulation, fueled by innovations in computational science and validated through experiment, are a critical foundation of science and engineering. Big data analytics from an increasing number of widely varied sources is opening new windows of insight and discovery. Computing is a critical tool in education, science, engineering, and experiments. Advanced computing capabilities in the form of people, tools, computers, and facilities will position INL competitively to deliver results and solutions on important national science and engineering challenges. A computing strategy must include much more than simply computers. The foundational enabling component of computing at many DOE national laboratories is the combination of a showcase data center facility coupled with a very capable supercomputer. In addition, network connectivity, disk storage systems, and visualization hardware are critical and generally tightly coupled to the computer system and co-located in the same facility. The existence of these resources in a single data center facility opens the doors to many opportunities that would not otherwise be possible.

  15. 2-D and 3-D computations of curved accelerator magnets

    International Nuclear Information System (INIS)

    Turner, L.R.

    1991-01-01

    In order to save computer memory, a long accelerator magnet may be computed by treating the long central region and the end regions separately. The dipole magnets for the injector synchrotron of the Advanced Photon Source (APS), now under construction at Argonne National Laboratory (ANL), employ magnet iron consisting of parallel laminations, stacked with a uniform radius of curvature of 33.379 m. Laplace's equation for the magnetic scalar potential has a different form for a straight magnet (x-y coordinates), a magnet with surfaces curved about a common center (r-θ coordinates), and a magnet with parallel laminations like the APS injector dipole. Yet pseudo 2-D computations for the three geometries give basically identical results, even for a much more strongly curved magnet. Hence 2-D (x-y) computations of the central region and 3-D computations of the end regions can be combined to determine the overall magnetic behavior of the magnets. 1 ref., 6 figs
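
    For reference, the two coordinate forms of Laplace's equation for the magnetic scalar potential contrasted in this record can be written as below; this is the standard textbook statement, not an excerpt from the paper, and the form for parallel laminations is omitted.

        \nabla^2 \phi = \frac{\partial^2 \phi}{\partial x^2} + \frac{\partial^2 \phi}{\partial y^2} + \frac{\partial^2 \phi}{\partial z^2} = 0
        \qquad \text{(straight magnet, } x\text{-}y \text{ coordinates)}

        \nabla^2 \phi = \frac{1}{r}\frac{\partial}{\partial r}\!\left(r\,\frac{\partial \phi}{\partial r}\right) + \frac{1}{r^2}\frac{\partial^2 \phi}{\partial \theta^2} + \frac{\partial^2 \phi}{\partial z^2} = 0
        \qquad \text{(surfaces curved about a common center, } r\text{-}\theta \text{ coordinates)}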

  16. Scientific and Computational Challenges of the Fusion Simulation Program (FSP)

    International Nuclear Information System (INIS)

    Tang, William M.

    2011-01-01

    This paper highlights the scientific and computational challenges facing the Fusion Simulation Program (FSP), a major national initiative in the United States with the primary objective being to enable scientific discovery of important new plasma phenomena with associated understanding that emerges only upon integration. This requires developing a predictive integrated simulation capability for magnetically-confined fusion plasmas that are properly validated against experiments in regimes relevant for producing practical fusion energy. It is expected to provide a suite of advanced modeling tools for reliably predicting fusion device behavior with comprehensive and targeted science-based simulations of nonlinearly-coupled phenomena in the core plasma, edge plasma, and wall region on time and space scales required for fusion energy production. As such, it will strive to embody the most current theoretical and experimental understanding of magnetic fusion plasmas and to provide a living framework for the simulation of such plasmas as the associated physics understanding continues to advance over the next several decades. Substantive progress on answering the outstanding scientific questions in the field will drive the FSP toward its ultimate goal of developing the ability to predict the behavior of plasma discharges in toroidal magnetic fusion devices with high physics fidelity on all relevant time and space scales. From a computational perspective, this will demand computing resources in the petascale range and beyond together with the associated multi-core algorithmic formulation needed to address burning plasma issues relevant to ITER - a multibillion dollar collaborative experiment involving seven international partners representing over half the world's population. Even more powerful exascale platforms will be needed to meet the future challenges of designing a demonstration fusion reactor (DEMO). Analogous to other major applied physics modeling projects (e

  17. Scientific and computational challenges of the fusion simulation program (FSP)

    International Nuclear Information System (INIS)

    Tang, William M.

    2011-01-01

    This paper highlights the scientific and computational challenges facing the Fusion Simulation Program (FSP) - a major national initiative in the United States with the primary objective being to enable scientific discovery of important new plasma phenomena with associated understanding that emerges only upon integration. This requires developing a predictive integrated simulation capability for magnetically-confined fusion plasmas that are properly validated against experiments in regimes relevant for producing practical fusion energy. It is expected to provide a suite of advanced modeling tools for reliably predicting fusion device behavior with comprehensive and targeted science-based simulations of nonlinearly-coupled phenomena in the core plasma, edge plasma, and wall region on time and space scales required for fusion energy production. As such, it will strive to embody the most current theoretical and experimental understanding of magnetic fusion plasmas and to provide a living framework for the simulation of such plasmas as the associated physics understanding continues to advance over the next several decades. Substantive progress on answering the outstanding scientific questions in the field will drive the FSP toward its ultimate goal of developing the ability to predict the behavior of plasma discharges in toroidal magnetic fusion devices with high physics fidelity on all relevant time and space scales. From a computational perspective, this will demand computing resources in the petascale range and beyond together with the associated multi-core algorithmic formulation needed to address burning plasma issues relevant to ITER - a multibillion dollar collaborative experiment involving seven international partners representing over half the world's population. Even more powerful exascale platforms will be needed to meet the future challenges of designing a demonstration fusion reactor (DEMO). Analogous to other major applied physics modeling projects (e

  18. DOE Advanced Scientific Computing Advisory Subcommittee (ASCAC) Report: Top Ten Exascale Research Challenges

    Energy Technology Data Exchange (ETDEWEB)

    Lucas, Robert [University of Southern California, Information Sciences Institute; Ang, James [Sandia National Laboratories; Bergman, Keren [Columbia University; Borkar, Shekhar [Intel; Carlson, William [Institute for Defense Analyses; Carrington, Laura [University of California, San Diego; Chiu, George [IBM; Colwell, Robert [DARPA; Dally, William [NVIDIA; Dongarra, Jack [University of Tennessee; Geist, Al [Oak Ridge National Laboratory; Haring, Rud [IBM; Hittinger, Jeffrey [Lawrence Livermore National Laboratory; Hoisie, Adolfy [Pacific Northwest National Laboratory; Klein, Dean [Micron; Kogge, Peter [University of Notre Dame; Lethin, Richard [Reservoir Labs; Sarkar, Vivek [Rice University; Schreiber, Robert [Hewlett Packard; Shalf, John [Lawrence Berkeley National Laboratory; Sterling, Thomas [Indiana University; Stevens, Rick [Argonne National Laboratory; Bashor, Jon [Lawrence Berkeley National Laboratory; Brightwell, Ron [Sandia National Laboratories; Coteus, Paul [IBM; Debenedictus, Erik [Sandia National Laboratories; Hiller, Jon [Science and Technology Associates; Kim, K. H. [IBM; Langston, Harper [Reservoir Labs; Murphy, Richard [Micron; Webster, Clayton [Oak Ridge National Laboratory; Wild, Stefan [Argonne National Laboratory; Grider, Gary [Los Alamos National Laboratory; Ross, Rob [Argonne National Laboratory; Leyffer, Sven [Argonne National Laboratory; Laros III, James [Sandia National Laboratories

    2014-02-10

    Exascale computing systems are essential for the scientific fields that will transform the 21st century global economy, including energy, biotechnology, nanotechnology, and materials science. Progress in these fields is predicated on the ability to perform advanced scientific and engineering simulations, and analyze the deluge of data. On July 29, 2013, ASCAC was charged by Patricia Dehmer, the Acting Director of the Office of Science, to assemble a subcommittee to provide advice on exascale computing. This subcommittee was directed to return a list of no more than ten technical approaches (hardware and software) that will enable the development of a system that achieves the Department's goals for exascale computing. Numerous reports over the past few years have documented the technical challenges and the non-viability of simply scaling existing computer designs to reach exascale. The technical challenges revolve around energy consumption, memory performance, resilience, extreme concurrency, and big data. Drawing from these reports and more recent experience, this ASCAC subcommittee has identified the top ten computing technology advancements that are critical to making a capable, economically viable, exascale system.

  19. Pelletron accelerator at Panjab University Chandigarh

    International Nuclear Information System (INIS)

    Singh, Nirmal; Mehta, Devinder

    2006-01-01

    The purpose of the pelletron accelerator at Panjab University is to develop a low-energy accelerator laboratory within the university infrastructure; it will house a tandem electrostatic accelerator. The facility will bring together the available scientific expertise from a wide range of applications, viz. medical, biological and physical sciences and engineering, that utilize accelerator-based technologies and techniques. It will play an important role in promoting integrated research and education across the scientific disciplines available on the campus. (author)

  20. Quality assurance of analytical, scientific, and design computer programs for nuclear power plants

    International Nuclear Information System (INIS)

    1994-06-01

    This Standard applies to the design and development, modification, documentation, execution, and configuration management of computer programs used to perform analytical, scientific, and design computations during the design and analysis of safety-related nuclear power plant equipment, systems, structures, and components as identified by the owner. 2 figs

  1. Quality assurance of analytical, scientific, and design computer programs for nuclear power plants

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1994-06-01

    This Standard applies to the design and development, modification, documentation, execution, and configuration management of computer programs used to perform analytical, scientific, and design computations during the design and analysis of safety-related nuclear power plant equipment, systems, structures, and components as identified by the owner. 2 figs.

  2. Designing Scientific Software for Heterogeneous Computing

    DEFF Research Database (Denmark)

    Glimberg, Stefan Lemvig

    , algorithms and data structures must be designed to utilize the underlying parallel architecture. The architectural changes in hardware design within the last decade, from single to multi and many-core architectures, require software developers to identify and properly implement methods that both exploit...... makes parallel software design applicable, but also a challenge for scientific software developers at all levels. We have developed a generic C++ library for fast prototyping of large-scale PDEs solvers based on flexible-order finite difference approximations on structured regular grids. The library...... is designed with a high abstraction interface to improve developer productivity. The library is based on modern template-based design concepts as described in Glimberg, Engsig-Karup, Nielsen & Dammann (2013). The library utilizes heterogeneous CPU/GPU environments in order to maximize computational throughput...
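
    As an illustration of the kind of kernel such a library generates, the sketch below implements a plain second-order central-difference Laplacian on a structured regular grid in C++. It is a minimal stand-in under stated assumptions and is not the library's actual interface, which is template-based and supports flexible-order stencils and heterogeneous CPU/GPU back ends.

        // Illustrative second-order central-difference Laplacian on a
        // structured regular grid; not the library's actual API.
        #include <cstddef>
        #include <vector>

        void laplacian_2d(const std::vector<double>& u, std::vector<double>& lap,
                          std::size_t nx, std::size_t ny, double h) {
            const double inv_h2 = 1.0 / (h * h);
            for (std::size_t j = 1; j + 1 < ny; ++j) {
                for (std::size_t i = 1; i + 1 < nx; ++i) {
                    const std::size_t k = j * nx + i;   // row-major index
                    lap[k] = (u[k - 1] + u[k + 1] + u[k - nx] + u[k + nx]
                              - 4.0 * u[k]) * inv_h2;
                }
            }
        }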

  3. Accelerating Spaceborne SAR Imaging Using Multiple CPU/GPU Deep Collaborative Computing

    Directory of Open Access Journals (Sweden)

    Fan Zhang

    2016-04-01

    Full Text Available With the development of synthetic aperture radar (SAR) technologies in recent years, the huge amount of remote sensing data brings challenges for real-time imaging processing. Therefore, high performance computing (HPC) methods have been presented to accelerate SAR imaging, especially GPU-based methods. In the classical GPU-based imaging algorithm, the GPU is employed to accelerate image processing by massively parallel computing, and the CPU is only used to perform auxiliary work such as data input/output (IO). However, the computing capability of the CPU is ignored and underestimated. In this work, a new deep collaborative SAR imaging method based on multiple CPUs/GPUs is proposed to achieve real-time SAR imaging. Through the proposed task partitioning and scheduling strategy, the whole image can be generated with deep collaborative multiple CPU/GPU computing. In the CPU parallel imaging part, the advanced vector extension (AVX) method is first introduced into the multi-core CPU parallel method for higher efficiency. As for the GPU parallel imaging, not only are the bottlenecks of limited memory and frequent data transfers overcome, but several optimization strategies, such as streaming and parallel pipelining, are also applied. Experimental results demonstrate that the deep CPU/GPU collaborative imaging method improves the efficiency of SAR imaging by a factor of 270 compared with a single-core CPU and realizes real-time imaging, in that the imaging rate outperforms the raw data generation rate.

  4. Accelerating Spaceborne SAR Imaging Using Multiple CPU/GPU Deep Collaborative Computing.

    Science.gov (United States)

    Zhang, Fan; Li, Guojun; Li, Wei; Hu, Wei; Hu, Yuxin

    2016-04-07

    With the development of synthetic aperture radar (SAR) technologies in recent years, the huge amount of remote sensing data brings challenges for real-time imaging processing. Therefore, high performance computing (HPC) methods have been presented to accelerate SAR imaging, especially GPU-based methods. In the classical GPU-based imaging algorithm, the GPU is employed to accelerate image processing by massively parallel computing, and the CPU is only used to perform auxiliary work such as data input/output (IO). However, the computing capability of the CPU is ignored and underestimated. In this work, a new deep collaborative SAR imaging method based on multiple CPUs/GPUs is proposed to achieve real-time SAR imaging. Through the proposed task partitioning and scheduling strategy, the whole image can be generated with deep collaborative multiple CPU/GPU computing. In the CPU parallel imaging part, the advanced vector extension (AVX) method is first introduced into the multi-core CPU parallel method for higher efficiency. As for the GPU parallel imaging, not only are the bottlenecks of limited memory and frequent data transfers overcome, but several optimization strategies, such as streaming and parallel pipelining, are also applied. Experimental results demonstrate that the deep CPU/GPU collaborative imaging method improves the efficiency of SAR imaging by a factor of 270 compared with a single-core CPU and realizes real-time imaging, in that the imaging rate outperforms the raw data generation rate.
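
    The abstract does not give the CPU kernels themselves; the sketch below only illustrates the style of AVX vectorization it refers to, applying a per-sample weighting window to a range line with 256-bit intrinsics plus a scalar tail. The function and array names are hypothetical and are not taken from the paper (compile with AVX enabled, e.g. -mavx).

        // Illustrative AVX kernel: multiply each sample of a range line by a
        // precomputed window, 8 single-precision floats per iteration.
        #include <immintrin.h>
        #include <cstddef>

        void apply_window(float* data, const float* window, std::size_t n) {
            std::size_t i = 0;
            for (; i + 8 <= n; i += 8) {
                __m256 d = _mm256_loadu_ps(data + i);
                __m256 w = _mm256_loadu_ps(window + i);
                _mm256_storeu_ps(data + i, _mm256_mul_ps(d, w));
            }
            for (; i < n; ++i) {      // scalar tail for remaining samples
                data[i] *= window[i];
            }
        }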

  5. Scientific Grand Challenges: Forefront Questions in Nuclear Science and the Role of High Performance Computing

    International Nuclear Information System (INIS)

    Khaleel, Mohammad A.

    2009-01-01

    This report is an account of the deliberations and conclusions of the workshop on 'Forefront Questions in Nuclear Science and the Role of High Performance Computing' held January 26-28, 2009, co-sponsored by the U.S. Department of Energy (DOE) Office of Nuclear Physics (ONP) and the DOE Office of Advanced Scientific Computing Research (ASCR). Representatives from the national and international nuclear physics communities, as well as from the high performance computing community, participated. The purpose of this workshop was to (1) identify forefront scientific challenges in nuclear physics and then determine which, if any, of these could be aided by high performance computing at the extreme scale; (2) establish how and why new high performance computing capabilities could address issues at the frontiers of nuclear science; (3) provide nuclear physicists the opportunity to influence the development of high performance computing; and (4) provide the nuclear physics community with plans for development of future high performance computing capability by DOE ASCR.

  6. Scientific Grand Challenges: Forefront Questions in Nuclear Science and the Role of High Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Khaleel, Mohammad A.

    2009-10-01

    This report is an account of the deliberations and conclusions of the workshop on "Forefront Questions in Nuclear Science and the Role of High Performance Computing" held January 26-28, 2009, co-sponsored by the U.S. Department of Energy (DOE) Office of Nuclear Physics (ONP) and the DOE Office of Advanced Scientific Computing Research (ASCR). Representatives from the national and international nuclear physics communities, as well as from the high performance computing community, participated. The purpose of this workshop was to 1) identify forefront scientific challenges in nuclear physics and then determine which, if any, of these could be aided by high performance computing at the extreme scale; 2) establish how and why new high performance computing capabilities could address issues at the frontiers of nuclear science; 3) provide nuclear physicists the opportunity to influence the development of high performance computing; and 4) provide the nuclear physics community with plans for development of future high performance computing capability by DOE ASCR.

  7. Computer-Supported Aids to Making Sense of Scientific Articles: Cognitive, Motivational, and Attitudinal Effects

    Science.gov (United States)

    Gegner, Julie A.; Mackay, Donald H. J.; Mayer, Richard E.

    2009-01-01

    High school students can access original scientific research articles on the Internet, but may have trouble understanding them. To address this problem of online literacy, the authors developed a computer-based prototype for guiding students' comprehension of scientific articles. High school students were asked to read an original scientific…

  8. Extraordinary Tools for Extraordinary Science: The Impact of SciDAC on Accelerator Science & Technology

    Energy Technology Data Exchange (ETDEWEB)

    Ryne, Robert D.

    2006-08-10

    Particle accelerators are among the most complex and versatile instruments of scientific exploration. They have enabled remarkable scientific discoveries and important technological advances that span all programs within the DOE Office of Science (DOE/SC). The importance of accelerators to the DOE/SC mission is evident from an examination of the DOE document, ''Facilities for the Future of Science: A Twenty-Year Outlook''. Of the 28 facilities listed, 13 involve accelerators. Thanks to SciDAC, a powerful suite of parallel simulation tools has been developed that represent a paradigm shift in computational accelerator science. Simulations that used to take weeks or more now take hours, and simulations that were once thought impossible are now performed routinely. These codes have been applied to many important projects of DOE/SC including existing facilities (the Tevatron complex, the Relativistic Heavy Ion Collider), facilities under construction (the Large Hadron Collider, the Spallation Neutron Source, the Linac Coherent Light Source), and to future facilities (the International Linear Collider, the Rare Isotope Accelerator). The new codes have also been used to explore innovative approaches to charged particle acceleration. These approaches, based on the extremely intense fields that can be present in lasers and plasmas, may one day provide a path to the outermost reaches of the energy frontier. Furthermore, they could lead to compact, high-gradient accelerators that would have huge consequences for US science and technology, industry, and medicine. In this talk I will describe the new accelerator modeling capabilities developed under SciDAC, the essential role of multi-disciplinary collaboration with applied mathematicians, computer scientists, and other IT experts in developing these capabilities, and provide examples of how the codes have been used to support DOE/SC accelerator projects.

  9. Integrating multiple scientific computing needs via a Private Cloud infrastructure

    International Nuclear Information System (INIS)

    Bagnasco, S; Berzano, D; Brunetti, R; Lusso, S; Vallero, S

    2014-01-01

    In a typical scientific computing centre, diverse applications coexist and share a single physical infrastructure. An underlying Private Cloud facility eases the management and maintenance of heterogeneous use cases such as multipurpose or application-specific batch farms, Grid sites catering to different communities, parallel interactive data analysis facilities and others. It allows resources to be dynamically and efficiently allocated to any application and the virtual machines to be tailored to the applications' requirements. Furthermore, the maintenance of large deployments of complex and rapidly evolving middleware and application software is eased by the use of virtual images and contextualization techniques; for example, rolling updates can be performed easily and with minimal downtime. In this contribution we describe the Private Cloud infrastructure at the INFN-Torino Computer Centre, which hosts a full-fledged WLCG Tier-2 site, a dynamically expandable PROOF-based Interactive Analysis Facility for the ALICE experiment at the CERN LHC, and several smaller scientific computing applications. The Private Cloud building blocks include the OpenNebula software stack, the GlusterFS filesystem (used in two different configurations for worker- and service-class hypervisors) and the OpenWRT Linux distribution (used for network virtualization). A future integration into a federated higher-level infrastructure is made possible by exposing commonly used APIs like EC2 and by using mainstream contextualization tools like CloudInit.

  10. Certainty in Stockpile Computing: Recommending a Verification and Validation Program for Scientific Software

    Energy Technology Data Exchange (ETDEWEB)

    Lee, J.R.

    1998-11-01

    As computing assumes a more central role in managing the nuclear stockpile, the consequences of an erroneous computer simulation could be severe. Computational failures are common in other endeavors and have caused project failures, significant economic loss, and loss of life. This report examines the causes of software failure and proposes steps to mitigate them. A formal verification and validation program for scientific software is recommended and described.

  11. DUBNA-GRAN SASSO: Satellite computer link

    International Nuclear Information System (INIS)

    Anon.

    1994-01-01

    In April a 64 kbit/s computer communication link was set up between the Joint Institute for Nuclear Research (JINR), Dubna (Russia) and Gran Sasso (Italy) Laboratories via nearby ground satellite stations using the INTELSAT V satellite. Previously the international community of Dubna's experimentalists and theorists (high energy physics, condensed matter physics, low energy nuclear and neutron physics, accelerator and applied nuclear physics) had no effective computer links with scientific centres worldwide

  12. Development and application of a computer model for large-scale flame acceleration experiments

    International Nuclear Information System (INIS)

    Marx, K.D.

    1987-07-01

    A new computational model for large-scale premixed flames is developed and applied to the simulation of flame acceleration experiments. The primary objective is to circumvent the necessity for resolving turbulent flame fronts; this is imperative because of the relatively coarse computational grids which must be used in engineering calculations. The essence of the model is to artificially thicken the flame by increasing the appropriate diffusivities and decreasing the combustion rate, but to do this in such a way that the burn velocity varies with pressure, temperature, and turbulence intensity according to prespecified phenomenological characteristics. The model is particularly aimed at implementation in computer codes which simulate compressible flows. To this end, it is applied to the two-dimensional simulation of hydrogen-air flame acceleration experiments in which the flame speeds and gas flow velocities attain or exceed the speed of sound in the gas. It is shown that many of the features of the flame trajectories and pressure histories in the experiments are simulated quite well by the model. Using the comparison of experimental and computational results as a guide, some insight is developed into the processes which occur in such experiments. 34 refs., 25 figs., 4 tabs
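
    The thickening device described above can be summarized by a standard scaling argument (an assumption-level sketch, not the report's specific phenomenological correlations): for a laminar-like flame structure the burn velocity and flame thickness scale as

        s_L \propto \sqrt{D\,\dot{\omega}}, \qquad \delta \propto \sqrt{D/\dot{\omega}},

    so multiplying the diffusivities by a factor F while dividing the reaction rate by F,

        D \to F D, \qquad \dot{\omega} \to \dot{\omega}/F \quad\Longrightarrow\quad s_L \to s_L, \qquad \delta \to F\,\delta,

    leaves the burn velocity unchanged while thickening the flame enough to be resolved on a coarse engineering grid.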

  13. Topic 14+16: High-performance and scientific applications and extreme-scale computing (Introduction)

    KAUST Repository

    Downes, Turlough P.; Roller, Sabine P.; Seitsonen, Ari Paavo; Valcke, Sophie; Keyes, David E.; Sawley, Marie Christine; Schulthess, Thomas C.; Shalf, John M.

    2013-01-01

    and algorithms to address the varied, complex and increasing challenges of modern research throughout both the "hard" and "soft" sciences. This necessitates being able to use large numbers of compute nodes, many of which are equipped with accelerators

  14. Can Accelerators Accelerate Learning?

    International Nuclear Information System (INIS)

    Santos, A. C. F.; Fonseca, P.; Coelho, L. F. S.

    2009-01-01

    The 'Young Talented' education program developed by the Brazilian State Funding Agency (FAPERJ)[1] makes it possible for students from public high schools to perform activities in scientific laboratories. In the Atomic and Molecular Physics Laboratory at the Federal University of Rio de Janeiro (UFRJ), the students are confronted with modern research tools like the 1.7 MV ion accelerator. Being a user-friendly machine, the accelerator is easily managed by the students, who can perform simple hands-on activities, stimulating their interest in physics and bringing them close to modern laboratory techniques.

  15. Can Accelerators Accelerate Learning?

    Science.gov (United States)

    Santos, A. C. F.; Fonseca, P.; Coelho, L. F. S.

    2009-03-01

    The 'Young Talented' education program developed by the Brazilian State Funding Agency (FAPERJ) [1] makes it possible for students from public high schools to perform activities in scientific laboratories. In the Atomic and Molecular Physics Laboratory at the Federal University of Rio de Janeiro (UFRJ), the students are confronted with modern research tools like the 1.7 MV ion accelerator. Being a user-friendly machine, the accelerator is easily managed by the students, who can perform simple hands-on activities, stimulating their interest in physics and bringing them close to modern laboratory techniques.

  16. Accelerating Scientific Advancement for Pediatric Rare Lung Disease Research. Report from a National Institutes of Health-NHLBI Workshop, September 3 and 4, 2015.

    Science.gov (United States)

    Young, Lisa R; Trapnell, Bruce C; Mandl, Kenneth D; Swarr, Daniel T; Wambach, Jennifer A; Blaisdell, Carol J

    2016-12-01

    Pediatric rare lung disease (PRLD) is a term that refers to a heterogeneous group of rare disorders in children. In recent years, this field has experienced significant progress marked by scientific discoveries, multicenter and interdisciplinary collaborations, and efforts of patient advocates. Although genetic mechanisms underlie many PRLDs, pathogenesis remains uncertain for many of these disorders. Furthermore, epidemiology and natural history are insufficiently defined, and therapies are limited. To develop strategies to accelerate scientific advancement for PRLD research, the NHLBI of the National Institutes of Health convened a strategic planning workshop on September 3 and 4, 2015. The workshop brought together a group of scientific experts, intramural and extramural investigators, and advocacy groups with the following objectives: (1) to discuss the current state of PRLD research; (2) to identify scientific gaps and barriers to increasing research and improving outcomes for PRLDs; (3) to identify technologies, tools, and reagents that could be leveraged to accelerate advancement of research in this field; and (4) to develop priorities for research aimed at improving patient outcomes and quality of life. This report summarizes the workshop discussion and provides specific recommendations to guide future research in PRLD.

  17. Acceleration of iterative tomographic reconstruction using graphics processors

    International Nuclear Information System (INIS)

    Belzunce, M.A.; Osorio, A.; Verrastro, C.A.

    2009-01-01

    Iterative algorithms for image reconstruction in 3D Positron Emission Tomography have been shown to produce images of better quality than analytical methods. However, these algorithms are computationally expensive. New graphics processing units (GPUs) provide high performance at low cost, together with programming tools that make it possible to execute parallel algorithms easily in scientific applications. In this work, we aim to accelerate image reconstruction algorithms for 3D PET by using a GPU. A parallel implementation of the 3D ML-EM algorithm was developed using the Siddon algorithm as projector and back-projector. Results show that accelerations of more than one order of magnitude can be achieved while keeping similar image quality. (author)
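
    For context, the ML-EM update that such a GPU implementation parallelizes over lines of response and voxels has the standard form below (written here from the general literature, not copied from the paper), with y_i the measured counts in detector pair i and a_{ij} the system-matrix element for voxel j obtained from the Siddon ray tracer:

        x_j^{(n+1)} = \frac{x_j^{(n)}}{\sum_i a_{ij}} \sum_i a_{ij}\, \frac{y_i}{\sum_k a_{ik}\, x_k^{(n)}}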

  18. Space and Earth Sciences, Computer Systems, and Scientific Data Analysis Support, Volume 1

    Science.gov (United States)

    Estes, Ronald H. (Editor)

    1993-01-01

    This Final Progress Report covers the specific technical activities of Hughes STX Corporation for the last contract triannual period of 1 June through 30 Sep. 1993, in support of assigned task activities at Goddard Space Flight Center (GSFC). It also provides a brief summary of work throughout the contract period of performance on each active task. Technical activity is presented in Volume 1, while financial and level-of-effort data is presented in Volume 2. Technical support was provided to all Division and Laboratories of Goddard's Space Sciences and Earth Sciences Directorates. Types of support include: scientific programming, systems programming, computer management, mission planning, scientific investigation, data analysis, data processing, data base creation and maintenance, instrumentation development, and management services. Mission and instruments supported include: ROSAT, Astro-D, BBXRT, XTE, AXAF, GRO, COBE, WIND, UIT, SMM, STIS, HEIDI, DE, URAP, CRRES, Voyagers, ISEE, San Marco, LAGEOS, TOPEX/Poseidon, Pioneer-Venus, Galileo, Cassini, Nimbus-7/TOMS, Meteor-3/TOMS, FIFE, BOREAS, TRMM, AVHRR, and Landsat. Accomplishments include: development of computing programs for mission science and data analysis, supercomputer applications support, computer network support, computational upgrades for data archival and analysis centers, end-to-end management for mission data flow, scientific modeling and results in the fields of space and Earth physics, planning and design of GSFC VO DAAC and VO IMS, fabrication, assembly, and testing of mission instrumentation, and design of mission operations center.

  19. Computer-assisted estimating for the Los Alamos Scientific Laboratory

    International Nuclear Information System (INIS)

    Spooner, J.E.

    1976-02-01

    An analysis is made of the cost estimating system currently in use at the Los Alamos Scientific Laboratory (LASL) and the benefits of computer assistance are evaluated. A computer-assisted estimating system (CAE) is proposed for LASL. CAE can decrease turnaround and provide more flexible response to management requests for cost information and analyses. It can enhance value optimization at the design stage, improve cost control and change-order justification, and widen the use of cost information in the design process. CAE costs are not well defined at this time although they appear to break even with present operations. It is recommended that a CAE system description be submitted for contractor consideration and bid while LASL system development continues concurrently

  20. Accelerating MATLAB with GPU computing a primer with examples

    CERN Document Server

    Suh, Jung W

    2013-01-01

    Beyond simulation and algorithm development, many developers increasingly use MATLAB even for product deployment in computationally heavy fields. This often demands that MATLAB codes run faster by leveraging the distributed parallelism of Graphics Processing Units (GPUs). While MATLAB successfully provides high-level functions as a simulation tool for rapid prototyping, the underlying details and knowledge needed for utilizing GPUs make MATLAB users hesitate to step into it. Accelerating MATLAB with GPUs offers a primer on bridging this gap. Starting with the basics, setting up MATLAB for

  1. Proceedings of the conference on computer codes and the linear accelerator community

    International Nuclear Information System (INIS)

    Cooper, R.K.

    1990-07-01

    The conference whose proceedings you are reading was envisioned as the second in a series, the first having been held in San Diego in January 1988. The intended participants were those people who are actively involved in writing and applying computer codes for the solution of problems related to the design and construction of linear accelerators. The first conference reviewed many of the codes both extant and under development. This second conference provided an opportunity to update the status of those codes, and to provide a forum in which emerging new 3D codes could be described and discussed. The afternoon poster session on the second day of the conference provided an opportunity for extended discussion. All in all, this conference was felt to be quite a useful interchange of ideas and developments in the field of 3D calculations, parallel computation, higher-order optics calculations, and code documentation and maintenance for the linear accelerator community. A third conference is planned

  2. Proceedings of the conference on computer codes and the linear accelerator community

    Energy Technology Data Exchange (ETDEWEB)

    Cooper, R.K. (comp.)

    1990-07-01

    The conference whose proceedings you are reading was envisioned as the second in a series, the first having been held in San Diego in January 1988. The intended participants were those people who are actively involved in writing and applying computer codes for the solution of problems related to the design and construction of linear accelerators. The first conference reviewed many of the codes both extant and under development. This second conference provided an opportunity to update the status of those codes, and to provide a forum in which emerging new 3D codes could be described and discussed. The afternoon poster session on the second day of the conference provided an opportunity for extended discussion. All in all, this conference was felt to be quite a useful interchange of ideas and developments in the field of 3D calculations, parallel computation, higher-order optics calculations, and code documentation and maintenance for the linear accelerator community. A third conference is planned.

  3. Strategic Plan for a Scientific Cloud Computing infrastructure for Europe

    CERN Document Server

    Lengert, Maryline

    2011-01-01

    Here we present the vision, concept and direction for forming a European Industrial Strategy for a Scientific Cloud Computing Infrastructure to be implemented by 2020. This will be the framework for decisions and for securing support and approval in establishing, initially, an R&D European Cloud Computing Infrastructure that serves the needs of the European Research Area (ERA) and Space Agencies. Beyond this initial user base, the Cloud Infrastructure will have the potential to evolve to provide similar services to a broad range of customers, including government and SMEs. We explain how this plan aims to support the broader strategic goals of our organisations and identify the benefits to be realised by adopting an industrial Cloud Computing model. We also outline the prerequisites and commitment needed to achieve these objectives.

  4. Mastering scientific computing with R

    CERN Document Server

    Gerrard, Paul

    2015-01-01

    If you want to learn how to quantitatively answer scientific questions for practical purposes using the powerful R language and the open source R tool ecosystem, this book is ideal for you. It is suited to scientists who understand scientific concepts, know a little R, and want to start applying R to answer empirical scientific questions. Some R exposure is helpful, but not compulsory.

  5. Extraordinary tools for extraordinary science: the impact of SciDAC on accelerator science and technology

    International Nuclear Information System (INIS)

    Ryne, Robert D

    2006-01-01

    Particle accelerators are among the most complex and versatile instruments of scientific exploration. They have enabled remarkable scientific discoveries and important technological advances that span all programs within the DOE Office of Science (DOE/SC). The importance of accelerators to the DOE/SC mission is evident from an examination of the DOE document, 'Facilities for the Future of Science: A Twenty-Year Outlook'. Of the 28 facilities listed, 13 involve accelerators. Thanks to SciDAC, a powerful suite of parallel simulation tools has been developed that represent a paradigm shift in computational accelerator science. Simulations that used to take weeks or more now take hours, and simulations that were once thought impossible are now performed routinely. These codes have been applied to many important projects of DOE/SC including existing facilities (the Tevatron complex, the Relativistic Heavy Ion Collider), facilities under construction (the Large Hadron Collider, the Spallation Neutron Source, the Linac Coherent Light Source), and to future facilities (the International Linear Collider, the Rare Isotope Accelerator). The new codes have also been used to explore innovative approaches to charged particle acceleration. These approaches, based on the extremely intense fields that can be present in lasers and plasmas, may one day provide a path to the outermost reaches of the energy frontier. Furthermore, they could lead to compact, high-gradient accelerators that would have huge consequences for US science and technology, industry, and medicine. In this talk I will describe the new accelerator modeling capabilities developed under SciDAC, the essential role of multi-disciplinary collaboration with applied mathematicians, computer scientists, and other IT experts in developing these capabilities, and provide examples of how the codes have been used to support DOE/SC accelerator projects

  6. Extraordinary tools for extraordinary science: the impact of SciDAC on accelerator science and technology

    Science.gov (United States)

    Ryne, Robert D.

    2006-09-01

    Particle accelerators are among the most complex and versatile instruments of scientific exploration. They have enabled remarkable scientific discoveries and important technological advances that span all programs within the DOE Office of Science (DOE/SC). The importance of accelerators to the DOE/SC mission is evident from an examination of the DOE document, ''Facilities for the Future of Science: A Twenty-Year Outlook.'' Of the 28 facilities listed, 13 involve accelerators. Thanks to SciDAC, a powerful suite of parallel simulation tools has been developed that represent a paradigm shift in computational accelerator science. Simulations that used to take weeks or more now take hours, and simulations that were once thought impossible are now performed routinely. These codes have been applied to many important projects of DOE/SC including existing facilities (the Tevatron complex, the Relativistic Heavy Ion Collider), facilities under construction (the Large Hadron Collider, the Spallation Neutron Source, the Linac Coherent Light Source), and to future facilities (the International Linear Collider, the Rare Isotope Accelerator). The new codes have also been used to explore innovative approaches to charged particle acceleration. These approaches, based on the extremely intense fields that can be present in lasers and plasmas, may one day provide a path to the outermost reaches of the energy frontier. Furthermore, they could lead to compact, high-gradient accelerators that would have huge consequences for US science and technology, industry, and medicine. In this talk I will describe the new accelerator modeling capabilities developed under SciDAC, the essential role of multi-disciplinary collaboration with applied mathematicians, computer scientists, and other IT experts in developing these capabilities, and provide examples of how the codes have been used to support DOE/SC accelerator projects.

  7. Extraordinary Tools for Extraordinary Science: The Impact of SciDAC on Accelerator Science and Technology

    International Nuclear Information System (INIS)

    Ryne, Robert D.

    2006-01-01

    Particle accelerators are among the most complex and versatile instruments of scientific exploration. They have enabled remarkable scientific discoveries and important technological advances that span all programs within the DOE Office of Science (DOE/SC). The importance of accelerators to the DOE/SC mission is evident from an examination of the DOE document, ''Facilities for the Future of Science: A Twenty-Year Outlook''. Of the 28 facilities listed, 13 involve accelerators. Thanks to SciDAC, a powerful suite of parallel simulation tools has been developed that represent a paradigm shift in computational accelerator science. Simulations that used to take weeks or more now take hours, and simulations that were once thought impossible are now performed routinely. These codes have been applied to many important projects of DOE/SC including existing facilities (the Tevatron complex, the Relativistic Heavy Ion Collider), facilities under construction (the Large Hadron Collider, the Spallation Neutron Source, the Linac Coherent Light Source), and to future facilities (the International Linear Collider, the Rare Isotope Accelerator). The new codes have also been used to explore innovative approaches to charged particle acceleration. These approaches, based on the extremely intense fields that can be present in lasers and plasmas, may one day provide a path to the outermost reaches of the energy frontier. Furthermore, they could lead to compact, high-gradient accelerators that would have huge consequences for US science and technology, industry, and medicine. In this talk I will describe the new accelerator modeling capabilities developed under SciDAC, the essential role of multi-disciplinary collaboration with applied mathematicians, computer scientists, and other IT experts in developing these capabilities, and provide examples of how the codes have been used to support DOE/SC accelerator projects

  8. ACE3P Computations of Wakefield Coupling in the CLIC Two-Beam Accelerator

    International Nuclear Information System (INIS)

    Candel, Arno

    2010-01-01

    The Compact Linear Collider (CLIC) provides a path to a multi-TeV accelerator to explore the energy frontier of High Energy Physics. Its novel two-beam accelerator concept envisions rf power transfer to the accelerating structures from a separate high-current decelerator beam line consisting of power extraction and transfer structures (PETS). It is critical to numerically verify the fundamental and higher-order mode properties in and between the two beam lines with high accuracy and confidence. To solve these large-scale problems, SLAC's parallel finite element electromagnetic code suite ACE3P is employed. Using curvilinear conformal meshes and higher-order finite element vector basis functions, unprecedented accuracy and computational efficiency are achieved, enabling high-fidelity modeling of complex detuned structures such as the CLIC TD24 accelerating structure. In this paper, time-domain simulations of wakefield coupling effects in the combined system of PETS and the TD24 structures are presented. The results will help to identify potential issues and provide new insights on the design, leading to further improvements on the novel CLIC two-beam accelerator scheme.

  9. Cloud Bursting with GlideinWMS: Means to satisfy ever increasing computing needs for Scientific Workflows

    International Nuclear Information System (INIS)

    Mhashilkar, Parag; Tiradani, Anthony; Holzman, Burt; Larson, Krista; Sfiligoi, Igor; Rynge, Mats

    2014-01-01

    Scientific communities have been at the forefront of adopting new technologies and methodologies in computing. Scientific computing has influenced how science is done today, achieving breakthroughs that were impossible several decades ago. For the past decade, several such communities in the Open Science Grid (OSG) and the European Grid Infrastructure (EGI) have been using GlideinWMS to run complex application workflows and effectively share computational resources over the grid. GlideinWMS is a pilot-based workload management system (WMS) that creates, on demand, a dynamically sized overlay HTCondor batch system on grid resources. At present, the computational resources shared over the grid are just adequate to sustain the computing needs. We envision that the complexity of the science driven by 'Big Data' will further push the need for computational resources. To fulfill their increasing demands and/or to run specialized workflows, some of the big communities like CMS are investigating the use of cloud computing as Infrastructure-as-a-Service (IaaS) with GlideinWMS as a potential alternative to fill the void. Similarly, communities with no previous access to computing resources can use GlideinWMS to set up a batch system on the cloud infrastructure. To enable this, the architecture of GlideinWMS has been extended to support interfacing GlideinWMS with different scientific and commercial cloud providers like HLT, FutureGrid, FermiCloud and Amazon EC2. In this paper, we describe a solution for cloud bursting with GlideinWMS. The paper describes the approach, architectural changes and lessons learned while enabling support for cloud infrastructures in GlideinWMS.

  10. 10th International Conference on Scientific Computing in Electrical Engineering

    CERN Document Server

    Clemens, Markus; Günther, Michael; Maten, E

    2016-01-01

    This book is a collection of selected papers presented at the 10th International Conference on Scientific Computing in Electrical Engineering (SCEE), held in Wuppertal, Germany in 2014. The book is divided into five parts, reflecting the main directions of SCEE 2014: 1. Device Modeling, Electric Circuits and Simulation, 2. Computational Electromagnetics, 3. Coupled Problems, 4. Model Order Reduction, and 5. Uncertainty Quantification. Each part starts with a general introduction followed by the actual papers. The aim of the SCEE 2014 conference was to bring together scientists from academia and industry, mathematicians, electrical engineers, computer scientists, and physicists, with the goal of fostering intensive discussions on industrially relevant mathematical problems, with an emphasis on the modeling and numerical simulation of electronic circuits and devices, electromagnetic fields, and coupled problems. The methodological focus was on model order reduction and uncertainty quantification.

  11. Department of Accelerator Physics and Technology: Overview

    International Nuclear Information System (INIS)

    Pachan, M.

    2001-01-01

    Full text: In view of the limited number of scientific and technical staff, it was necessary to focus the activity on the most important subjects and to keep a balance between current duties and the development of future projects. The dominant item was the realisation of research and design work in the Ordered Project for a New Therapeutical Accelerator with two photon beam energies, 6 and 15 MeV. During the reported year, the main efforts were oriented on: - computational and experimental work on optimization of the electron gun parameters and electron optics in the injection system for the accelerating structure, - calculation and modelling of a standing-wave, S-band accelerating structure to achieve a broad range of electron energy variation with good phase acceptance and a narrow energy spectrum of the output beam, - calculation and design of the beam focusing and transport system, with deflection of the output beam through 270° in an achromatic sector magnet, - design and modelling of the microwave power system, with pilot generator, 6 MW klystron amplifier, pulse modulator, waveguide system, four-port circulator and automatic frequency control, - preparatory work on metrological procedures and apparatus for accelerated-beam diagnostics, comprising measurements of energy spectrum, beam intensity, transmission factor, leakage radiation, and other important beam parameters. Other important subjects worth mentioning are: - Advances in forming and metrology of narrow X-ray photon beams, dedicated to stereotactic radiosurgery and radiotherapy, - Adaptation of a new version of the EGS-4 MC-type code for computer simulation of dose distribution in therapeutic beams, - Participation in selected items of the TESLA Project in cooperation with DESY - Hamburg, - theory and computer simulation of higher-order modes in superconducting accelerating structures, - technological research on methods and apparatus for thin-layer coating of r.f. resonators and subunits in transmission circuits, - Conceptual studies of proposed new

  12. Software for virtual accelerator designing

    International Nuclear Information System (INIS)

    Kulabukhova, N.; Ivanov, A.; Korkhov, V.; Lazarev, A.

    2012-01-01

    The article discusses appropriate technologies for a software implementation of the Virtual Accelerator. The Virtual Accelerator is considered as a set of services and tools enabling transparent execution of computational software for modeling beam dynamics in accelerators on distributed computing resources. Distributed storage and information processing facilities utilized by the Virtual Accelerator make use of the Service-Oriented Architecture (SOA) according to a cloud computing paradigm. Control system tool-kits (such as EPICS and TANGO), computing modules (including high-performance computing), realization of the GUI with existing frameworks and visualization of the data are discussed in the paper. The presented research consists of a software analysis for the realization of interaction between all levels of the Virtual Accelerator and some samples of middleware implementation. A set of servers and clusters at St. Petersburg State University forms the infrastructure of the computing environment for Virtual Accelerator design. The use of component-oriented technology for the realization of interaction between Virtual Accelerator levels is proposed. The article concludes with an overview and substantiation of the choice of technologies that will be used for the design and implementation of the Virtual Accelerator. (authors)

  13. Using Just-in-Time Information to Support Scientific Discovery Learning in a Computer-Based Simulation

    Science.gov (United States)

    Hulshof, Casper D.; de Jong, Ton

    2006-01-01

    Students encounter many obstacles during scientific discovery learning with computer-based simulations. It is hypothesized that an effective type of support, one that does not interfere with the scientific discovery learning process, should be delivered on a "just-in-time" basis. This study explores the effect of facilitating access to…

  14. GeauxDock: Accelerating Structure-Based Virtual Screening with Heterogeneous Computing

    Science.gov (United States)

    Fang, Ye; Ding, Yun; Feinstein, Wei P.; Koppelman, David M.; Moreno, Juana; Jarrell, Mark; Ramanujam, J.; Brylinski, Michal

    2016-01-01

    Computational modeling of drug binding to proteins is an integral component of direct drug design. Particularly, structure-based virtual screening is often used to perform large-scale modeling of putative associations between small organic molecules and their pharmacologically relevant protein targets. Because of a large number of drug candidates to be evaluated, an accurate and fast docking engine is a critical element of virtual screening. Consequently, highly optimized docking codes are of paramount importance for the effectiveness of virtual screening methods. In this communication, we describe the implementation, tuning and performance characteristics of GeauxDock, a recently developed molecular docking program. GeauxDock is built upon the Monte Carlo algorithm and features a novel scoring function combining physics-based energy terms with statistical and knowledge-based potentials. Developed specifically for heterogeneous computing platforms, the current version of GeauxDock can be deployed on modern, multi-core Central Processing Units (CPUs) as well as massively parallel accelerators, Intel Xeon Phi and NVIDIA Graphics Processing Unit (GPU). First, we carried out a thorough performance tuning of the high-level framework and the docking kernel to produce a fast serial code, which was then ported to shared-memory multi-core CPUs yielding a near-ideal scaling. Further, using Xeon Phi gives 1.9× performance improvement over a dual 10-core Xeon CPU, whereas the best GPU accelerator, GeForce GTX 980, achieves a speedup as high as 3.5×. On that account, GeauxDock can take advantage of modern heterogeneous architectures to considerably accelerate structure-based virtual screening applications. GeauxDock is open-sourced and publicly available at www.brylinski.org/geauxdock and https://figshare.com/articles/geauxdock_tar_gz/3205249. PMID:27420300
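
    The abstract does not spell out GeauxDock's exact sampling scheme, but as an illustration of the Monte Carlo machinery it builds on, a generic Metropolis acceptance step over a docking energy score might look like the following sketch; the score and proposal functions and the temperature are hypothetical placeholders, not the program's actual API.

      import math, random

      def metropolis_step(score, pose, propose, temperature=1.0):
          """One Metropolis move: propose a perturbed pose, then accept or reject it."""
          candidate = propose(pose)
          delta = score(candidate) - score(pose)   # change in docking energy score
          if delta <= 0 or random.random() < math.exp(-delta / temperature):
              return candidate   # downhill moves always accepted, uphill ones occasionally
          return pose            # rejected: keep the current pose

      # Tiny demo with a 1-D "pose" and a quadratic score (purely illustrative):
      new_pose = metropolis_step(lambda x: (x - 3.0) ** 2,
                                 0.0,
                                 lambda x: x + random.uniform(-0.5, 0.5))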

  15. GeauxDock: Accelerating Structure-Based Virtual Screening with Heterogeneous Computing.

    Directory of Open Access Journals (Sweden)

    Ye Fang

    Full Text Available Computational modeling of drug binding to proteins is an integral component of direct drug design. Particularly, structure-based virtual screening is often used to perform large-scale modeling of putative associations between small organic molecules and their pharmacologically relevant protein targets. Because of the large number of drug candidates to be evaluated, an accurate and fast docking engine is a critical element of virtual screening. Consequently, highly optimized docking codes are of paramount importance for the effectiveness of virtual screening methods. In this communication, we describe the implementation, tuning and performance characteristics of GeauxDock, a recently developed molecular docking program. GeauxDock is built upon the Monte Carlo algorithm and features a novel scoring function combining physics-based energy terms with statistical and knowledge-based potentials. Developed specifically for heterogeneous computing platforms, the current version of GeauxDock can be deployed on modern, multi-core Central Processing Units (CPUs) as well as massively parallel accelerators, Intel Xeon Phi and NVIDIA Graphics Processing Units (GPUs). First, we carried out a thorough performance tuning of the high-level framework and the docking kernel to produce a fast serial code, which was then ported to shared-memory multi-core CPUs yielding a near-ideal scaling. Further, using Xeon Phi gives a 1.9× performance improvement over a dual 10-core Xeon CPU, whereas the best GPU accelerator, GeForce GTX 980, achieves a speedup as high as 3.5×. On that account, GeauxDock can take advantage of modern heterogeneous architectures to considerably accelerate structure-based virtual screening applications. GeauxDock is open-sourced and publicly available at www.brylinski.org/geauxdock and https://figshare.com/articles/geauxdock_tar_gz/3205249.

  16. Combining Acceleration Techniques for Low-Dose X-Ray Cone Beam Computed Tomography Image Reconstruction.

    Science.gov (United States)

    Huang, Hsuan-Ming; Hsiao, Ing-Tsung

    2017-01-01

    Over the past decade, image quality in low-dose computed tomography has been greatly improved by various compressive sensing- (CS-) based reconstruction methods. However, these methods have some disadvantages including high computational cost and slow convergence rate. Many different speed-up techniques for CS-based reconstruction algorithms have been developed. The purpose of this paper is to propose a fast reconstruction framework that combines a CS-based reconstruction algorithm with several speed-up techniques. First, total difference minimization (TDM) was implemented using the soft-threshold filtering (STF). Second, we combined TDM-STF with the ordered subsets transmission (OSTR) algorithm for accelerating the convergence. To further speed up the convergence of the proposed method, we applied the power factor and the fast iterative shrinkage thresholding algorithm to OSTR and TDM-STF, respectively. Results obtained from simulation and phantom studies showed that many speed-up techniques could be combined to greatly improve the convergence speed of a CS-based reconstruction algorithm. More importantly, the increased computation time (≤10%) was minor as compared to the acceleration provided by the proposed method. In this paper, we have presented a CS-based reconstruction framework that combines several acceleration techniques. Both simulation and phantom studies provide evidence that the proposed method has the potential to satisfy the requirement of fast image reconstruction in practical CT.
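
    As an illustration of the soft-threshold filtering step named above, the sketch below applies the standard soft-threshold (shrinkage) operator used throughout compressive-sensing reconstruction; the array contents and the threshold value are hypothetical and not taken from the paper.

      import numpy as np

      def soft_threshold(x, lam):
          # Shrink each coefficient toward zero by lam; entries smaller than lam become zero.
          return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

      # Hypothetical total-difference (gradient) values and threshold:
      grad = np.array([-0.8, -0.05, 0.0, 0.3, 1.2])
      print(soft_threshold(grad, 0.1))   # zeroes the small entries, shrinks the rest by 0.1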

  17. Science Prospects And Benefits with Exascale Computing

    Energy Technology Data Exchange (ETDEWEB)

    Kothe, Douglas B [ORNL

    2007-12-01

    Scientific computation has come into its own as a mature technology in all fields of science. Never before have we been able to accurately anticipate, analyze, and plan for complex events that have not yet occurred, from the operation of a reactor running at 100 million degrees centigrade to the changing climate a century down the road. Combined with the more traditional approaches of theory and experiment, scientific computation provides a profound tool for insight and solution as we look at complex systems containing billions of components. Nevertheless, it cannot yet do all we would like. Much of scientific computation's potential remains untapped in areas such as materials science, Earth science, energy assurance, fundamental science, biology and medicine, engineering design, and national security because the scientific challenges are far too enormous and complex for the computational resources at hand. Many of these challenges are of immediate global importance. These challenges can be overcome by a revolution in computing that promises real advancement at a greatly accelerated pace. Planned petascale systems (capable of a petaflop, or 10^15 floating point operations per second) in the next 3 years and exascale systems (capable of an exaflop, or 10^18 floating point operations per second) in the next decade will provide an unprecedented opportunity to attack these global challenges through modeling and simulation. Exascale computers, with a processing capability similar to that of the human brain, will enable the unraveling of longstanding scientific mysteries and present new opportunities. Table ES.1 summarizes these scientific opportunities, their key application areas, and the goals and associated benefits that would result from solutions afforded by exascale computing.

  18. Resilient and Robust High Performance Computing Platforms for Scientific Computing Integrity

    Energy Technology Data Exchange (ETDEWEB)

    Jin, Yier [Univ. of Central Florida, Orlando, FL (United States)

    2017-07-14

    As technology advances, computer systems are subject to increasingly sophisticated cyber-attacks that compromise both their security and integrity. High performance computing platforms used in commercial and scientific applications involving sensitive, or even classified data, are frequently targeted by powerful adversaries. This situation is made worse by a lack of fundamental security solutions that both perform efficiently and are effective at preventing threats. Current security solutions fail to address the threat landscape and ensure the integrity of sensitive data. As challenges rise, both private and public sectors will require robust technologies to protect their computing infrastructure. The research outcomes from this project aim to address these challenges. For example, we present LAZARUS, a novel technique to harden kernel Address Space Layout Randomization (KASLR) against paging-based side-channel attacks. In particular, our scheme allows for fine-grained protection of the virtual memory mappings that implement the randomization. We demonstrate the effectiveness of our approach by hardening a recent Linux kernel with LAZARUS, mitigating all of the previously presented side-channel attacks on KASLR. Our extensive evaluation shows that LAZARUS incurs only 0.943% overhead for standard benchmarks, and is therefore highly practical. We also introduce HA2lloc, a hardware-assisted allocator that is capable of leveraging an extended memory management unit to detect memory errors in the heap. We also perform testing using HA2lloc in a simulation environment and find that the approach is capable of preventing common memory vulnerabilities.

  19. Historic Learning Approach for Auto-tuning OpenACC Accelerated Scientific Applications

    KAUST Repository

    Siddiqui, Shahzeb

    2015-04-17

    The performance optimization of scientific applications usually requires an in-depth knowledge of the hardware and software. A performance tuning mechanism is suggested to automatically tune OpenACC parameters to adapt to the execution environment on a given system. A historic learning based methodology is suggested to prune the parameter search space for a more efficient auto-tuning process. This approach is applied to tune the OpenACC gang and vector clauses for a better mapping of the compute kernels onto the underlying architecture. Our experiments show a significant performance improvement against the default compiler parameters and drastic reduction in tuning time compared to a brute force search-based approach.
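
    A minimal sketch of the kind of parameter search such a tuner performs is shown below, assuming a hypothetical measure_runtime(gang, vector) helper that compiles and times the OpenACC kernel; the pruning rule shown (restricting the search to the neighbourhood of the best configuration recorded in earlier runs) is only a simplified stand-in for the paper's historic-learning methodology.

      import itertools

      def autotune(measure_runtime, gangs, vectors, history_best=None, factor=2):
          """Search (gang, vector) configurations; if a best configuration from
          earlier runs is known, restrict the search to its neighbourhood."""
          best_cfg, best_time = None, float("inf")
          for g, v in itertools.product(gangs, vectors):
              if history_best is not None:
                  hg, hv = history_best
                  near = hg / factor <= g <= hg * factor and hv / factor <= v <= hv * factor
                  if not near:
                      continue   # prune configurations far from the historical optimum
              t = measure_runtime(g, v)   # hypothetical helper: compile and time the kernel
              if t < best_time:
                  best_cfg, best_time = (g, v), t
          return best_cfg, best_time

      # Example call (time_kernel is a hypothetical benchmarking function):
      # best, t = autotune(time_kernel, [64, 128, 256, 512], [32, 64, 128, 256])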

  20. Automatic Beam Path Analysis of Laser Wakefield Particle Acceleration Data

    Energy Technology Data Exchange (ETDEWEB)

    Rubel, Oliver; Geddes, Cameron G.R.; Cormier-Michel, Estelle; Wu, Kesheng; Prabhat; Weber, Gunther H.; Ushizima, Daniela M.; Messmer, Peter; Hagen, Hans; Hamann, Bernd; Bethel, E. Wes

    2009-10-19

    Numerical simulations of laser wakefield particle accelerators play a key role in the understanding of the complex acceleration process and in the design of expensive experimental facilities. As the size and complexity of simulation output grows, an increasingly acute challenge is the practical need for computational techniques that aid in scientific knowledge discovery. To that end, we present a set of data-understanding algorithms that work in concert in a pipeline fashion to automatically locate and analyze high energy particle bunches undergoing acceleration in very large simulation datasets. These techniques work cooperatively by first identifying features of interest in individual timesteps, then integrating features across timesteps, and, based on the information derived, performing analysis of temporally dynamic features. This combination of techniques supports accurate detection of particle beams, enabling a deeper level of scientific understanding of physical phenomena than has been possible before. By combining efficient data analysis algorithms and state-of-the-art data management we enable high-performance analysis of extremely large particle datasets in 3D. We demonstrate the usefulness of our methods for a variety of 2D and 3D datasets and discuss the performance of our analysis pipeline.

  1. Convergence acceleration of two-phase flow calculations in FLICA-4. A thermal-hydraulic 3D computer code

    International Nuclear Information System (INIS)

    Toumi, I.

    1995-01-01

    Time requirements for 3D two-phase flow steady-state calculations are generally long. Usually, numerical methods for steady-state problems are iterative, consisting of time-like methods that are marched to a steady state. Based on the eigenvalue spectrum of the iteration matrix for various flow configurations, two convergence acceleration techniques are discussed: over-relaxation and eigenvalue annihilation. These methods were applied to accelerate the convergence of three-dimensional steady-state two-phase flow calculations within the FLICA-4 computer code. These acceleration methods are easy to implement and require no extra computer memory. Successful results are presented for various test problems, and savings of 30 to 50% in CPU time have been achieved. (author). 10 refs., 4 figs
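
    To illustrate the over-relaxation idea mentioned above in its simplest form, the sketch below accelerates a generic fixed-point iteration x ← G(x) by extrapolating each update with a relaxation factor omega > 1; the iteration function and factor are hypothetical, not those of FLICA-4.

      import numpy as np

      def over_relaxed_fixed_point(G, x0, omega=1.5, tol=1e-10, max_iter=1000):
          """Fixed-point iteration with over-relaxation: x_new = x + omega * (G(x) - x)."""
          x = np.asarray(x0, dtype=float)
          for _ in range(max_iter):
              x_new = x + omega * (G(x) - x)
              if np.linalg.norm(x_new - x) < tol:
                  return x_new
              x = x_new
          return x

      # Hypothetical linear example: x = 0.5 * x + 1 has fixed point x = 2.
      print(over_relaxed_fixed_point(lambda x: 0.5 * x + 1.0, np.array([0.0])))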

  2. Towards full automation of accelerators through computer control

    CERN Document Server

    Gamble, J; Kemp, D; Keyser, R; Koutchouk, Jean-Pierre; Martucci, P P; Tausch, Lothar A; Vos, L

    1980-01-01

    The computer control system of the Intersecting Storage Rings (ISR) at CERN has always laid emphasis on two particular operational aspects, the first being the reproducibility of machine conditions and the second that of giving the operators the possibility to work in terms of machine parameters such as the tune. Already certain phases of the operation are optimized by the control system, whilst others are automated with a minimum of manual intervention. The authors describe this present control system with emphasis on the existing automated facilities and the features of the control system which make it possible. It then discusses the steps needed to completely automate the operational procedure of accelerators. (7 refs).

  3. Towards full automation of accelerators through computer control

    International Nuclear Information System (INIS)

    Gamble, J.; Hemery, J.-Y.; Kemp, D.; Keyser, R.; Koutchouk, J.-P.; Martucci, P.; Tausch, L.; Vos, L.

    1980-01-01

    The computer control system of the Intersecting Storage Rings (ISR) at CERN has always laid emphasis on two particular operational aspects, the first being the reproducibility of machine conditions and the second that of giving the operators the possibility to work in terms of machine parameters such as the tune. Already certain phases of the operation are optimized by the control system, whilst others are automated with a minimum of manual intervention. The paper describes this present control system with emphasis on the existing automated facilities and the features of the control system which make it possible. It then discusses the steps needed to completely automate the operational procedure of accelerators. (Auth.)

  4. Space experiments with particle accelerators: SEPAC

    International Nuclear Information System (INIS)

    Obayashi, T.

    1978-01-01

    In this paper, the program of the space experiments with particle accelerators (SEPAC) is described. The SEPAC is to be prepared for the Space Shuttle/First Spacelab Mission. It is planned in SEPAC to carry out active and interactive experiments on and in the Earth's ionosphere and magnetosphere. It is also intended to make an initial performance test for the overall program of Spacelab/SEPAC experiments. The instruments to be used are electron beam accelerators, MPD arcjets, and associated diagnostic equipment. The main scientific objectives of the experiments are Vehicle Charge Neutralization, Beam Plasma Physics, and Beam Atmosphere Interactions. The SEPAC system consists of the following subsystems: accelerators, monitoring and diagnostic equipment, and control and data management equipment. The SEPAC functional objectives for experiment operations are SEPAC system checkout, EBA firing test, MPD firing test, electron beam experiments, plasma beam propagation, artificial aurora excitation, equatorial aerochemistry, electron echo experiment, E parallel B experiment, passive experiments, SEPAC system deactivation, and battery charging. Most experiment procedures are carried out by the pre-set computer program. (Kato, T.)

  5. Electromagnetic computer simulations of collective ion acceleration by a relativistic electron beam

    International Nuclear Information System (INIS)

    Galvez, M.; Gisler, G.R.

    1988-01-01

    A 2.5-dimensional electromagnetic particle-in-cell computer code is used to study the collective ion acceleration when a relativistic electron beam is injected into a drift tube partially filled with cold neutral plasma. The simulations of this system reveal that the ions are subject to electrostatic acceleration by an electrostatic potential that forms behind the head of the beam. This electrostatic potential develops soon after the beam is injected into the drift tube, drifts with the beam, and eventually settles to a fixed position. At later times, this electrostatic potential becomes a virtual cathode. When the permanent position of the electrostatic potential is at the edge of the plasma or further up, ions are accelerated forward and a unidirectional ion flow is obtained; otherwise a bidirectional ion flow occurs. The ions that achieve higher energy are those which drift with the negative potential. When the plasma density is varied, the simulations show that optimum acceleration occurs when the density ratio between the beam (n_b) and the plasma (n_0) is unity. Simulations were carried out by changing the ion mass. The results of these simulations corroborate the hypothesis that the ion acceleration mechanism is purely electrostatic, so that the ion acceleration depends inversely on the charged particle mass. The simulations also show that the maximum ion energy increased logarithmically with the electron beam energy and proportionally with the beam current
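
    The inverse-mass scaling noted above follows directly from the electrostatic equation of motion for an ion of charge q and mass m in an accelerating field E,

      a = qE / m

    so, for the same potential structure, lighter ions are accelerated more strongly, while the energy gained over a given potential drop depends only on the charge.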

  6. Tools for remote computing in accelerator control

    International Nuclear Information System (INIS)

    Anderssen, P.S.; Frammery, V.; Wilcke, R.

    1990-01-01

    In modern accelerator control systems, the intelligence of the equipment is distributed in the geographical and the logical sense. Control processes for a large variety of tasks reside in both the equipment and the control computers. Hence successful operation hinges on the availability and reliability of the communication infrastructure. The computers are interconnected by a communication system and use remote procedure calls and message passing for information exchange. These communication mechanisms need a well-defined convention, i.e. a protocol. They also require flexibility in both the setup and changes to the protocol specification. The network compiler is a tool which provides the programmer with a means of establishing such a protocol for his application. Input to the network compiler is a single interface description file provided by the programmer. This file is written according to a grammar, and completely specifies the interprocess communication interfaces. Passed through the network compiler, the interface description file automatically produces the additional source code needed for the protocol. Hence the programmer does not have to be concerned about the details of the communication calls. Any further additions and modifications are made easy, because all the information about the interface is kept in a single file. (orig.)

  7. Heterogeneous High Throughput Scientific Computing with APM X-Gene and Intel Xeon Phi

    CERN Document Server

    Abdurachmanov, David; Elmer, Peter; Eulisse, Giulio; Knight, Robert; Muzaffar, Shahzad

    2014-01-01

    Electrical power requirements will be a constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics. Performance-per-watt is a critical metric for the evaluation of computer architectures for cost-efficient computing. Additionally, future performance growth will come from heterogeneous, many-core, and high computing density platforms with specialized processors. In this paper, we examine the Intel Xeon Phi Many Integrated Cores (MIC) co-processor and Applied Micro X-Gene ARMv8 64-bit low-power server system-on-a-chip (SoC) solutions for scientific computing applications. We report our experience on software porting, performance and energy efficiency and evaluate the potential for use of such technologies in the context of distributed computing systems such as the Worldwide LHC Computing Grid (WLCG).

  8. Heterogeneous High Throughput Scientific Computing with APM X-Gene and Intel Xeon Phi

    Science.gov (United States)

    Abdurachmanov, David; Bockelman, Brian; Elmer, Peter; Eulisse, Giulio; Knight, Robert; Muzaffar, Shahzad

    2015-05-01

    Electrical power requirements will be a constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics. Performance-per-watt is a critical metric for the evaluation of computer architectures for cost-efficient computing. Additionally, future performance growth will come from heterogeneous, many-core, and high computing density platforms with specialized processors. In this paper, we examine the Intel Xeon Phi Many Integrated Cores (MIC) co-processor and Applied Micro X-Gene ARMv8 64-bit low-power server system-on-a-chip (SoC) solutions for scientific computing applications. We report our experience on software porting, performance and energy efficiency and evaluate the potential for use of such technologies in the context of distributed computing systems such as the Worldwide LHC Computing Grid (WLCG).

  9. Heterogeneous High Throughput Scientific Computing with APM X-Gene and Intel Xeon Phi

    International Nuclear Information System (INIS)

    Abdurachmanov, David; Bockelman, Brian; Elmer, Peter; Eulisse, Giulio; Muzaffar, Shahzad; Knight, Robert

    2015-01-01

    Electrical power requirements will be a constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics. Performance-per-watt is a critical metric for the evaluation of computer architectures for cost-efficient computing. Additionally, future performance growth will come from heterogeneous, many-core, and high computing density platforms with specialized processors. In this paper, we examine the Intel Xeon Phi Many Integrated Cores (MIC) co-processor and Applied Micro X-Gene ARMv8 64-bit low-power server system-on-a-chip (SoC) solutions for scientific computing applications. We report our experience on software porting, performance and energy efficiency and evaluate the potential for use of such technologies in the context of distributed computing systems such as the Worldwide LHC Computing Grid (WLCG). (paper)

  10. Secure Scientific Applications Scheduling Technique for Cloud Computing Environment Using Global League Championship Algorithm.

    Science.gov (United States)

    Abdulhamid, Shafi'i Muhammad; Abd Latiff, Muhammad Shafie; Abdul-Salaam, Gaddafi; Hussain Madni, Syed Hamid

    2016-01-01

    A cloud computing system is a huge cluster of interconnected servers residing in a datacenter and dynamically provisioned to clients on demand via a front-end interface. Scientific application scheduling in the cloud computing environment is identified as an NP-hard problem due to the dynamic nature of heterogeneous resources. Recently, a number of metaheuristic optimization schemes have been applied to address the challenges of application scheduling in the cloud system, without much emphasis on the issue of secure global scheduling. In this paper, a scientific application scheduling technique using the Global League Championship Algorithm (GBLCA) optimization technique is first presented for global task scheduling in the cloud environment. The experiment is carried out using the CloudSim simulator. The experimental results show that the proposed GBLCA technique produced a remarkable performance improvement in makespan, ranging between 14.44% and 46.41%. It also shows a significant reduction in the time taken to securely schedule applications, as parametrically measured in terms of the response time. In view of the experimental results, the proposed technique provides a better-quality scheduling solution, suitable for scientific application task execution in the cloud computing environment, than the MinMin, MaxMin, Genetic Algorithm (GA) and Ant Colony Optimization (ACO) scheduling techniques.
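
    For clarity, the makespan referred to above is simply the completion time of the last task in a schedule; a minimal sketch of how it might be computed for a task-to-VM assignment is shown below (all names and numbers are hypothetical, not taken from the paper).

      def makespan(assignment, task_runtime):
          """assignment: {task: vm}; task_runtime: {task: seconds on its assigned vm}."""
          finish = {}
          for task, vm in assignment.items():
              finish[vm] = finish.get(vm, 0.0) + task_runtime[task]
          return max(finish.values())

      # Hypothetical example: two VMs, three tasks.
      print(makespan({"t1": "vm1", "t2": "vm1", "t3": "vm2"},
                     {"t1": 4.0, "t2": 3.0, "t3": 6.0}))   # -> 7.0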

  11. Secure Scientific Applications Scheduling Technique for Cloud Computing Environment Using Global League Championship Algorithm

    Science.gov (United States)

    Abdulhamid, Shafi’i Muhammad; Abd Latiff, Muhammad Shafie; Abdul-Salaam, Gaddafi; Hussain Madni, Syed Hamid

    2016-01-01

    A cloud computing system is a huge cluster of interconnected servers residing in a datacenter and dynamically provisioned to clients on demand via a front-end interface. Scientific application scheduling in the cloud computing environment is identified as an NP-hard problem due to the dynamic nature of heterogeneous resources. Recently, a number of metaheuristic optimization schemes have been applied to address the challenges of application scheduling in the cloud system, without much emphasis on the issue of secure global scheduling. In this paper, a scientific application scheduling technique using the Global League Championship Algorithm (GBLCA) optimization technique is first presented for global task scheduling in the cloud environment. The experiment is carried out using the CloudSim simulator. The experimental results show that the proposed GBLCA technique produced a remarkable performance improvement in makespan, ranging between 14.44% and 46.41%. It also shows a significant reduction in the time taken to securely schedule applications, as parametrically measured in terms of the response time. In view of the experimental results, the proposed technique provides a better-quality scheduling solution, suitable for scientific application task execution in the cloud computing environment, than the MinMin, MaxMin, Genetic Algorithm (GA) and Ant Colony Optimization (ACO) scheduling techniques. PMID:27384239

  12. New Chicago-Indiana computer network will handle dataflow from world's largest scientific experiment

    CERN Multimedia

    2006-01-01

    "Massive quantities of data will soon begin flowing from the largest scientific instrument ever built into an international netword of computer centers, including one operated jointly by the University of Chicago and Indiana University." (1,5 page)

  13. Acceleration of Radiance for Lighting Simulation by Using Parallel Computing with OpenCL

    Energy Technology Data Exchange (ETDEWEB)

    Zuo, Wangda; McNeil, Andrew; Wetter, Michael; Lee, Eleanor

    2011-09-06

    We report on the acceleration of annual daylighting simulations for fenestration systems in the Radiance ray-tracing program. The algorithm was optimized to reduce both the redundant data input/output operations and the floating-point operations. To further accelerate the simulation speed, the calculation for matrix multiplications was implemented using parallel computing on a graphics processing unit. We used OpenCL, which is a cross-platform parallel programming language. Numerical experiments show that the combination of the above measures can speed up the annual daylighting simulations 101.7 times or 28.6 times when the sky vector has 146 or 2306 elements, respectively.
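
    The core operation being accelerated here is a large matrix product between precomputed daylight coefficients and sky vectors; a minimal NumPy sketch of that step is shown below (array shapes and names are hypothetical, and the real implementation runs the product on a GPU via OpenCL rather than with NumPy).

      import numpy as np

      # Hypothetical sizes: 1,000 sensor points, 146 sky patches (Tregenza-like subdivision).
      daylight_coeff = np.random.rand(1_000, 146)   # precomputed once by ray tracing
      sky_vectors = np.random.rand(146, 8_760)      # one sky vector per hour of the year

      # The accelerated step: a single dense matrix-matrix product giving the
      # annual illuminance time series at every sensor point.
      annual_illuminance = daylight_coeff @ sky_vectors   # shape (1_000, 8_760)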

  14. Plasma accelerators

    International Nuclear Information System (INIS)

    Bingham, R.; Angelis, U. de; Johnston, T.W.

    1991-01-01

    Recently, attention has focused on charged particle acceleration in a plasma by a fast, large-amplitude, longitudinal electron plasma wave. The plasma beat wave and plasma wakefield accelerators are two efficient ways of producing ultra-high accelerating gradients. Starting with the plasma beat wave accelerator (PBWA) and laser wakefield accelerator (LWFA) schemes and the plasma wakefield accelerator (PWFA), steady progress has been made in theory, simulations and experiments. Computations are presented for the study of the LWFA. (author)

  15. Acceleration of Cherenkov angle reconstruction with the new Intel Xeon/FPGA compute platform for the particle identification in the LHCb Upgrade

    Science.gov (United States)

    Faerber, Christian

    2017-10-01

    The LHCb experiment at the LHC will upgrade its detector by 2018/2019 to a ‘triggerless’ readout scheme, where all the readout electronics and several sub-detector parts will be replaced. The new readout electronics will be able to read out the detector at 40 MHz. This increases the data bandwidth from the detector down to the Event Filter farm to 40 TBit/s, which also has to be processed to select the interesting proton-proton collisions for later storage. Designing the architecture of a computing farm that can process this amount of data as efficiently as possible is a challenging task, and several compute accelerator technologies are being considered for use inside the new Event Filter farm. In the high performance computing sector, more and more FPGA compute accelerators are used to improve the compute performance and reduce the power consumption (e.g. in the Microsoft Catapult project and the Bing search engine). For the LHCb upgrade, the usage of an experimental FPGA accelerated computing platform in the Event Building or in the Event Filter farm is therefore also being considered and tested. This platform from Intel hosts a general CPU and a high performance FPGA linked via a high speed link, which for this platform is a QPI link. An accelerator is implemented on the FPGA. The system used is a two-socket platform from Intel with a Xeon CPU and an FPGA. The FPGA has cache-coherent memory access to the main memory of the server and can collaborate with the CPU. As a first step, a compute-intensive algorithm to reconstruct Cherenkov angles for the LHCb RICH particle identification was successfully ported in Verilog to the Intel Xeon/FPGA platform and accelerated by a factor of 35. The same algorithm was ported to the Intel Xeon/FPGA platform with OpenCL. The implementation work and the performance will be compared. Another FPGA accelerator, the Nallatech 385 PCIe accelerator with the same Stratix V FPGA, was also tested for performance. The results show that the Intel
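
    For context, the quantity being reconstructed is the Cherenkov emission angle theta_c, which for a particle of velocity beta = v/c in a radiator of refractive index n satisfies, in the idealized case,

      cos(theta_c) = 1 / (n * beta)

    so the measured angle, combined with the track momentum, constrains the particle mass; the full RICH reconstruction naturally involves per-photon geometry rather than this single relation.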

  16. Computational Simulations and the Scientific Method

    Science.gov (United States)

    Kleb, Bil; Wood, Bill

    2005-01-01

    As scientific simulation software becomes more complicated, the scientific-software implementor's need for component tests from new model developers becomes more crucial. The community's ability to follow the basic premise of the Scientific Method requires independently repeatable experiments, and model innovators are in the best position to create these test fixtures. Scientific software developers also need to quickly judge the value of the new model, i.e., its cost-to-benefit ratio in terms of gains provided by the new model and implementation risks such as cost, time, and quality. This paper asks two questions. The first is whether other scientific software developers would find published component tests useful, and the second is whether model innovators think publishing test fixtures is a feasible approach.

  17. Wideroe pre-accelerator for the SuperHILAC

    International Nuclear Information System (INIS)

    Staples, J.; Alonso, J.; Behrsing, G.; Clark, D.; Grunder, H.; Olivier, M.; Spence, D.; Yourd, R.

    1976-09-01

    In 1971 the Bevatron successfully accelerated low-intensity heavy ion beams up to neon to energies of 2.1 GeV/amu. More recently, beams up to argon have been accelerated using the SuperHILAC as an injector to the Bevatron--the Bevalac concept. With increasing scientific interest in high-energy high-intensity beams of heavier ions, plans to upgrade both the Bevatron vacuum system and the SuperHILAC ion sources and injectors have been formulated. A proposed new pre-accelerator based on an air-insulated Cockcroft-Walton and a Wideroe linac is presented. The Wideroe linac uses the design concepts established at UNILAC, modified for frequency and energy requirements. U^7+ ions from the ion source are accelerated from 12 keV/amu to 113 keV/amu and stripped to a mean charge state acceptable to the first tank of the SuperHILAC. The expected intensity improvement over the present pressurized injector is a factor of 100 at the highest masses. The physical modeling of the Wideroe linac structure will be kept to a minimum. Computer models predicting the characteristics of the structure have improved to the point where the probability of satisfactory performance is high

  18. Applications of particle accelerators

    International Nuclear Information System (INIS)

    Barbalat, O.

    1994-01-01

    Particle accelerators are now widely used in a variety of applications for scientific research, applied physics, medicine, and industrial processing, while possible utilisation in power engineering is envisaged. Earlier presentations of this subject, given at previous CERN Accelerator School sessions, have been updated with papers contributed to the first European Conference on Accelerators in Applied Research and Technology (ECAART), held in September 1989 in Frankfurt, and to the Second European Particle Accelerator Conference in Nice in June 1990. (orig.)

  19. Computing Principal Eigenvectors of Large Web Graphs: Algorithms and Accelerations Related to PageRank and HITS

    Science.gov (United States)

    Nagasinghe, Iranga

    2010-01-01

    This thesis investigates and develops a few acceleration techniques for the search engine algorithms used in PageRank and HITS computations. PageRank and HITS methods are two highly successful applications of modern Linear Algebra in computer science and engineering. They constitute the essential technologies accounted for the immense growth and…
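
    For context, the baseline computation that such acceleration techniques target is the power iteration for the PageRank vector; a minimal dense-matrix sketch is shown below (real implementations use sparse link matrices, and the thesis's accelerations modify this baseline).

      import numpy as np

      def pagerank(link_matrix, damping=0.85, tol=1e-10, max_iter=200):
          """Power iteration on a column-stochastic link matrix."""
          n = link_matrix.shape[0]
          rank = np.full(n, 1.0 / n)
          teleport = np.full(n, (1.0 - damping) / n)
          for _ in range(max_iter):
              new_rank = damping * link_matrix @ rank + teleport
              if np.abs(new_rank - rank).sum() < tol:
                  return new_rank
              rank = new_rank
          return rank

      # Tiny 3-page example: page 0 links to 1 and 2, page 1 links to 2, page 2 links to 0.
      L = np.array([[0.0, 0.0, 1.0],
                    [0.5, 0.0, 0.0],
                    [0.5, 1.0, 0.0]])
      print(pagerank(L))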

  20. Bringing Federated Identity to Grid Computing

    Energy Technology Data Exchange (ETDEWEB)

    Teheran, Jeny [Fermilab

    2016-03-04

    The Fermi National Accelerator Laboratory (FNAL) is facing the challenge of providing scientific data access and grid submission to scientific collaborations that span the globe but are hosted at FNAL. Users in these collaborations are currently required to register as an FNAL user and obtain FNAL credentials to access grid resources to perform their scientific computations. These requirements burden researchers with managing additional authentication credentials, and put additional load on FNAL for managing user identities. Our design integrates the existing InCommon federated identity infrastructure, CILogon Basic CA, and MyProxy with the FNAL grid submission system to provide secure access for users from diverse experiments and collaborations without requiring each user to have authentication credentials from FNAL. The design automates the handling of certificates so users do not need to manage them manually. Although the initial implementation is for FNAL's grid submission system, the design and the core of the implementation are general and could be applied to other distributed computing systems.

  1. Visual computing scientific visualization and imaging systems

    CERN Document Server

    2014-01-01

    This volume aims to stimulate discussions on research involving the use of data and digital images as an understanding approach for analysis and visualization of phenomena and experiments. The emphasis is put not only on graphically representing data as a way of increasing its visual analysis, but also on the imaging systems which contribute greatly to the comprehension of real cases. Scientific Visualization and Imaging Systems encompass multidisciplinary areas, with applications in many knowledge fields such as Engineering, Medicine, Material Science, Physics, Geology, Geographic Information Systems, among others. This book is a selection of 13 revised and extended research papers presented at the International Conference on Advanced Computational Engineering and Experimenting (ACE-X) conferences of 2010 (Paris), 2011 (Algarve), 2012 (Istanbul) and 2013 (Madrid). The examples were particularly chosen from materials research, medical applications, general concepts applied in simulations and image analysis and ot...

  2. An integrated IaaS and PaaS architecture for scientific computing

    OpenAIRE

    Donvito, Giacinto; Blanquer, Ignacio

    2015-01-01

    Scientific applications often require multiple computing resources deployed in a coordinated way. The deployment of multiple resources requires installing and configuring special software applications, which should be updated when changes in the virtual infrastructure take place. When working on hybrid and federated cloud environments, restrictions on the hypervisor or cloud management platform must be minimised to facilitate geographically wide brokering and cross-site deployments. Moreover, prese...

  3. Concepts and techniques: Active electronics and computers in safety-critical accelerator operation

    International Nuclear Information System (INIS)

    Frankel, R.S.

    1995-01-01

    The Relativistic Heavy Ion Collider (RHIC), under construction at Brookhaven National Laboratory, requires an extensive Access Control System to protect personnel from Radiation, Oxygen Deficiency and Electrical hazards. In addition, the complicated nature of operation of the Collider as part of a complex of other Accelerators necessitates the use of active electronic measurement circuitry to ensure compliance with established Operational Safety Limits. Solutions were devised which permit the use of modern computer and interconnections technology for Safety-Critical applications, while preserving and enhancing tried and proven protection methods. In addition, a set of Guidelines regarding required performance for Accelerator Safety Systems and a Handbook of design criteria and rules were developed to assist future system designers and to provide a framework for internal review and regulation

  4. Concepts and techniques: Active electronics and computers in safety-critical accelerator operation

    Energy Technology Data Exchange (ETDEWEB)

    Frankel, R.S.

    1995-12-31

    The Relativistic Heavy Ion Collider (RHIC), under construction at Brookhaven National Laboratory, requires an extensive Access Control System to protect personnel from Radiation, Oxygen Deficiency and Electrical hazards. In addition, the complicated nature of operation of the Collider as part of a complex of other Accelerators necessitates the use of active electronic measurement circuitry to ensure compliance with established Operational Safety Limits. Solutions were devised which permit the use of modern computer and interconnections technology for Safety-Critical applications, while preserving and enhancing tried and proven protection methods. In addition, a set of Guidelines regarding required performance for Accelerator Safety Systems and a Handbook of design criteria and rules were developed to assist future system designers and to provide a framework for internal review and regulation.

  5. Scientific and computational challenges of the fusion simulation project (FSP)

    International Nuclear Information System (INIS)

    Tang, W M

    2008-01-01

    This paper highlights the scientific and computational challenges facing the Fusion Simulation Project (FSP). The primary objective is to develop advanced software designed to use leadership-class computers for carrying out multiscale physics simulations to provide information vital to delivering a realistic integrated fusion simulation model with unprecedented physics fidelity. This multiphysics capability will be unprecedented in that in the current FES applications domain, the largest-scale codes are used to carry out first-principles simulations of mostly individual phenomena in realistic 3D geometry while the integrated models are much smaller-scale, lower-dimensionality codes with significant empirical elements used for modeling and designing experiments. The FSP is expected to be the most up-to-date embodiment of the theoretical and experimental understanding of magnetically confined thermonuclear plasmas and to provide a living framework for the simulation of such plasmas as the associated physics understanding continues to advance over the next several decades. Substantive progress on answering the outstanding scientific questions in the field will drive the FSP toward its ultimate goal of developing a reliable ability to predict the behavior of plasma discharges in toroidal magnetic fusion devices on all relevant time and space scales. From a computational perspective, the fusion energy science application goal to produce high-fidelity, whole-device modeling capabilities will demand computing resources in the petascale range and beyond, together with the associated multicore algorithmic formulation needed to address burning plasma issues relevant to ITER - a multibillion dollar collaborative device involving seven international partners representing over half the world's population. Even more powerful exascale platforms will be needed to meet the future challenges of designing a demonstration fusion reactor (DEMO). Analogous to other major applied physics

  6. Second Annual AEC Scientific Computer Information Exhange Meeting. Proceedings of the technical program theme: computer graphics

    Energy Technology Data Exchange (ETDEWEB)

    Peskin,A.M.; Shimamoto, Y.

    1974-01-01

    The topic of computer graphics serves well to illustrate that AEC affiliated scientific computing installations are well represented in the forefront of computing science activities. The participant response to the technical program was overwhelming--both in number of contributions and quality of the work described. Session I, entitled Advanced Systems, contains presentations describing systems that contain features not generally found in graphics facilities. These features can be roughly classified as extensions of standard two-dimensional monochromatic imaging to higher dimensions including color and time as well as multidimensional metrics. Session II presents seven diverse applications ranging from high energy physics to medicine. Session III describes a number of important developments in establishing facilities, techniques and enhancements in the computer graphics area. Although an attempt was made to schedule as many of these worthwhile presentations as possible, it appeared impossible to do so given the scheduling constraints of the meeting. A number of prospective presenters 'came to the rescue' by graciously withdrawing from the sessions. Some of their abstracts have been included in the Proceedings.

  7. Brief review of topmost scientific results obtained in 2016 at the Joint Institute for Nuclear Research

    International Nuclear Information System (INIS)

    Kravchenko, E.I.; Sabaeva, E.V.

    2017-01-01

    This brief review presents the topmost scientific results obtained in 2016 at the Joint Institute for Nuclear Research in such fields as theoretical and experimental physics, radiation and radiobiological research, accelerators, information technology and computer physics. It also provides information about the publications by JINR staff members and activities carried out at the JINR University Centre in 2016.

  8. Program for computing inhomogeneous coaxial resonators and accelerating systems of the U-400 and ITs-100 cyclotrons

    International Nuclear Information System (INIS)

    Gul'bekyan, G.G.; Ivanov, Eh.L.

    1987-01-01

    The ''Line'' computer code for computing inhomogeneous coaxial resonators is described. The results obtained for the resonators of the U-400 cyclotron made it possible to increase the energy of accelerated ions up to 27 MeV/nucl. The computations for the ITs-100 cyclic implantator made it possible to build a compact design with low RF power consumption

  9. FPGA hardware acceleration for high performance neutron transport computation based on agent methodology - 318

    International Nuclear Information System (INIS)

    Xiao, Shanjie; Jevremovic, Tatjana

    2010-01-01

    Accurate, detailed 3D neutron transport analysis for Gen-IV reactors is still time-consuming regardless of the advanced computational hardware available in developed countries. This paper introduces a new concept for addressing the computational time while preserving the detailed and accurate modeling: a specifically designed FPGA co-processor accelerates the robust AGENT methodology for complex reactor geometries. For the first time this approach is applied to accelerate the neutronics analysis. The AGENT methodology solves the neutron transport equation using the method of characteristics. The performance of the AGENT methodology was carefully analyzed before the hardware design based on the FPGA co-processor was adopted. The most time-consuming kernel part is then transplanted into the FPGA co-processor. The FPGA co-processor is designed with a data-flow-driven, non-von Neumann architecture and has much higher efficiency than the conventional computer architecture. Details of the FPGA co-processor design are introduced and the design is benchmarked using two different examples. The advanced chip architecture helps the FPGA co-processor obtain a speedup of more than 20 times, with its working frequency much lower than the CPU frequency. (authors)
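
    The kernel being moved to the FPGA is the characteristic-ray sweep of the method of characteristics; for a single flat-source segment of length s with total cross section sigma_t and source q, the standard attenuation update is shown below as a plain Python sketch (not the AGENT implementation; the segment data are hypothetical).

      import math

      def moc_segment(psi_in, sigma_t, q, length):
          """Outgoing angular flux after one flat-source characteristic segment."""
          att = math.exp(-sigma_t * length)
          return psi_in * att + (q / sigma_t) * (1.0 - att)

      # Sweeping a ray across several segments chains this update:
      psi = 0.0
      for sigma_t, q, length in [(0.5, 1.2, 2.0), (0.8, 0.4, 1.0)]:   # hypothetical segments
          psi = moc_segment(psi, sigma_t, q, length)
      print(psi)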

  10. Sparse Matrix-Vector Multiplication on Multicore and Accelerators

    Energy Technology Data Exchange (ETDEWEB)

    Williams, Samuel W. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Bell, Nathan [NVIDIA Research, Santa Clara, CA (United States); Choi, Jee Whan [Georgia Inst. of Technology, Atlanta, GA (United States); Garland, Michael [NVIDIA Research, Santa Clara, CA (United States); Oliker, Leonid [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Vuduc, Richard [Georgia Inst. of Technology, Atlanta, GA (United States)

    2010-12-07

    This chapter consolidates recent work on the development of high performance multicore and accelerator-based implementations of sparse matrix-vector multiplication (SpMV). As an object of study, SpMV is an interesting computation for two key reasons. First, it appears widely in applications in scientific and engineering computing, financial and economic modeling, and information retrieval, among others, and is therefore of great practical interest. Second, it is simple to describe but challenging to implement well, since its performance is limited by a variety of factors, including low computational intensity, potentially highly irregular memory access behavior, and a strong input dependence that may be known only at run time. Thus, we believe SpMV is both practically important and provides important insights for understanding the algorithmic and implementation principles necessary for making effective use of state-of-the-art systems.
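
    A minimal reference implementation of SpMV in the common compressed sparse row (CSR) format, which optimized kernels of the kind surveyed in the chapter start from, might look like this plain Python sketch (illustrative only, not one of the tuned implementations discussed).

      def spmv_csr(values, col_idx, row_ptr, x):
          """y = A @ x for a CSR matrix given by (values, col_idx, row_ptr)."""
          n_rows = len(row_ptr) - 1
          y = [0.0] * n_rows
          for i in range(n_rows):
              acc = 0.0
              for k in range(row_ptr[i], row_ptr[i + 1]):
                  acc += values[k] * x[col_idx[k]]
              y[i] = acc
          return y

      # 2x3 example matrix [[10, 0, 2], [0, 3, 0]] applied to x = [1, 2, 3]:
      print(spmv_csr([10.0, 2.0, 3.0], [0, 2, 1], [0, 2, 3], [1.0, 2.0, 3.0]))  # -> [16.0, 6.0]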

  11. Position Paper: Applying Machine Learning to Software Analysis to Achieve Trusted, Repeatable Scientific Computing

    Energy Technology Data Exchange (ETDEWEB)

    Prowell, Stacy J [ORNL; Symons, Christopher T [ORNL

    2015-01-01

    Producing trusted results from high-performance codes is essential for policy and has significant economic impact. We propose combining rigorous analytical methods with machine learning techniques to achieve the goal of repeatable, trustworthy scientific computing.

  12. The graphics future in scientific applications-trends and developments in computer graphics

    CERN Document Server

    Enderle, G

    1982-01-01

    Computer graphics methods and tools are being used to a great extent in scientific research. The future development in this area will be influenced both by new hardware developments and by software advances. On the hardware sector, the development of the raster technology will lead to the increased use of colour workstations with more local processing power. Colour hardcopy devices for creating plots, slides, or movies will be available at a lower price than today. The first real 3D-workstations will appear on the marketplace. One of the main activities on the software sector is the standardization of computer graphics systems, graphical files, and device interfaces. This will lead to more portable graphical application programs and to a common base for computer graphics education.

  13. Half a century of particle accelerators - 1950-2000

    International Nuclear Information System (INIS)

    Marin, P.

    2009-01-01

    In a lively historical account the author tells of the extraordinary progress made in accelerator physics since World War II. He focuses mainly on the history of French accelerators, which evolved from small electrostatic accelerators purchased abroad to complex and powerful storage rings and colliders built by French engineers and physicists. He shows how these machines served not only particle physicists, but also researchers working with synchrotron light. He recalls how these two scientific communities with such different backgrounds learned how to work together. The author was an accelerator physicist, and a project leader who played a key role in storage ring R and D, as well as in accelerator construction and operation. He describes the international context of the period, and relates the discussions on scientific policy issues of the time. He tells us about the technical challenges to be overcome and discusses the question of maintaining the balance between national development and international involvement. A number of important yet little-known aspects of this scientific adventure are recounted. This short history also includes his thoughts about the gestation of large scientific instruments which, no doubt, will interest researchers involved in 'big science'

  14. Advanced Computational Models for Accelerator-Driven Systems

    International Nuclear Information System (INIS)

    Talamo, A.; Ravetto, P.; Gudowsk, W.

    2012-01-01

    In the nuclear engineering scientific community, Accelerator Driven Systems (ADSs) have been proposed and investigated for the transmutation of nuclear waste, especially plutonium and minor actinides. These fuels have a quite low effective delayed neutron fraction relative to uranium fuel; therefore, the subcriticality of the core offers a unique safety feature with respect to critical reactors. The intrinsic safety of an ADS allows the elimination of operational control rods; hence the reactivity excess during burnup can be managed by the intensity of the proton beam, by fuel shuffling, and eventually by burnable poisons. However, the intrinsic safety of a subcritical system does not guarantee that ADSs are immune from severe accidents (core melting), since the decay heat of an ADS is very similar to that of a critical system. Normally, ADSs operate with an effective multiplication factor between 0.98 and 0.92, which means that the spallation neutron source contributes little to the neutron population. In addition, for 1 GeV incident protons and a lead-bismuth target, about 50% of the spallation neutrons have energies below 1 MeV and only 15% have energies above 3 MeV. In the light of these remarks, the transmutation performance of an ADS is very close to that of critical reactors.
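
    A useful rule of thumb behind the quoted k_eff range is the subcritical multiplication of the external source: each spallation source neutron leads to a total of about

      M = 1 / (1 - k_eff)

    neutrons (itself plus its fission-chain progeny), i.e. roughly 20 at k_eff = 0.95, which is why the spallation source itself contributes only a few percent of the total neutron population.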

  15. Accelerating Astronomy & Astrophysics in the New Era of Parallel Computing: GPUs, Phi and Cloud Computing

    Science.gov (United States)

    Ford, Eric B.; Dindar, Saleh; Peters, Jorg

    2015-08-01

    The realism of astrophysical simulations and statistical analyses of astronomical data is set by the available computational resources. Thus, astronomers and astrophysicists are constantly pushing the limits of computational capabilities. For decades, astronomers benefited from massive improvements in computational power that were driven primarily by increasing clock speeds and required relatively little attention to details of the computational hardware. For nearly a decade, increases in computational capabilities have come primarily from increasing the degree of parallelism, rather than increasing clock speeds. Further increases in computational capabilities will likely be led by many-core architectures such as Graphical Processing Units (GPUs) and Intel Xeon Phi. Successfully harnessing these new architectures requires significantly more understanding of the hardware architecture, cache hierarchy, compiler capabilities and network characteristics. I will provide an astronomer's overview of the opportunities and challenges provided by modern many-core architectures and elastic cloud computing. The primary goal is to help an astronomical audience understand what types of problems are likely to yield more than order-of-magnitude speed-ups and which problems are unlikely to parallelize sufficiently efficiently to be worth the development time and/or costs. I will draw on my experience leading a team in developing the Swarm-NG library for parallel integration of large ensembles of small n-body systems on GPUs, as well as several smaller software projects. I will share lessons learned from collaborating with computer scientists, including both technical and soft skills. Finally, I will discuss the challenges of training the next generation of astronomers to be proficient in this new era of high-performance computing, drawing on experience teaching a graduate class on High-Performance Scientific Computing for Astrophysics and organizing a 2014 advanced summer

  16. Research on GPU acceleration for Monte Carlo criticality calculation

    International Nuclear Information System (INIS)

    Xu, Q.; Yu, G.; Wang, K.

    2013-01-01

    The Monte Carlo (MC) neutron transport method can be naturally parallelized on multi-core architectures due to the independence of particles during the simulation. The GPU+CPU heterogeneous parallel mode has become an increasingly popular way of parallelism in the field of scientific supercomputing. Thus, this work focuses on the GPU acceleration method for the Monte Carlo criticality simulation, as well as the computational efficiency that GPUs can bring. The 'neutron transport step' is introduced to increase the GPU thread occupancy. In order to test the sensitivity to the MC code's complexity, a 1D one-group code and a 3D multi-group general purpose code are respectively transplanted to GPUs, and the acceleration effects are compared. The results of numerical experiments show a considerable acceleration effect of the 'neutron transport step' strategy. However, the performance comparison between the 1D code and the 3D code indicates the poor scalability of MC codes on GPUs. (authors)

  17. Simulation of accelerator transmutation of long-lived nuclear wastes

    International Nuclear Information System (INIS)

    Wolff-Bacha Fabienne

    1997-01-01

    The incineration of minor actinides in a hybrid reactor (i.e. a reactor coupled with an accelerator) could reduce their radioactivity. The scientific tool used for the simulations, the GEANT code implemented on a parallel computer, was first validated on thin and thick targets and by simulation of a pressurized water reactor, a fast reactor like Superphenix, and a molten salt fast hybrid reactor 'ATP'. Simulating a thermal hybrid reactor seems to indicate the non-negligible presence of neutrons which diffuse back to the accelerator. In spite of simplifications, the simulation of a molten lead fast hybrid reactor (such as the CERN Fast Energy Amplifier) may indicate difficulties with the radial power distribution in the core, the lifetime of the window, and the risk of an activated air leak. Finally, we propose a compact thermoelectric hybrid reactor, PRAHE - small atomic board hybrid reactor - whose principle allows a neutron coupling between the accelerator and the reactor. (author)

  18. Visualization in scientific computing

    National Research Council Canada - National Science Library

    Nielson, Gregory M; Shriver, Bruce D; Rosenblum, Lawrence J

    1990-01-01

    The purpose of this text is to provide a reference source to scientists, engineers, and students who are new to scientific visualization or who are interested in expanding their knowledge in this subject...

  19. Applied and numerical partial differential equations scientific computing in simulation, optimization and control in a multidisciplinary context

    CERN Document Server

    Glowinski, R; Kuznetsov, Y A; Periaux, Jacques; Neittaanmaki, Pekka; Pironneau, Olivier

    2010-01-01

    Standing at the intersection of mathematics and scientific computing, this collection of state-of-the-art papers in nonlinear PDEs examines their applications to subjects as diverse as dynamical systems, computational mechanics, and the mathematics of finance.

  20. SciDAC advances and applications in computational beam dynamics

    International Nuclear Information System (INIS)

    Ryne, R; Abell, D; Adelmann, A; Amundson, J; Bohn, C; Cary, J; Colella, P; Dechow, D; Decyk, V; Dragt, A; Gerber, R; Habib, S; Higdon, D; Katsouleas, T; Ma, K-L; McCorquodale, P; Mihalcea, D; Mitchell, C; Mori, W; Mottershead, C T; Neri, F; Pogorelov, I; Qiang, J; Samulyak, R; Serafini, D; Shalf, J; Siegerist, C; Spentzouris, P; Stoltz, P; Terzic, B; Venturini, M; Walstrom, P

    2005-01-01

    SciDAC has had a major impact on computational beam dynamics and the design of particle accelerators. Particle accelerators, which account for half of the facilities in the DOE Office of Science Facilities for the Future of Science 20 Year Outlook, are crucial for US scientific, industrial, and economic competitiveness. Thanks to SciDAC, accelerator design calculations that were once thought impossible are now carried out routinely, and new challenging and important calculations are within reach. SciDAC accelerator modeling codes are being used to get the most science out of existing facilities, to produce optimal designs for future facilities, and to explore advanced accelerator concepts that may hold the key to qualitatively new ways of accelerating charged particle beams. In this paper we present highlights from the SciDAC Accelerator Science and Technology (AST) project Beam Dynamics focus area in regard to algorithm development, software development, and applications

  1. SciDAC Advances and Applications in Computational Beam Dynamics

    International Nuclear Information System (INIS)

    Ryne, R.; Abell, D.; Adelmann, A.; Amundson, J.; Bohn, C.; Cary, J.; Colella, P.; Dechow, D.; Decyk, V.; Dragt, A.; Gerber, R.; Habib, S.; Higdon, D.; Katsouleas, T.; Ma, K.-L.; McCorquodale, P.; Mihalcea, D.; Mitchell, C.; Mori, W.; Mottershead, C.T.; Neri, F.; Pogorelov, I.; Qiang, J.; Samulyak, R.; Serafini, D.; Shalf, J.; Siegerist, C.; Spentzouris, P.; Stoltz, P.; Terzic, B.; Venturini, M.; Walstrom, P.

    2005-01-01

    SciDAC has had a major impact on computational beam dynamics and the design of particle accelerators. Particle accelerators, which account for half of the facilities in the DOE Office of Science Facilities for the Future of Science 20 Year Outlook, are crucial for US scientific, industrial, and economic competitiveness. Thanks to SciDAC, accelerator design calculations that were once thought impossible are now carried out routinely, and new challenging and important calculations are within reach. SciDAC accelerator modeling codes are being used to get the most science out of existing facilities, to produce optimal designs for future facilities, and to explore advanced accelerator concepts that may hold the key to qualitatively new ways of accelerating charged particle beams. In this poster we present highlights from the SciDAC Accelerator Science and Technology (AST) project Beam Dynamics focus area in regard to algorithm development, software development, and applications

  2. Compendium of Scientific Linacs

    Energy Technology Data Exchange (ETDEWEB)

    Clendenin, James E

    2003-05-16

    The International Committee supported the proposal of the Chairman of the XVIII International Linac Conference to issue a new Compendium of linear accelerators; the last one was published in 1976. The Local Organizing Committee of Linac96 decided to set up a sub-committee for this purpose. Contrary to the catalogues of high-energy accelerators, which compile accelerators with energies above 1 GeV, we have not defined a specific limit in energy. Microtrons and cyclotrons are not in this compendium. Also, data from the thousands of medical and industrial linacs have not been collected. Therefore, only scientific linacs are listed in the present compendium. Each linac found in this survey and involved in a physics context was considered. It could be used, for example, either as an injector for high-energy accelerators, or in nuclear physics, materials physics, free-electron lasers or synchrotron light machines. Linear accelerators are developed on three continents only: America, Asia, and Europe. This geographical distribution is kept as a basis. The compendium contains the parameters and status of scientific linacs. Most of these linacs are operational; however, many facilities under construction or at the design-study stage are also included. A special mention has been made at the end for the studies of future linear colliders.

  3. Designing scientific applications on GPUs

    CERN Document Server

    Couturier, Raphael

    2013-01-01

    Many of today's complex scientific applications now require a vast amount of computational power. General purpose graphics processing units (GPGPUs) enable researchers in a variety of fields to benefit from the computational power of all the cores available inside graphics cards. Understand the benefits of using GPUs for many scientific applications: Designing Scientific Applications on GPUs shows you how to use GPUs for applications in diverse scientific fields, from physics and mathematics to computer science. The book explains the methods necessary for designing or porting your scientific appl

  4. The nature of the (visualization) game: Challenges and opportunities from computational geophysics

    Science.gov (United States)

    Kellogg, L. H.

    2016-12-01

    As the geosciences enter the era of big data, modeling and visualization become increasingly vital tools for discovery, understanding, education, and communication. Here, we focus on modeling and visualization of the structure and dynamics of the Earth's surface and interior. The past decade has seen accelerated data acquisition, including higher resolution imaging and modeling of Earth's deep interior, complex models of geodynamics, and high resolution topographic imaging of the changing surface, with an associated acceleration of computational modeling through better scientific software, increased computing capability, and the use of innovative methods of scientific visualization. The role of modeling is to describe a system, answer scientific questions, and test hypotheses; the term "model" encompasses mathematical models, computational models, physical models, conceptual models, statistical models, and visual models of a structure or process. These different uses of the term require thoughtful communication to avoid confusion. Scientific visualization is integral to every aspect of modeling. Not merely a means of communicating results, the best uses of visualization enable scientists to interact with their data, revealing the characteristics of the data and models to enable better interpretation and inform the direction of future investigation. Innovative immersive technologies like virtual reality, augmented reality, and remote collaboration techniques are being adapted more widely and are a magnet for students. Time-varying or transient phenomena are especially challenging to model and to visualize; researchers and students may need to investigate the role of initial conditions in driving phenomena, while nonlinearities in the governing equations of many Earth systems make the computations and resulting visualization especially challenging. Training students how to use, design, build, and interpret scientific modeling and visualization tools prepares them

  5. GPU-accelerated Lattice Boltzmann method for anatomical extraction in patient-specific computational hemodynamics

    Science.gov (United States)

    Yu, H.; Wang, Z.; Zhang, C.; Chen, N.; Zhao, Y.; Sawchuk, A. P.; Dalsing, M. C.; Teague, S. D.; Cheng, Y.

    2014-11-01

    Existing research in patient-specific computational hemodynamics (PSCH) relies heavily on software for anatomical extraction of blood arteries. Data reconstruction and mesh generation have to be done with existing commercial software because of the gap between medical image processing and CFD, which increases the computational burden and introduces inaccuracy during data transformation, thus limiting the medical applications of PSCH. We use the lattice Boltzmann method (LBM) to solve the level-set equation over an Eulerian distance field and implicitly and dynamically segment the artery surfaces from radiological CT/MRI imaging data. The segments feed seamlessly into the LBM-based CFD computation of PSCH, so explicit mesh construction and extra data management are avoided. The LBM is ideally suited to GPU (graphics processing unit)-based parallel computing, and the parallel acceleration on GPUs achieves excellent performance in PSCH computation. An application study will be presented which segments an aortic artery from a chest CT dataset and models the PSCH of the segmented artery.
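
    The coupled level-set/LBM segmentation code itself is not reproduced in the record. As a hedged sketch of the building block it rests on, the code below performs one standard D2Q9 BGK collide-and-stream update in NumPy; every operation is either purely local or nearest-neighbor, which is what makes the method map so naturally onto GPUs. Grid size, relaxation time, and the density-bump initial condition are illustrative assumptions.

```python
import numpy as np

# D2Q9 lattice: discrete velocities and weights
E = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
W = np.array([4/9, 1/9, 1/9, 1/9, 1/9, 1/36, 1/36, 1/36, 1/36])

def equilibrium(rho, ux, uy):
    """BGK equilibrium distributions for all 9 directions, shape (9, ny, nx)."""
    usq = ux**2 + uy**2
    feq = np.empty((9,) + rho.shape)
    for i, (ex, ey) in enumerate(E):
        eu = ex * ux + ey * uy
        feq[i] = W[i] * rho * (1.0 + 3.0*eu + 4.5*eu**2 - 1.5*usq)
    return feq

def lbm_step(f, tau):
    """One collide-and-stream update with periodic boundaries; every operation
    is local or nearest-neighbor, which is why LBM maps so well onto GPUs."""
    rho = f.sum(axis=0)
    ux = np.tensordot(E[:, 0], f, axes=1) / rho
    uy = np.tensordot(E[:, 1], f, axes=1) / rho
    f_post = f - (f - equilibrium(rho, ux, uy)) / tau        # BGK collision
    for i, (ex, ey) in enumerate(E):                          # streaming
        f_post[i] = np.roll(np.roll(f_post[i], ey, axis=0), ex, axis=1)
    return f_post

if __name__ == "__main__":
    ny, nx = 64, 64
    rho0 = np.ones((ny, nx))
    rho0[28:36, 28:36] = 1.05                                 # small density bump
    f = equilibrium(rho0, np.zeros_like(rho0), np.zeros_like(rho0))
    for _ in range(200):
        f = lbm_step(f, tau=0.8)
    print("total mass conserved:", np.isclose(f.sum(), rho0.sum()))
```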

  6. Rigorous bounds on survival times in circular accelerators and efficient computation of fringe-field transfer maps

    International Nuclear Information System (INIS)

    Hoffstaetter, G.H.

    1994-12-01

    Analyzing the stability of particle motion in storage rings contributes to the general field of stability analysis in weakly nonlinear motion. A method which we call pseudo-invariant estimation (PIE) is used to compute lower bounds on the survival time in circular accelerators. The pseudo-invariants needed for this approach are computed via nonlinear perturbative normal form theory, and the required global maxima of the highly complicated multivariate functions could only be rigorously bounded with an extension of interval arithmetic. The bounds on the survival times are large enough to be relevant; the same is true for the lower bounds on dynamical apertures, which can also be computed. The PIE method can lead to novel design criteria with the objective of maximizing the survival time. A major effort in the direction of rigorous predictions only makes sense if accurate models of accelerators are available. Fringe fields often have a significant influence on optical properties, but the computation of fringe-field maps by DA-based integration is slower by several orders of magnitude than DA evaluation of the propagator for main-field maps. A novel computation of fringe-field effects called symplectic scaling (SYSCA) is introduced. It exploits the advantages of Lie transformations, generating functions, and scaling properties and is extremely accurate. The computation of fringe-field maps is typically made nearly two orders of magnitude faster. (orig.)
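
    The PIE machinery (normal forms, verified global optimization) is far beyond a short example, but its interval-arithmetic ingredient can be sketched. The code below defines a minimal interval type and uses it to compute a guaranteed enclosure of a toy "pseudo-invariant deviation" polynomial over a box; the polynomial and box are illustrative assumptions, and a truly rigorous implementation would additionally use outward (directed) rounding of every operation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    """Closed interval [lo, hi]; note: no directed rounding, so this is only a
    sketch of what a rigorous interval library would provide."""
    lo: float
    hi: float

    def __add__(self, other):
        other = _as_interval(other)
        return Interval(self.lo + other.lo, self.hi + other.hi)

    __radd__ = __add__

    def __mul__(self, other):
        other = _as_interval(other)
        products = (self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi)
        return Interval(min(products), max(products))

    __rmul__ = __mul__

def _as_interval(x):
    return x if isinstance(x, Interval) else Interval(float(x), float(x))

def bound_polynomial(poly, box):
    """Evaluate `poly` with interval arithmetic over the box (a list of
    Intervals); the result encloses the range of the polynomial, so its `hi`
    is an upper bound on the global maximum over the box."""
    return poly(*box)

if __name__ == "__main__":
    # a toy "pseudo-invariant deviation" in two phase-space variables (illustrative)
    poly = lambda x, p: 0.5 * (x * x + p * p) + 0.1 * x * x * p
    box = [Interval(-0.2, 0.2), Interval(-0.2, 0.2)]
    print("guaranteed enclosure of the deviation:", bound_polynomial(poly, box))
```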

  7. Sign use and cognition in automated scientific discovery: are computers only special kinds of signs?

    Science.gov (United States)

    Giza, Piotr

    2018-04-01

    James Fetzer criticizes the computational paradigm prevailing in cognitive science by questioning what he takes to be its most elementary ingredient: that cognition is computation across representations. He argues that if cognition is taken to be a purposive, meaningful, algorithmic problem-solving activity, then computers are incapable of cognition. Instead, they appear to be signs of a special kind that can facilitate computation. He proposes the conception of minds as semiotic systems as an alternative paradigm for understanding mental phenomena, one that seems to overcome the difficulties of computationalism. Now, I argue that with computer systems dealing with scientific discovery, the matter is not as simple as that. The alleged superiority of humans, who use signs to stand for something other than themselves, over computers, which are merely "physical symbol systems" or "automatic formal systems", is easy to establish in everyday life, but becomes far from obvious when scientific discovery is at stake. In science, as opposed to everyday life, the meaning of symbols is, apart from very low-level experimental investigations, defined implicitly by the way the symbols are used in explanatory theories or experimental laws relevant to the field, and in consequence, human and machine discoverers are much more on a par. Moreover, the great practical success of the genetic programming method and recent attempts to apply it to the automatic generation of cognitive theories seem to show that computer systems are capable of very efficient problem-solving activity in science which is neither purposive, nor meaningful, nor algorithmic. This, I think, undermines Fetzer's argument that computer systems are incapable of cognition because computation across representations is bound to be a purposive, meaningful, algorithmic problem-solving activity.

  8. Accelerating Inspire

    CERN Document Server

    AUTHOR|(CDS)2266999

    2017-01-01

    CERN has been involved in the dissemination of scientific results since its early days and has continuously updated the distribution channels. Currently, Inspire hosts catalogues of articles, authors, institutions, conferences, jobs, experiments, journals and more. Successful orientation among this amount of data requires comprehensive linking between the content. Inspire has lacked a system for linking experiments and articles together based on which accelerator they were conducted at. The purpose of this project has been to create such a system. Records for 156 accelerators were created and all 2913 experiments on Inspire were given corresponding MARC tags. Records of 18404 accelerator physics related bibliographic entries were also tagged with corresponding accelerator tags. Finally, as a part of the endeavour to broaden CERN's presence on Wikipedia, existing Wikipedia articles of accelerators were updated with short descriptions and links to Inspire. In total, 86 Wikipedia articles were updated. This repo...

  9. Distribution of computer functionality for accelerator control at the Brookhaven AGS

    International Nuclear Information System (INIS)

    Stevens, A.; Clifford, T.; Frankel, R.

    1985-01-01

    A set of physical and functional system components and their interconnection protocols has been established for all controls work at the AGS. Portions of these designs were tested as part of enhanced operation of the AGS as a source of polarized protons, and additional segments will be implemented during the continuing construction efforts which are adding heavy ion capability to our facility. Included in our efforts are the following computer and control system elements: a broadband local area network, which embodies MODEMs, transmission systems and branch interface units; a hierarchical layer, which performs certain database and watchdog/alarm functions; a group of workstation processors (Apollos), which perform the function of traditional minicomputer host(s); and a layer which provides both real-time control and standardization functions for accelerator devices and instrumentation. Database and other accelerator functionality is assigned to the most appropriate level within our network for real-time performance, long-term utility, and orderly growth

  10. Construction of Blaze at the University of Illinois at Chicago: A Shared, High-Performance, Visual Computer for Next-Generation Cyberinfrastructure-Accelerated Scientific, Engineering, Medical and Public Policy Research

    Energy Technology Data Exchange (ETDEWEB)

    Brown, Maxine D. [Acting Director, EVL; Leigh, Jason [PI

    2014-02-17

    The Blaze high-performance visual computing system serves the high-performance computing research and education needs of the University of Illinois at Chicago (UIC). Blaze consists of a state-of-the-art, networked, computer cluster and ultra-high-resolution visualization system called CAVE2(TM) that is currently not available anywhere in Illinois. This system is connected via a high-speed 100-Gigabit network to the State of Illinois' I-WIRE optical network, as well as to national and international high speed networks, such as the Internet2, and the Global Lambda Integrated Facility. This enables Blaze to serve as an on-ramp to national cyberinfrastructure, such as the National Science Foundation's Blue Waters petascale computer at the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign and the Department of Energy's Argonne Leadership Computing Facility (ALCF) at Argonne National Laboratory. DOE award # DE-SC005067, leveraged with NSF award #CNS-0959053 for "Development of the Next-Generation CAVE Virtual Environment (NG-CAVE)," enabled us to create a first-of-its-kind high-performance visual computing system. The UIC Electronic Visualization Laboratory (EVL) worked with two U.S. companies to advance their commercial products and maintain U.S. leadership in the global information technology economy. New applications are being enabled with the CAVE2/Blaze visual computing system that is advancing scientific research and education in the U.S. and globally, and helping to train the next-generation workforce.

  11. Accelerating phylogenetics computing on the desktop: experiments with executing UPGMA in programmable logic.

    Science.gov (United States)

    Davis, J P; Akella, S; Waddell, P H

    2004-01-01

    Having greater computational power on the desktop for processing taxa data sets has been a dream of biologists/statisticians involved in phylogenetics data analysis. Many existing algorithms have been highly optimized; one example is Felsenstein's PHYLIP code, written in C, for UPGMA and neighbor-joining algorithms. However, the ability to process more than a few tens of taxa in a reasonable amount of time using conventional computers has not yielded a satisfactory speedup in data processing, making it difficult for phylogenetics practitioners to quickly explore data sets, such as might be done from a laptop computer. We discuss the application of custom computing techniques to phylogenetics. In particular, we apply this technology to speed up UPGMA algorithm execution by a factor of one hundred relative to PHYLIP code running on the same PC. We report on these experiments and discuss how custom computing techniques can be used to not only accelerate phylogenetics algorithm performance on the desktop, but also on larger, high-performance computing engines, thus enabling the high-speed processing of data sets involving thousands of taxa.
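
    Neither PHYLIP nor the programmable-logic design is reproduced here; as a hedged reference point, the sketch below is a plain textbook UPGMA in Python. The repeated closest-pair search plus size-weighted distance update is the kernel whose cost grows quickly with the number of taxa and which the custom hardware accelerates. The label set and distance matrix in the usage example are illustrative assumptions.

```python
def upgma(dist, labels):
    """Plain textbook UPGMA: repeatedly merge the closest pair of clusters and
    recompute distances to the merged cluster as size-weighted averages.
    Returns a nested-tuple tree and the list of merge heights."""
    n = len(labels)
    clusters = {frozenset([i]): labels[i] for i in range(n)}
    sizes = {frozenset([i]): 1 for i in range(n)}
    dmat = {(frozenset([i]), frozenset([j])): dist[i][j]
            for i in range(n) for j in range(i + 1, n)}

    def d(a, b):
        return dmat[(a, b)] if (a, b) in dmat else dmat[(b, a)]

    heights = []
    while len(clusters) > 1:
        a, b = min(((p, q) for p in clusters for q in clusters if p != q),
                   key=lambda pq: d(*pq))
        heights.append(d(a, b) / 2.0)            # ultrametric merge height
        merged = a | b
        for c in clusters:                        # size-weighted average distances
            if c not in (a, b):
                dmat[(merged, c)] = (sizes[a] * d(a, c)
                                     + sizes[b] * d(b, c)) / (sizes[a] + sizes[b])
        sizes[merged] = sizes[a] + sizes[b]
        subtree = (clusters[a], clusters[b])
        del clusters[a], clusters[b], sizes[a], sizes[b]
        clusters[merged] = subtree
    return next(iter(clusters.values())), heights


if __name__ == "__main__":
    labels = ["A", "B", "C", "D"]
    dist = [[0, 2, 6, 6],
            [2, 0, 6, 6],
            [6, 6, 0, 4],
            [6, 6, 4, 0]]
    tree, heights = upgma(dist, labels)
    print(tree)       # e.g. (('A', 'B'), ('C', 'D'))
    print(heights)    # [1.0, 2.0, 3.0]
```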

  12. Fixed-Target Electron Accelerators

    International Nuclear Information System (INIS)

    Brooks, William K.

    2001-01-01

    A tremendous amount of scientific insight has been garnered over the past half-century by using particle accelerators to study physical systems of sub-atomic dimensions. These giant instruments begin with particles at rest, then greatly increase their energy of motion, forming a narrow trajectory or beam of particles. In fixed-target accelerators, the particle beam impacts upon a stationary sample or target which contains or produces the sub-atomic system being studied. This is in distinction to colliders, where two beams are produced and are steered into each other so that their constituent particles can collide. The acceleration process always relies on the particle being accelerated having an electric charge; however, both the details of producing the beam and the classes of scientific investigations possible vary widely with the specific type of particle being accelerated. This article discusses fixed-target accelerators which produce beams of electrons, the lightest charged particle. As detailed in the report, the beam energy has a close connection with the size of the physical system studied. Here a useful unit of energy is a GeV, i.e., a giga electron-volt. (One GeV, the energy an electron would have if accelerated through a billion volts, is equal to 1.6 x 10^-10 joules.) To study systems on a distance scale much smaller than an atomic nucleus requires beam energies ranging from a few GeV up to hundreds of GeV and more

  13. Application of the opportunities of tool system 'CUDA' for graphic processors programming in scientific and technical calculation tasks

    International Nuclear Information System (INIS)

    Dudnik, V.A.; Kudryavtsev, V.I.; Sereda, T.M.; Us, S.A.; Shestakov, M.V.

    2009-01-01

    The capabilities of the CUDA technology (Compute Unified Device Architecture, NVIDIA's unified hardware-software solution for parallel calculations on GPUs) are described. The basic differences between the 'C' programming language for GPUs and 'usual' C are outlined. Examples are given of using CUDA to accelerate the development of applications and the implementation of scientific and technical calculation algorithms carried out by means of the general-purpose graphics processors (GPGPU) of eighth-generation GeForce accelerators. Recommendations on the optimization of programs using the GPU are given.
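
    The original examples were written in CUDA C for GeForce 8-series hardware and are not reproduced here. As a hedged sketch of the same offloading idea from Python, the snippet below runs an embarrassingly parallel array kernel on the GPU via CuPy (whose array API mirrors NumPy) when it is installed, and falls back to NumPy on the CPU otherwise; the workload itself is a hypothetical stand-in for a real scientific calculation.

```python
# Hedged sketch: run the same NumPy-style array code on a GPU when CuPy (whose
# API mirrors NumPy and uses CUDA underneath) is installed, otherwise on the CPU.
try:
    import cupy as xp          # GPU arrays
    ON_GPU = True
except ImportError:
    import numpy as xp         # CPU fallback
    ON_GPU = False


def toy_kernel_mean(n=10_000_000):
    """Hypothetical stand-in for a scientific kernel: element-wise
    transcendental arithmetic over a large array followed by a reduction,
    i.e. embarrassingly parallel work suited to thousands of GPU threads."""
    x = xp.linspace(0.0, 10.0, n)
    y = xp.sin(x) * xp.exp(-0.1 * x) + 0.5 * xp.cos(3.0 * x)
    return float(y.mean())     # copies the scalar result back to the host


if __name__ == "__main__":
    print("running on", "GPU (CuPy)" if ON_GPU else "CPU (NumPy)")
    print("result:", toy_kernel_mean())
```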

  14. Parallel computation for solving the tridiagonal linear system of equations

    International Nuclear Information System (INIS)

    Ishiguro, Misako; Harada, Hiroo; Fujii, Minoru; Fujimura, Toichiro; Nakamura, Yasuhiro; Nanba, Katsumi.

    1981-09-01

    Recently, applications of parallel computation to scientific calculations have increased, driven by the need for high-speed calculation in large-scale programs. At the JAERI computing center, an array processor, the FACOM 230-75 APU, has been installed to study the applicability of parallel computation to nuclear codes. We made some numerical experiments using the APU on methods for solving tridiagonal linear systems, an important problem in scientific calculations. Referring to recent papers on parallel methods, we investigated eight of them: the Gauss elimination method, the Parallel Gauss method, the Accelerated Parallel Gauss method, the Jacobi method, the Recursive doubling method, the Cyclic reduction method, the Chebyshev iteration method, and the Conjugate gradient method. The computing time and accuracy were compared among the methods on the basis of the numerical experiments. As a result, it is found that the Cyclic reduction method is the best in both computing time and accuracy, and the Gauss elimination method is the second best. (author)
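
    The FACOM array-processor implementations are not available in the record, so the sketch below shows only the serial baseline the parallel schemes are measured against: the Thomas algorithm (Gauss elimination specialized to tridiagonal systems), which is O(n) in operations but inherently sequential; the parallel alternatives listed above (cyclic reduction, recursive doubling, and so on) restructure exactly this recurrence. The test matrix in the usage example is an illustrative assumption.

```python
import numpy as np

def thomas_solve(a, b, c, d):
    """Serial Gauss elimination (Thomas algorithm) for a tridiagonal system
        a[i]*x[i-1] + b[i]*x[i] + c[i]*x[i+1] = d[i],  with a[0] = c[-1] = 0.
    O(n) work, but the forward/backward sweeps are inherently sequential."""
    n = len(d)
    cp = np.empty(n)
    dp = np.empty(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                        # forward elimination
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):               # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

if __name__ == "__main__":
    n = 6
    a = np.full(n, -1.0); a[0] = 0.0             # sub-diagonal
    b = np.full(n, 2.0)                           # diagonal
    c = np.full(n, -1.0); c[-1] = 0.0             # super-diagonal
    x_true = np.arange(1.0, n + 1.0)
    d = a * np.r_[0.0, x_true[:-1]] + b * x_true + c * np.r_[x_true[1:], 0.0]
    print(np.allclose(thomas_solve(a, b, c, d), x_true))
```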

  15. Physics and technical development of accelerators

    International Nuclear Information System (INIS)

    2000-03-01

    About 90 registered participants delivered more than 40 scientific papers. A large part of these presentations were of general interest, covering running projects such as the CIME accelerator at Ganil, IPHI (high-intensity proton injector), the ESRF (European synchrotron radiation source), the LHC (Large Hadron Collider), the ELYSE accelerator at Orsay, AIRIX, and the VIVITRON tandem accelerator. Other presentations highlighted the latest technological developments in accelerator components: superconducting cavities, power klystrons, high-current injectors.

  16. 2016 Accelerators meeting

    International Nuclear Information System (INIS)

    Spiro, Michel; Revol, Jean-Luc; Biarrotte, Jean-Luc; Napoly, Olivier; Jardin, Pascal; Chautard, Frederic; Thomas, Jean Charles; Petit, Eric

    2016-09-01

    The Accelerators meeting is organised every two years by the Accelerators division of the French Society of Physics (SFP). It brings together about 50 participants during a one-day meeting. The morning sessions are devoted to scientific presentations while the afternoon is dedicated to technical visits of facilities. This document brings together the available presentations (slides): 1 - Presentation of the Ganil - Grand accelerateur national d'ions lourds/Big national heavy-ion accelerator, Caen (Jardin, Pascal); 2 - Presentation of the Accelerators division of the French Society of Physics (Revol, Jean-Luc); 3 - Forward-looking and Prospective view (Napoly, Olivier); 4 - Accelerators at the National Institute of Nuclear and particle physics, situation, Forward-looking and Prospective view (Biarrotte, Jean-Luc); 5 - GANIL-SPIRAL2, missions and goals (Thomas, Jean Charles); 6 - The SPIRAL2 project (Petit, Eric)

  17. Explicit integration with GPU acceleration for large kinetic networks

    International Nuclear Information System (INIS)

    Brock, Benjamin; Belt, Andrew; Billings, Jay Jay; Guidry, Mike

    2015-01-01

    We demonstrate the first implementation of recently-developed fast explicit kinetic integration algorithms on modern graphics processing unit (GPU) accelerators. Taking as a generic test case a Type Ia supernova explosion with an extremely stiff thermonuclear network having 150 isotopic species and 1604 reactions coupled to hydrodynamics using operator splitting, we demonstrate the capability to solve of order 100 realistic kinetic networks in parallel in the same time that standard implicit methods can solve a single such network on a CPU. This orders-of-magnitude decrease in computation time for solving systems of realistic kinetic networks implies that important coupled, multiphysics problems in various scientific and technical fields that were intractable, or could be simulated only with highly schematic kinetic networks, are now computationally feasible.
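
    The fast explicit algorithms themselves (asymptotic, quasi-steady-state, and partial-equilibrium updates) and their GPU implementation are not reproduced in the record. The sketch below shows, under stated assumptions, the simplest member of that family: an explicit asymptotic update applied to a toy two-species network, evaluated in lockstep across an ensemble of networks to mimic the one-network-per-thread layout. Rate constants, ensemble size, and function names are illustrative, not the authors' code.

```python
import numpy as np

def asymptotic_step(y_a, y_b, k1, k2, dt):
    """One explicit 'asymptotic' update for the toy network A <-> B.
    Writing each rate equation as dy/dt = F_plus - k*y, the update
        y_new = (y + dt*F_plus) / (1 + dt*k)
    remains stable for stiff destruction rates while staying fully explicit
    (no matrix solve); note it does not exactly conserve total abundance."""
    f_plus_a, k_a = k2 * y_b, k1
    f_plus_b, k_b = k1 * y_a, k2
    y_a_new = (y_a + dt * f_plus_a) / (1.0 + dt * k_a)
    y_b_new = (y_b + dt * f_plus_b) / (1.0 + dt * k_b)
    return y_a_new, y_b_new

if __name__ == "__main__":
    # an ensemble of networks integrated in lockstep (one per "thread")
    n = 1_000
    rng = np.random.default_rng(1)
    y_a = rng.uniform(0.5, 1.5, n)
    y_b = np.zeros(n)
    k1, k2, dt = 1.0e4, 1.0, 1.0e-3               # strongly stiff forward rate
    for _ in range(2_000):
        y_a, y_b = asymptotic_step(y_a, y_b, k1, k2, dt)
    # at the fixed point the fluxes balance: k1*A ~= k2*B
    print("max |k1*A - k2*B| at the end:",
          float(np.max(np.abs(k1 * y_a - k2 * y_b))))
```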

  18. A new 3-D integral code for computation of accelerator magnets

    International Nuclear Information System (INIS)

    Turner, L.R.; Kettunen, L.

    1991-01-01

    For computing accelerator magnets, integral codes have several advantages over finite element codes; far-field boundaries are treated automatically, and the computed fields in the bore region satisfy Maxwell's equations exactly. A new integral code employing edge elements rather than nodal elements has overcome the difficulties associated with earlier integral codes. By the use of field integrals (potential differences) as solution variables, the number of unknowns is reduced to one less than the number of nodes. Two examples, a hollow iron sphere and the dipole magnet of the Advanced Photon Source injector synchrotron, show the capability of the code. The CPU time requirements are comparable to those of three-dimensional (3-D) finite-element codes. Experiments show that in practice it can realize much of the potential CPU time saving that parallel processing makes possible. 8 refs., 4 figs., 1 tab
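
    The edge-element integral formulation is not reproduced in the record. As a hedged illustration of why integral methods need no far-field boundary treatment, the sketch below evaluates the Biot-Savart line integral for a circular current loop by direct quadrature and checks it against the analytic on-axis field; loop radius, current, and segment count are illustrative assumptions.

```python
import numpy as np

MU0 = 4.0e-7 * np.pi   # vacuum permeability [T*m/A]

def biot_savart_loop(point, radius=1.0, current=1.0, n_segments=2000):
    """Magnetic field at `point` from a circular current loop in the z=0
    plane, by direct quadrature of the Biot-Savart line integral.  No mesh of
    the surrounding air region and no artificial far-field boundary is needed,
    which is the basic appeal of integral methods for magnet computation."""
    theta = np.linspace(0.0, 2.0 * np.pi, n_segments, endpoint=False)
    # segment positions on the loop and tangent vectors dl
    src = np.column_stack([radius * np.cos(theta),
                           radius * np.sin(theta),
                           np.zeros_like(theta)])
    dtheta = 2.0 * np.pi / n_segments
    dl = np.column_stack([-radius * np.sin(theta),
                          radius * np.cos(theta),
                          np.zeros_like(theta)]) * dtheta
    r = np.asarray(point, dtype=float) - src                   # (n, 3)
    r_norm = np.linalg.norm(r, axis=1)
    integrand = np.cross(dl, r) / r_norm[:, None] ** 3
    return MU0 * current / (4.0 * np.pi) * integrand.sum(axis=0)

if __name__ == "__main__":
    # on-axis field of a unit loop: analytic value mu0*I*R^2 / (2*(R^2+z^2)^1.5)
    z = 0.5
    b_numeric = biot_savart_loop([0.0, 0.0, z])
    b_exact = MU0 * 1.0 * 1.0**2 / (2.0 * (1.0 + z**2) ** 1.5)
    print("numerical B_z:", b_numeric[2], "  analytic B_z:", b_exact)
```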

  19. Future Accelerator Challenges in Support of High-Energy Physics

    International Nuclear Information System (INIS)

    Zisman, Michael S.; Zisman, M.S.

    2008-01-01

    Historically, progress in high-energy physics has largely been determined by development of more capable particle accelerators. This trend continues today with the imminent commissioning of the Large Hadron Collider at CERN, and the worldwide development effort toward the International Linear Collider. Looking ahead, there are two scientific areas ripe for further exploration: the energy frontier and the precision frontier. To explore the energy frontier, two approaches toward multi-TeV beams are being studied: an electron-positron linear collider based on a novel two-beam powering system (CLIC), and a Muon Collider. Work on the precision frontier involves accelerators with very high intensity, including a Super-B Factory and a muon-based Neutrino Factory. Without question, one of the most promising approaches is the development of muon-beam accelerators. Such machines have very high scientific potential, and would substantially advance the state-of-the-art in accelerator design. The challenges of the new generation of accelerators, and how these can be accommodated in the accelerator design, are described. To reap their scientific benefits, all of these frontier accelerators will require sophisticated instrumentation to characterize the beam and control it with unprecedented precision

  20. Future Accelerator Challenges in Support of High-Energy Physics

    Energy Technology Data Exchange (ETDEWEB)

    Zisman, Michael S.; Zisman, M.S.

    2008-05-03

    Historically, progress in high-energy physics has largely been determined by development of more capable particle accelerators. This trend continues today with the imminent commissioning of the Large Hadron Collider at CERN, and the worldwide development effort toward the International Linear Collider. Looking ahead, there are two scientific areas ripe for further exploration: the energy frontier and the precision frontier. To explore the energy frontier, two approaches toward multi-TeV beams are being studied: an electron-positron linear collider based on a novel two-beam powering system (CLIC), and a Muon Collider. Work on the precision frontier involves accelerators with very high intensity, including a Super-B Factory and a muon-based Neutrino Factory. Without question, one of the most promising approaches is the development of muon-beam accelerators. Such machines have very high scientific potential, and would substantially advance the state-of-the-art in accelerator design. The challenges of the new generation of accelerators, and how these can be accommodated in the accelerator design, are described. To reap their scientific benefits, all of these frontier accelerators will require sophisticated instrumentation to characterize the beam and control it with unprecedented precision.

  1. Operating experience with a new accelerator control system based upon microprocessors

    Energy Technology Data Exchange (ETDEWEB)

    Magyary, S.; Lancaster, H.; Selph, F.; Fahmie, M.; Timossi, C.; Glatz, J.; Ritchie, A.; Hinkson, J.; Benjegerdes, R.; Brodzik, D.

    1981-03-01

    This paper describes the design and operating experience with a high performance control system tailored to the requirements of the SuperHILAC accelerator. A large number (20) of the latest 16-bit microcomputer boards are used in a parallel-distributed manner to get a high system bandwidth. Because of the high bandwidth, software costs and complexity are significantly reduced. The system by its very nature and design is easily upgraded and repaired. Dynamically assigned and labeled knobs, together with touch-panels, allow a flexible and efficient operator interface. An X-Y vector graphics system provides for display and labeling of real-time signals as well as general plotting functions. This control system allows attachment of a powerful auxiliary computer for scientific processing with access to accelerator parameters.

  2. Continuous Analog of Accelerated OS-EM Algorithm for Computed Tomography

    Directory of Open Access Journals (Sweden)

    Kiyoko Tateishi

    2017-01-01

    The maximum-likelihood expectation-maximization (ML-EM) algorithm is used for iterative image reconstruction (IIR) and performs well with respect to the inverse problem as cross-entropy minimization in computed tomography. For accelerating the convergence rate of the ML-EM, the ordered-subsets expectation-maximization (OS-EM) with a power factor is effective. In this paper, we propose a continuous analog to the power-based accelerated OS-EM algorithm. The continuous-time image reconstruction (CIR) system is described by nonlinear differential equations with piecewise smooth vector fields under a cyclic switching process. A numerical discretization of the differential equation using the geometric multiplicative first-order expansion of the nonlinear vector field leads to an iterative formula exactly equivalent to the power-based OS-EM. The convergence of nonnegatively constrained solutions to a globally stable equilibrium is guaranteed by the Lyapunov theorem for consistent inverse problems. We illustrate through numerical experiments that the convergence characteristics of the continuous system are of the highest quality compared with those of the discretization methods. We clarify how important it is that the discretization method approximate the solution of the CIR well in order to design a better IIR method.
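
    The continuous-time system and its geometric multiplicative discretization are not reproduced here. As a hedged sketch of the discrete iteration the paper starts from, the code below implements the standard OS-EM update (plain ML-EM when the number of subsets is one) on a toy nonnegative system; the random system matrix, subset count, and iteration count are illustrative assumptions, and no power factor is applied.

```python
import numpy as np

def os_em(A, b, n_iters=20, n_subsets=4, eps=1e-12):
    """Ordered-subsets EM for tomography-like problems:
        x <- x * ( A_s^T ( b_s / (A_s x) ) ) / ( A_s^T 1 )
    cycled over row subsets s.  With n_subsets = 1 this is plain ML-EM."""
    n_rows, n_cols = A.shape
    x = np.ones(n_cols)
    subsets = [np.arange(s, n_rows, n_subsets) for s in range(n_subsets)]
    for _ in range(n_iters):
        for rows in subsets:
            As, bs = A[rows], b[rows]
            forward = As @ x + eps                  # predicted projections A_s x
            ratio = bs / forward
            x = x * (As.T @ ratio) / (As.T @ np.ones(len(rows)) + eps)
    return x

if __name__ == "__main__":
    # toy 'tomography' problem: random nonnegative system matrix and phantom
    rng = np.random.default_rng(3)
    A = rng.random((64, 16))
    x_true = rng.random(16)
    b = A @ x_true                                   # noise-free projections
    x_rec = os_em(A, b, n_iters=200, n_subsets=4)
    print("relative error:",
          np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
```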

  3. On the application of low-energy electrostatic accelerators

    International Nuclear Information System (INIS)

    Petkov, I.; Khristov, H.

    1982-01-01

    The scientific and applied problems which can be solved by small electrostatic accelerators have been reviewed. Problems connected with thermonuclear fusion, nuclear astrophysics, element and isotope analysis, and detector calibration have been considered, as well as applications of beams of accelerated microparticles of picogram and nanogram masses. Some particular research examples are presented, and the corresponding experimental setup is described. The problems pointed out are of considerable scientific and practical interest for the application of the 2 MV electrostatic accelerator which is being developed at INRNE, Sofia. (authors)

  4. Applications of industrial computed tomography at Los Alamos Scientific Laboratory

    International Nuclear Information System (INIS)

    Kruger, R.P.; Morris, R.A.; Wecksung, G.W.

    1980-01-01

    A research and development program was begun three years ago at the Los Alamos Scientific Laboratory (LASL) to study nonmedical applications of computed tomography. This program had several goals. The first goal was to develop the necessary reconstruction algorithms to accurately reconstruct cross sections of nonmedical industrial objects. The second goal was to be able to perform extensive tomographic simulations to determine the efficacy of tomographic reconstruction with a variety of hardware configurations. The final goal was to construct an inexpensive industrial prototype scanner with a high degree of design flexibility. The implementation of these program goals is described

  5. Brief review of topmost scientific results obtained in 2015 at the Joint Institute for Nuclear Research

    International Nuclear Information System (INIS)

    Sabaeva, E.V.; Krupko, E.I.

    2016-01-01

    This brief review presents the topmost scientific results obtained in 2015 at the Joint Institute for Nuclear Research in such fields as theoretical and experimental physics, radiation and radiobiological research, accelerators, information technology and computer physics. It also provides information about the publications by JINR staff members, awards given to JINR scientists, and activities carried out at the JINR University Centre in 2015.

  6. Brief review of topmost scientific results obtained in 2013 at the Joint Institute for Nuclear Research

    International Nuclear Information System (INIS)

    Sabaeva, E.V.; Kravchenko, E.I.

    2014-01-01

    This brief review presents the topmost scientific results obtained in 2013 at the Joint Institute for Nuclear Research in such areas as theoretical physics, experimental physics, radiation and radiobiological research, accelerators, information technology and computer physics. It also provides information on the number of publications by JINR staff members, awards given to JINR scientists, and activities carried out at the JINR University Centre in 2013.

  7. Brief review of topmost scientific results obtained in 2014 at the Joint Institute for Nuclear Research

    International Nuclear Information System (INIS)

    Bulatova, V.V.; Sabaeva, E.V.

    2015-01-01

    This brief review presents the topmost scientific results obtained in 2014 at the Joint Institute for Nuclear Research in such fields as theoretical and experimental physics, radiation and radiobiological research, accelerators, information technology and computer physics. It also provides information about the publications by JINR staff members, patents for inventions, awards given to JINR scientists, and activities carried out at the JINR University Centre in 2014.

  8. Scientific Services on the Cloud

    Science.gov (United States)

    Chapman, David; Joshi, Karuna P.; Yesha, Yelena; Halem, Milt; Yesha, Yaacov; Nguyen, Phuong

    Scientific computing was one of the first ever applications of parallel and distributed computation. To date, scientific applications remain some of the most compute-intensive, and have inspired the creation of petaflop compute infrastructure such as the Oak Ridge Jaguar and Los Alamos RoadRunner. Large dedicated hardware infrastructure has become both a blessing and a curse to the scientific community. Scientists are interested in cloud computing for much the same reason as businesses and other professionals. The hardware is provided, maintained, and administered by a third party. Software abstraction and virtualization provide reliability and fault tolerance. Graduated fees allow for multi-scale prototyping and execution. Cloud computing resources are only a few clicks away, and are by far the easiest high-performance distributed platform to gain access to. There may still be dedicated infrastructure for ultra-scale science, but the cloud can easily play a major part in the scientific computing initiative.

  9. Improving Quality of Service and Reducing Power Consumption with WAN accelerator in Cloud Computing Environments

    OpenAIRE

    Shin-ichi Kuribayashi

    2013-01-01

    The widespread use of cloud computing services is expected to degrade Quality of Service and to increase the power consumption of ICT devices, since the distance to a server becomes longer than before. Migration of virtual machines over a wide area can solve many problems such as load balancing and power saving in cloud computing environments. This paper proposes to dynamically apply a WAN accelerator within the network when a virtual machine is moved to a distant center, in order to preve...

  10. Quantum optical device accelerating dynamic programming

    OpenAIRE

    Grigoriev, D.; Kazakov, A.; Vakulenko, S.

    2005-01-01

    In this paper we discuss analogue computers based on quantum optical systems that accelerate dynamic programming for some computational problems. These computers, at least in principle, can be realized by actually existing devices. We estimate the acceleration over deterministic computers that can be obtained in this way for solving some NP-hard problems

  11. Accelerating Science Driven System Design With RAMP

    Energy Technology Data Exchange (ETDEWEB)

    Wawrzynek, John [Univ. of California, Berkeley, CA (United States)

    2015-05-01

    Researchers from UC Berkeley, in collaboration with the Lawrence Berkeley National Lab, are engaged in developing an Infrastructure for Synthesis with Integrated Simulation (ISIS). The ISIS Project was a cooperative effort for “application-driven hardware design” that engages application scientists in the early parts of the hardware design process for future generation supercomputing systems. This project served to foster development of computing systems that are better tuned to the application requirements of demanding scientific applications and result in more cost-effective and efficient HPC system designs. In order to overcome long conventional design-cycle times, we leveraged reconfigurable devices to aid in the design of high-efficiency systems, including conventional multi- and many-core systems. The resulting system emulation/prototyping environment, in conjunction with the appropriate intermediate abstractions, provided both a convenient user programming experience and retained flexibility, and thus efficiency, of a reconfigurable platform. We initially targeted the Berkeley RAMP system (Research Accelerator for Multiple Processors) as that hardware emulation environment to facilitate and ultimately accelerate the iterative process of science-driven system design. Our goal was to develop and demonstrate a design methodology for domain-optimized computer system architectures. The tangible outcome is a methodology and tools for rapid prototyping and design-space exploration, leading to highly optimized and efficient HPC systems.

  12. Micro-computer based control system and its software for carrying out the sequential acceleration on SMCAMS

    International Nuclear Information System (INIS)

    Li Deming

    2001-01-01

    A microcomputer-based control system and its software for carrying out sequential acceleration on SMCAMS are described. Also described are the establishment of the 14C particle-measuring device and the improvement of the original power supply system

  13. Collective ion acceleration

    International Nuclear Information System (INIS)

    Godfrey, B.B.; Faehl, R.J.; Newberger, B.S.; Shanahan, W.R.; Thode, L.E.

    1977-01-01

    Progress achieved in the understanding and development of collective ion acceleration is presented. Extensive analytic and computational studies of slow cyclotron wave growth on an electron beam in a helix amplifier were performed. Research included precise determination of linear coupling between beam and helix, suppression of undesired transients and end effects, and two-dimensional simulations of wave growth in physically realizable systems. Electrostatic well depths produced exceed requirements for the Autoresonant Ion Acceleration feasibility experiment. Acceleration of test ions to modest energies in the troughs of such waves was also demonstrated. Smaller efforts were devoted to alternative acceleration mechanisms. Langmuir wave phase velocity in Converging Guide Acceleration was calculated as a function of the ratio of electron beam current to space-charge limiting current. A new collective acceleration approach, in which cyclotron wave phase velocity is varied by modulation of electron beam voltage, is proposed. Acceleration by traveling Virtual Cathode or Localized Pinch was considered, but appears less promising. In support of this research, fundamental investigations of beam propagation in evacuated waveguides, of nonneutral beam linear eigenmodes, and of beam stability were carried out. Several computer programs were developed or enhanced. Plans for future work are discussed

  14. Investigation of Storage Options for Scientific Computing on Grid and Cloud Facilities

    International Nuclear Information System (INIS)

    Garzoglio, Gabriele

    2012-01-01

    In recent years, several new storage technologies, such as Lustre, Hadoop, OrangeFS, and BlueArc, have emerged. While several groups have run benchmarks to characterize them under a variety of configurations, more work is needed to evaluate these technologies for the use cases of scientific computing on Grid clusters and Cloud facilities. This paper discusses our evaluation of the technologies as deployed on a test bed at FermiCloud, one of the Fermilab infrastructure-as-a-service Cloud facilities. The test bed consists of 4 server-class nodes with 40 TB of disk space and up to 50 virtual machine clients, some running on the storage server nodes themselves. With this configuration, the evaluation compares the performance of some of these technologies when deployed on virtual machines and on “bare metal” nodes. In addition to running standard benchmarks such as IOZone to check the sanity of our installation, we have run I/O intensive tests using physics-analysis applications. This paper presents how the storage solutions perform in a variety of realistic use cases of scientific computing. One interesting difference among the storage systems tested is found in a decrease in total read throughput with increasing number of client processes, which occurs in some implementations but not others.

  15. Sao Paulo pelletron accelerator: fortieth anniversary

    International Nuclear Information System (INIS)

    Pereira, Dirceu

    2012-01-01

    Full text: This year the 8 MV Sao Paulo Pelletron tandem accelerator completes 40 years. This electrostatic accelerator was installed at the University of Sao Paulo in 1972, and it was the first of this model constructed by the National Electrostatics Corporation, with several innovations, particularly with respect to the new concept of the accelerator tube and the charging system. The talk will discuss the performance of the accelerator over all these years and the main scientific results. (author)

  16. Sao Paulo pelletron accelerator: fortieth anniversary

    Energy Technology Data Exchange (ETDEWEB)

    Pereira, Dirceu [Universidade de Sao Paulo (IF/USP), SP (Brazil). Inst. de Fisica

    2012-07-01

    Full text: This year the 8 MV Sao Paulo Pelletron tandem accelerator completes 40 years. This electrostatic accelerator was installed at the University of Sao Paulo in 1972, and it was the first of this model constructed by the National Electrostatics Corporation, with several innovations, particularly with respect to the new concept of the accelerator tube and the charging system. The talk will discuss the performance of the accelerator over all these years and the main scientific results. (author)

  17. Acceleration of the nodal program FERM

    International Nuclear Information System (INIS)

    Nakata, H.

    1985-01-01

    Acceleration of the nodal program FERM was attempted with three acceleration schemes. The calculations showed the best acceleration with the Chebyshev method, where the savings in computing time were of the order of 50%. Acceleration with the Asymptotic Source Extrapolation Method and with the Coarse-Mesh Rebalancing Method did not result in any improvement in the global computational time, although a reduction in the number of outer iterations was observed. (Author)
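
    FERM itself is not available in the record. As a hedged sketch of the mechanism behind the reported savings of about 50%, the code below applies Chebyshev (semi-iterative) extrapolation to a generic linear fixed-point iteration x <- Gx + c with a known bound rho on the spectral radius, and compares it with the unaccelerated iteration; the symmetric random test matrix and rho = 0.9 are illustrative assumptions, not FERM's operator.

```python
import numpy as np

def chebyshev_accelerate(G, c, x0, rho, n_iters):
    """Chebyshev semi-iterative acceleration of the fixed-point iteration
    x <- G x + c, assuming the eigenvalues of G lie in [-rho, rho].
    Recurrence (Golub/Varga form):
        x_1     = G x_0 + c
        x_{k+1} = w_{k+1} (G x_k + c - x_{k-1}) + x_{k-1},
    with w_2 = 2/(2 - rho^2) and w_{k+1} = 1/(1 - rho^2 w_k / 4)."""
    x_prev, x = x0, G @ x0 + c
    w = 2.0 / (2.0 - rho**2)
    for _ in range(n_iters - 1):
        x_new = w * (G @ x + c - x_prev) + x_prev
        x_prev, x = x, x_new
        w = 1.0 / (1.0 - 0.25 * rho**2 * w)
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    n = 50
    M = rng.standard_normal((n, n))
    S = (M + M.T) / 2.0                                   # symmetric test matrix
    G = 0.9 * S / np.abs(np.linalg.eigvalsh(S)).max()     # spectral radius 0.9
    x_star = rng.standard_normal(n)
    c = x_star - G @ x_star                               # so that x* = G x* + c
    x0 = np.zeros(n)
    plain = x0.copy()
    for _ in range(60):                                    # unaccelerated iteration
        plain = G @ plain + c
    cheb = chebyshev_accelerate(G, c, x0, rho=0.9, n_iters=60)
    print("plain error:    ", np.linalg.norm(plain - x_star))
    print("chebyshev error:", np.linalg.norm(cheb - x_star))
```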

  18. The radar signature of revolution objects in scientific computing

    International Nuclear Information System (INIS)

    Bonnemason, P.; Le Martret, R.; Scheurer, B.; Stupfel, B.

    1990-12-01

    This work is motivated by the study of stealthy (or discreet) bodies of revolution with respect to radar. Efficient algorithms, specific numerical methods and two original industrial software packages (SHF 89 and SHF C) have been developed. These are reliable tools for intensive scientific computing. In particular, they have enabled the precise numerical modeling of complex objects of very general shapes in the high-frequency regime, and a thorough understanding of the physics of the problems involved. The purpose of this note is a general description of the work and its context, illustrated by examples of numerical applications (presented in Appendix 4). The technical aspects are detailed in reports and publications (a list is attached to this note)

  19. Advanced scientific computational methods and their applications to nuclear technologies. (3) Introduction of continuum simulation methods and their applications (3)

    International Nuclear Information System (INIS)

    Satake, Shin-ichi; Kunugi, Tomoaki

    2006-01-01

    Scientific computational methods have advanced remarkably with the progress of nuclear development. They have played the role of a weft connecting the various realms of nuclear engineering, and an introductory course on advanced scientific computational methods and their applications to nuclear technologies was therefore prepared in serial form. This is the third issue, presenting the introduction of continuum simulation methods and their applications. Spectral methods and multi-interface calculation methods in fluid dynamics are reviewed. (T. Tanaka)
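
    The lecture material itself is not reproduced in the record. As a hedged one-screen illustration of what "spectral method" means in this context, the sketch below differentiates a smooth periodic field by multiplying its FFT coefficients by ik and transforming back, which converges faster than any fixed-order finite-difference formula for smooth data; the sample field and grid size are illustrative assumptions.

```python
import numpy as np

def spectral_derivative(u, length):
    """Derivative of a periodic, uniformly sampled field using the FFT:
    transform, multiply each mode by i*k, transform back.  For smooth periodic
    data the error decays faster than any power of the grid spacing."""
    n = u.size
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=length / n)    # angular wavenumbers
    return np.real(np.fft.ifft(1j * k * np.fft.fft(u)))

if __name__ == "__main__":
    n, length = 64, 2.0 * np.pi
    x = np.linspace(0.0, length, n, endpoint=False)
    u = np.exp(np.sin(x))                                 # smooth periodic field
    du_exact = np.cos(x) * np.exp(np.sin(x))
    du_spec = spectral_derivative(u, length)
    print("max error with only 64 points:", np.abs(du_spec - du_exact).max())
```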

  20. Instrumentation for Scientific Computing in Neural Networks, Information Science, Artificial Intelligence, and Applied Mathematics.

    Science.gov (United States)

    1987-10-01

    Instrumentation for scientific computing in neural networks, information science, artificial intelligence, and applied mathematics. An instrumentation grant to purchase equipment for support of research in neural networks, information science, artificial intelligence, and applied mathematics. Contract AFOSR 86-0282. Principal Investigator: Stephen

  1. Computer simulation, rhetoric, and the scientific imagination how virtual evidence shapes science in the making and in the news

    CERN Document Server

    Roundtree, Aimee Kendall

    2013-01-01

    Computer simulations help advance climatology, astrophysics, and other scientific disciplines. They are also at the crux of several high-profile cases of science in the news. How do simulation scientists, with little or no direct observations, make decisions about what to represent? What is the nature of simulated evidence, and how do we evaluate its strength? Aimee Kendall Roundtree suggests answers in Computer Simulation, Rhetoric, and the Scientific Imagination. She interprets simulations in the sciences by uncovering the argumentative strategies that underpin the production and disseminati

  2. Modeling, Simulation and Analysis of Complex Networked Systems: A Program Plan for DOE Office of Advanced Scientific Computing Research

    Energy Technology Data Exchange (ETDEWEB)

    Brown, D L

    2009-05-01

    Many complex systems of importance to the U.S. Department of Energy consist of networks of discrete components. Examples are cyber networks, such as the internet and local area networks over which nearly all DOE scientific, technical and administrative data must travel, the electric power grid, social networks whose behavior can drive energy demand, and biological networks such as genetic regulatory networks and metabolic networks. In spite of the importance of these complex networked systems to all aspects of DOE's operations, the scientific basis for understanding these systems lags seriously behind the strong foundations that exist for the 'physically-based' systems usually associated with DOE research programs that focus on such areas as climate modeling, fusion energy, high-energy and nuclear physics, nano-science, combustion, and astrophysics. DOE has a clear opportunity to develop a similarly strong scientific basis for understanding the structure and dynamics of networked systems by supporting a strong basic research program in this area. Such knowledge will provide a broad basis for, e.g., understanding and quantifying the efficacy of new security approaches for computer networks, improving the design of computer or communication networks to be more robust against failures or attacks, detecting potential catastrophic failure on the power grid and preventing or mitigating its effects, understanding how populations will respond to the availability of new energy sources or changes in energy policy, and detecting subtle vulnerabilities in large software systems to intentional attack. This white paper outlines plans for an aggressive new research program designed to accelerate the advancement of the scientific basis for complex networked systems of importance to the DOE. It will focus principally on four research areas: (1) understanding network structure, (2) understanding network dynamics, (3) predictive modeling and simulation for complex

  3. Modeling, Simulation and Analysis of Complex Networked Systems: A Program Plan for DOE Office of Advanced Scientific Computing Research

    International Nuclear Information System (INIS)

    Brown, D.L.

    2009-01-01

    Many complex systems of importance to the U.S. Department of Energy consist of networks of discrete components. Examples are cyber networks, such as the internet and local area networks over which nearly all DOE scientific, technical and administrative data must travel, the electric power grid, social networks whose behavior can drive energy demand, and biological networks such as genetic regulatory networks and metabolic networks. In spite of the importance of these complex networked systems to all aspects of DOE's operations, the scientific basis for understanding these systems lags seriously behind the strong foundations that exist for the 'physically-based' systems usually associated with DOE research programs that focus on such areas as climate modeling, fusion energy, high-energy and nuclear physics, nano-science, combustion, and astrophysics. DOE has a clear opportunity to develop a similarly strong scientific basis for understanding the structure and dynamics of networked systems by supporting a strong basic research program in this area. Such knowledge will provide a broad basis for, e.g., understanding and quantifying the efficacy of new security approaches for computer networks, improving the design of computer or communication networks to be more robust against failures or attacks, detecting potential catastrophic failure on the power grid and preventing or mitigating its effects, understanding how populations will respond to the availability of new energy sources or changes in energy policy, and detecting subtle vulnerabilities in large software systems to intentional attack. This white paper outlines plans for an aggressive new research program designed to accelerate the advancement of the scientific basis for complex networked systems of importance to the DOE. It will focus principally on four research areas: (1) understanding network structure, (2) understanding network dynamics, (3) predictive modeling and simulation for complex networked systems

  4. Studying Scientific Discovery by Computer Simulation.

    Science.gov (United States)

    1983-03-30

    Mendel's laws of inheritance, the law of Gay-Lussac for gaseous reactions, the law of Dulong and Petit, the derivation of atomic weights by Avogadro... Keywords: scientific discovery, intrinsic properties, physical laws, extensive terms, data-driven heuristics, intensive terms, theory-driven heuristics, conservation laws. Scientific discovery

  5. Scientific Grand Challenges: Challenges in Climate Change Science and the Role of Computing at the Extreme Scale

    Energy Technology Data Exchange (ETDEWEB)

    Khaleel, Mohammad A.; Johnson, Gary M.; Washington, Warren M.

    2009-07-02

    The U.S. Department of Energy (DOE) Office of Biological and Environmental Research (BER) in partnership with the Office of Advanced Scientific Computing Research (ASCR) held a workshop on the challenges in climate change science and the role of computing at the extreme scale, November 6-7, 2008, in Bethesda, Maryland. At the workshop, participants identified the scientific challenges facing the field of climate science and outlined the research directions of highest priority that should be pursued to meet these challenges. Representatives from the national and international climate change research community as well as representatives from the high-performance computing community attended the workshop. This group represented a broad mix of expertise. Of the 99 participants, 6 were from international institutions. Before the workshop, each of the four panels prepared a white paper, which provided the starting place for the workshop discussions. These four panels of workshop attendees devoted their efforts to the following themes: Model Development and Integrated Assessment; Algorithms and Computational Environment; Decadal Predictability and Prediction; Data, Visualization, and Computing Productivity. The recommendations of the panels are summarized in the body of this report.

  6. Acceleration of the FERM nodal program

    International Nuclear Information System (INIS)

    Nakata, H.

    1985-01-01

    Three acceleration methods were tested in an attempt to reduce the number of outer iterations in the FERM nodal program. The results obtained indicated that the Chebychev polynomial acceleration method with variable degree yields a saving of 50% in computer time. In contrast, acceleration by source asymptotic extrapolation or by zonal rebalance did not reduce the overall computer time, although some acceleration of the outer iterations was verified. (M.C.K.) [pt

  7. Report on the present status of scientific and engineering accelerators in Japan (I)

    CERN Document Server

    2003-01-01

    For the purpose of ascertaining the present status of possible joint research using accelerators in Japan, the Specialist Committee of Quantum Beam conducted a questionnaire among 69 organizations, of which 54 answered. These organizations have 97 accelerator facilities with 108 machines for research and educational purposes and 7 for medical use. Of the 97 facilities, 86 are open for joint and cooperative research. Based on the questionnaire results, the following discussions are made: definition and classification of quantum beams; positioning of accelerators for research purposes among all machines in Japan (increase of accelerator usage, economic scale and social contribution); usage forms of accelerators for research purposes (types of accelerators; types of secondary beams such as neutrons, synchrotron radiation, positrons, radioisotope beams, muons and neutrinos; high-current accelerators for fusion; measurement and analyses; new elements; PET and gamma rays); and the questionnaire results of the accelerators for rese...

  8. Software support for students engaging in scientific activity and scientific controversy

    Science.gov (United States)

    Cavalli-Sforza, Violetta; Weiner, Arlene W.; Lesgold, Alan M.

    Computer environments could support students in engaging in cognitive activities that are essential to scientific practice and to the understanding of the nature of scientific knowledge, but that are difficult to manage in science classrooms. The authors describe a design for a computer-based environment to assist students in conducting dialectical activities of constructing, comparing, and evaluating arguments for competing scientific theories. Their choice of activities and their design respond to educators' and theorists' criticisms of current science curricula. They give detailed specifications of portions of the environment.

  9. Proceedings of CAS - CERN Accelerator School: Course on Superconductivity for Accelerators

    Energy Technology Data Exchange (ETDEWEB)

    Bailey, R [European Organization for Nuclear Research, Geneva (Switzerland)]

    2014-07-01

    These proceedings collate lectures given at the twenty-seventh specialized course organised by the CERN Accelerator School (CAS). The course was held at the Ettore Majorana Foundation and Centre for Scientific Culture (EMFCSC) in Erice, Italy, from 24 April to 4 May 2013. Following recapitulation lectures on basic accelerator physics and superconductivity, the course covered topics related to the design, production and operation of superconducting RF systems and superconducting magnets for accelerators. The participants pursued one of six case studies in order to get ’hands-on’ experience of the issues connected with the design of superconducting systems. A series of topical seminars completed the programme.

  10. Advanced Scientific Computing Research Network Requirements: ASCR Network Requirements Review Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Bacon, Charles [Argonne National Lab. (ANL), Argonne, IL (United States); Bell, Greg [ESnet, Berkeley, CA (United States); Canon, Shane [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Dart, Eli [ESnet, Berkeley, CA (United States); Dattoria, Vince [Dept. of Energy (DOE), Washington DC (United States). Office of Science. Advanced Scientific Computing Research (ASCR); Goodwin, Dave [Dept. of Energy (DOE), Washington DC (United States). Office of Science. Advanced Scientific Computing Research (ASCR); Lee, Jason [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Hicks, Susan [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Holohan, Ed [Argonne National Lab. (ANL), Argonne, IL (United States); Klasky, Scott [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Lauzon, Carolyn [Dept. of Energy (DOE), Washington DC (United States). Office of Science. Advanced Scientific Computing Research (ASCR); Rogers, Jim [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Shipman, Galen [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Skinner, David [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Tierney, Brian [ESnet, Berkeley, CA (United States)

    2013-03-08

    The Energy Sciences Network (ESnet) is the primary provider of network connectivity for the U.S. Department of Energy (DOE) Office of Science (SC), the single largest supporter of basic research in the physical sciences in the United States. In support of SC programs, ESnet regularly updates and refreshes its understanding of the networking requirements of the instruments, facilities, scientists, and science programs that it serves. This focus has helped ESnet to be a highly successful enabler of scientific discovery for over 25 years. In October 2012, ESnet and the Office of Advanced Scientific Computing Research (ASCR) of the DOE SC organized a review to characterize the networking requirements of the programs funded by the ASCR program office. The requirements identified at the review are summarized in the Findings section, and are described in more detail in the body of the report.

  11. 2014 Accelerators meeting, Grenoble

    International Nuclear Information System (INIS)

    Lucotte, Arnaud; Lamy, Thierry; De Conto, Jean-Marie; Fontaine, Alain; Revol, Jean-Luc; Nadolski, Laurent S.; Kazamias, Sophie; Vretenar, Maurizio; Ferrando, Philippe; Laune, Bernard; Vedrine, Pierre

    2014-10-01

    The Accelerators meeting is organised every two years by the Accelerators division of the French Society of Physics (SFP). It brings together about 50 participants during a one-day meeting. The morning sessions are devoted to scientific presentations while the afternoon is dedicated to technical visits of facilities. This document brings together the available presentations (slides): 1 - Presentation of the Laboratory of subatomic physics and cosmology - LPSC-Grenoble (Lucotte, Arnaud; Lamy, Thierry); 2 - Presentation of the Accelerators division of the French Society of Physics (Fontaine, Alain; Revol, Jean-Luc); 3 - Presentation of Grenoble's master diplomas in Accelerator physics (Nadolski, Laurent S.); 4 - Presentation of Paris' master diplomas in big instruments (Kazamias, Sophie); 5 - Particle accelerators and European Union's projects (Vretenar, Maurizio); 6 - French research infrastructures (Ferrando, Philippe); 7 - Coordination of accelerators activity in France (Laune, Bernard; Vedrine, Pierre)

  12. Computer application in scientific investigations

    International Nuclear Information System (INIS)

    Govorun, N.N.

    1981-01-01

    A short review of computer development, applications and software at JINR over the last 15 years is presented. The main trends of studies on computer application in experimental and theoretical investigations are enumerated: software for computers and computer systems, software for data processing systems, the design of automatic and automated systems for measuring track-detector images, the development of techniques for carrying out experiments on-line with a computer, and packages of applied computer codes and specialized systems. The on-line technique developed is successfully used in investigations of nuclear processes at relativistic energies. A new trend is the development of television methods of data output and of its recording by computer [ru

  13. CEBAF Accelerator Achievements

    International Nuclear Information System (INIS)

    Chao, Y C; Drury, M; Hovater, C; Hutton, A; Krafft, G A; Poelker, M; Reece, C; Tiefenback, M

    2011-01-01

    In the past decade, nuclear physics users of Jefferson Lab's Continuous Electron Beam Accelerator Facility (CEBAF) have benefited from accelerator physics advances and machine improvements. As of early 2011, CEBAF operates routinely at 6 GeV, with a 12 GeV upgrade underway. This article reports highlights of CEBAF's scientific and technological evolution in the areas of cryomodule refurbishment, RF control, polarized source development, beam transport for parity experiments, magnets and hysteresis handling, beam breakup, and helium refrigerator operational optimization.

  14. Electromagnetic modeling in accelerator designs

    International Nuclear Information System (INIS)

    Cooper, R.K.; Chan, K.C.D.

    1990-01-01

    Through the years, electromagnetic modeling using computers has proved to be a cost-effective tool for accelerator designs. Traditionally, electromagnetic modeling of accelerators has been limited to resonator and magnet designs in two dimensions. In recent years, with the availability of powerful computers, electromagnetic modeling of accelerators has advanced significantly. Through the above conferences, it is apparent that breakthroughs have been made during the last decade in two important areas: three-dimensional modeling and time-domain simulation. Success in both these areas has been made possible by the increasing size and speed of computers. In this paper, the advances in these two areas will be described.

  15. Health Psychology Bulletin : Improving Publication Practices to Accelerate Scientific Progress

    NARCIS (Netherlands)

    Peters, Gjalt-jorn Ygram; Kok, Gerjo; Crutzen, Rik; Sanderman, Robbert

    2017-01-01

    The instrument of scientific publishing, originally a necessary tool to enable development of a global science, has evolved relatively little in response to technological advances. Current scientific publishing practices incentivize a number of harmful approaches to research. Health Psychology

  16. Requirements for high performance computing for lattice QCD. Report of the ECFA working panel

    International Nuclear Information System (INIS)

    Jegerlehner, F.; Kenway, R.D.; Martinelli, G.; Michael, C.; Pene, O.; Petersson, B.; Petronzio, R.; Sachrajda, C.T.; Schilling, K.

    2000-01-01

    This report, prepared at the request of the European Committee for Future Accelerators (ECFA), contains an assessment of the High Performance Computing resources which will be required in coming years by European physicists working in Lattice Field Theory and a review of the scientific opportunities which these resources would open. (orig.)

  17. 8th International Workshop on Parallel Tools for High Performance Computing

    CERN Document Server

    Gracia, José; Knüpfer, Andreas; Resch, Michael; Nagel, Wolfgang

    2015-01-01

    Numerical simulation and modelling using High Performance Computing has evolved into an established technique in academic and industrial research. At the same time, the High Performance Computing infrastructure is becoming ever more complex. For instance, most of the current top systems around the world use thousands of nodes in which classical CPUs are combined with accelerator cards in order to enhance their compute power and energy efficiency. This complexity can only be mastered with adequate development and optimization tools. Key topics addressed by these tools include parallelization on heterogeneous systems, performance optimization for CPUs and accelerators, debugging of increasingly complex scientific applications, and optimization of energy usage in the spirit of green IT. This book represents the proceedings of the 8th International Parallel Tools Workshop, held October 1-2, 2014, in Stuttgart, Germany, which is a forum to discuss the latest advancements in parallel tools.

  18. N286.7-99, A Canadian standard specifying software quality management system requirements for analytical, scientific, and design computer programs and its implementation at AECL

    International Nuclear Information System (INIS)

    Abel, R.

    2000-01-01

    Analytical, scientific, and design computer programs (referred to in this paper as 'scientific computer programs') are developed for use in a large number of ways by the user-engineer to support and prove engineering calculations and assumptions. These computer programs are subject to frequent modifications inherent in their application and are often used for critical calculations and analysis relative to safety and functionality of equipment and systems. N286.7-99(4) was developed to establish appropriate quality management system requirements to deal with the development, modification, and application of scientific computer programs. N286.7-99 provides particular guidance regarding the treatment of legacy codes

  19. An Analysis on the Effect of Computer Self-Efficacy over Scientific Research Self-Efficacy and Information Literacy Self-Efficacy

    Science.gov (United States)

    Tuncer, Murat

    2013-01-01

    Present research investigates reciprocal relations amidst computer self-efficacy, scientific research and information literacy self-efficacy. Research findings have demonstrated that according to standardized regression coefficients, computer self-efficacy has a positive effect on information literacy self-efficacy. Likewise it has been detected…

  20. TU-FG-201-04: Computer Vision in Autonomous Quality Assurance of Linear Accelerators

    Energy Technology Data Exchange (ETDEWEB)

    Yu, H; Jenkins, C; Yu, S; Yang, Y; Xing, L [Stanford University, Stanford, CA (United States)

    2016-06-15

    Purpose: Routine quality assurance (QA) of linear accelerators represents a critical and costly element of a radiation oncology center. Recently, a system was developed to autonomously perform routine quality assurance on linear accelerators. The purpose of this work is to extend this system and contribute computer vision techniques for obtaining quantitative measurements for a monthly multi-leaf collimator (MLC) QA test specified by TG-142, namely leaf position accuracy, and demonstrate extensibility for additional routines. Methods: Grayscale images of a picket fence delivery on a radioluminescent phosphor-coated phantom are captured using a CMOS camera. Collected images are processed to correct for camera distortions, rotation and alignment, reduce noise, and enhance contrast. The location of each MLC leaf is determined through logistic fitting and a priori modeling based on knowledge of the delivered beams. Using the data collected and the criteria from TG-142, a decision is made on whether or not the leaf position accuracy of the MLC passes or fails. Results: The locations of all MLC leaf edges are found for three different picket fence images in a picket fence routine to a precision of 0.1 mm (1 pixel). The program to correct for image alignment and determine leaf positions requires a runtime of 21–25 seconds for a single picket, and 44–46 seconds for a group of three pickets, on a standard workstation CPU (2.2 GHz Intel Core i7). Conclusion: MLC leaf edges were successfully found using techniques in computer vision. With the addition of computer vision techniques to the previously described autonomous QA system, the system is able to quickly perform complete QA routines with minimal human contribution.
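
    The record above locates MLC leaf edges by logistic fitting of intensity profiles. As a rough illustration of that single step, the following Python sketch fits a sigmoid edge model to a synthetic one-dimensional profile to recover a sub-pixel edge position; the profile, noise level and parameter values are invented for demonstration and are not taken from the cited system.

    import numpy as np
    from scipy.optimize import curve_fit

    def logistic(x, x0, k, lo, hi):
        # sigmoid edge model: intensity step from lo to hi centred at x0
        return lo + (hi - lo) / (1.0 + np.exp(-k * (x - x0)))

    # synthetic profile across one leaf edge (assumed data; a real routine
    # would extract this from the corrected camera image)
    x = np.arange(50, dtype=float)
    true_edge = 23.4
    profile = logistic(x, true_edge, 1.5, 20.0, 200.0)
    profile += np.random.default_rng(0).normal(0.0, 2.0, x.size)   # sensor noise

    # initial guess: steepest gradient for x0, observed min/max for the plateaus
    p0 = [x[np.argmax(np.gradient(profile))], 1.0, profile.min(), profile.max()]
    popt, _ = curve_fit(logistic, x, profile, p0=p0)
    print("fitted edge position: %.2f px (truth %.2f px)" % (popt[0], true_edge))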

  1. Higher-order ice-sheet modelling accelerated by multigrid on graphics cards

    Science.gov (United States)

    Brædstrup, Christian; Egholm, David

    2013-04-01

    Higher-order ice flow modelling is a very computer-intensive process owing primarily to the nonlinear influence of the horizontal stress coupling. When applied for simulating long-term glacial landscape evolution, the ice-sheet models must consider very long time series, while both high temporal and spatial resolution is needed to resolve small effects. The use of higher-order and full Stokes models has therefore been very limited in this field. However, recent advances in graphics card (GPU) technology for high performance computing have proven extremely efficient in accelerating many large-scale scientific computations. The general purpose GPU (GPGPU) technology is cheap, has a low power consumption and fits into a normal desktop computer. It could therefore provide a powerful tool for many glaciologists working on ice flow models. Our current research focuses on utilising the GPU as a tool in ice-sheet and glacier modelling. To this end we have implemented the Integrated Second-Order Shallow Ice Approximation (iSOSIA) equations on the device using the finite difference method. To accelerate the computations, the GPU solver uses a non-linear Red-Black Gauss-Seidel iterator coupled with a Full Approximation Scheme (FAS) multigrid setup to further aid convergence. The GPU finite difference implementation provides the inherent parallelization that scales from hundreds to several thousands of cores on newer cards. We demonstrate the efficiency of the GPU multigrid solver using benchmark experiments.
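
    As a minimal sketch of the red-black Gauss-Seidel colouring mentioned above (the property that makes the smoother parallelizable on a GPU), the following Python/NumPy fragment applies the scheme to a simple 2-D Poisson problem. It is not the iSOSIA solver; the grid size, right-hand side and boundary conditions are arbitrary assumptions, and no multigrid hierarchy is included.

    import numpy as np

    def red_black_gauss_seidel(u, f, h, sweeps=200):
        # in-place red-black Gauss-Seidel for -Laplace(u) = f with Dirichlet boundary
        n = u.shape[0]
        for _ in range(sweeps):
            for colour in (0, 1):              # all red points, then all black points
                for i in range(1, n - 1):
                    j0 = 1 if (i + 1) % 2 == colour else 2
                    j = np.arange(j0, n - 1, 2)
                    u[i, j] = 0.25 * (u[i - 1, j] + u[i + 1, j] +
                                      u[i, j - 1] + u[i, j + 1] + h * h * f[i, j])
        return u

    n = 65
    h = 1.0 / (n - 1)
    u = np.zeros((n, n))                       # homogeneous Dirichlet boundary values
    f = np.ones((n, n))                        # constant source term (assumed)
    u = red_black_gauss_seidel(u, f, h)
    print("max of smoothed solution: %.4f" % u.max())

    Because every point of one colour depends only on points of the other colour, all updates within a colour can be issued simultaneously, which is the property a GPU implementation exploits.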

  2. DB90: A Fortran Callable Relational Database Routine for Scientific and Engineering Computer Programs

    Science.gov (United States)

    Wrenn, Gregory A.

    2005-01-01

    This report describes a database routine called DB90 which is intended for use with scientific and engineering computer programs. The software is written in the Fortran 90/95 programming language standard with file input and output routines written in the C programming language. These routines should be completely portable to any computing platform and operating system that has Fortran 90/95 and C compilers. DB90 allows a program to supply relation names and up to 5 integer key values to uniquely identify each record of each relation. This permits the user to select records or retrieve data in any desired order.

  3. Forward and adjoint spectral-element simulations of seismic wave propagation using hardware accelerators

    Science.gov (United States)

    Peter, Daniel; Videau, Brice; Pouget, Kevin; Komatitsch, Dimitri

    2015-04-01

    Improving the resolution of tomographic images is crucial to answer important questions on the nature of Earth's subsurface structure and internal processes. Seismic tomography is the most prominent approach where seismic signals from ground-motion records are used to infer physical properties of internal structures such as compressional- and shear-wave speeds, anisotropy and attenuation. Recent advances in regional- and global-scale seismic inversions move towards full-waveform inversions which require accurate simulations of seismic wave propagation in complex 3D media, providing access to the full 3D seismic wavefields. However, these numerical simulations are computationally very expensive and need high-performance computing (HPC) facilities for further improving the current state of knowledge. During recent years, many-core architectures such as graphics processing units (GPUs) have been added to available large HPC systems. Such GPU-accelerated computing together with advances in multi-core central processing units (CPUs) can greatly accelerate scientific applications. There are mainly two possible choices of language support for GPU cards, the CUDA programming environment and OpenCL language standard. CUDA software development targets NVIDIA graphic cards while OpenCL was adopted mainly by AMD graphic cards. In order to employ such hardware accelerators for seismic wave propagation simulations, we incorporated a code generation tool BOAST into an existing spectral-element code package SPECFEM3D_GLOBE. This allows us to use meta-programming of computational kernels and generate optimized source code for both CUDA and OpenCL languages, running simulations on either CUDA or OpenCL hardware accelerators. We show here applications of forward and adjoint seismic wave propagation on CUDA/OpenCL GPUs, validating results and comparing performances for different simulations and hardware usages.

  4. On the Performance of the Python Programming Language for Serial and Parallel Scientific Computations

    Directory of Open Access Journals (Sweden)

    Xing Cai

    2005-01-01

    This article addresses the performance of scientific applications that use the Python programming language. First, we investigate several techniques for improving the computational efficiency of serial Python codes. Then, we discuss the basic programming techniques in Python for parallelizing serial scientific applications. It is shown that an efficient implementation of the array-related operations is essential for achieving good parallel performance, as for the serial case. Once the array-related operations are efficiently implemented, probably using a mixed-language implementation, good serial and parallel performance become achievable. This is confirmed by a set of numerical experiments. Python is also shown to be well suited for writing high-level parallel programs.
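
    The central point of the record, that array-level (vectorized) operations dominate Python performance, can be illustrated with a short, self-contained comparison; the five-point stencil below is an arbitrary example and not taken from the article.

    import time
    import numpy as np

    def smooth_loop(a):
        # element-by-element Python loops: slow
        out = a.copy()
        for i in range(1, a.shape[0] - 1):
            for j in range(1, a.shape[1] - 1):
                out[i, j] = 0.25 * (a[i-1, j] + a[i+1, j] + a[i, j-1] + a[i, j+1])
        return out

    def smooth_vectorized(a):
        # the same stencil expressed as whole-array slice operations: fast
        out = a.copy()
        out[1:-1, 1:-1] = 0.25 * (a[:-2, 1:-1] + a[2:, 1:-1] +
                                  a[1:-1, :-2] + a[1:-1, 2:])
        return out

    a = np.random.rand(500, 500)
    t0 = time.perf_counter(); r1 = smooth_loop(a);       t1 = time.perf_counter()
    r2 = smooth_vectorized(a);                           t2 = time.perf_counter()
    print("loops: %.3f s, vectorized: %.4f s, results agree: %s"
          % (t1 - t0, t2 - t1, np.allclose(r1, r2)))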

  5. Accelerating the Pace of Protein Functional Annotation With Intel Xeon Phi Coprocessors.

    Science.gov (United States)

    Feinstein, Wei P; Moreno, Juana; Jarrell, Mark; Brylinski, Michal

    2015-06-01

    Intel Xeon Phi is a new addition to the family of powerful parallel accelerators. The range of its potential applications in computationally driven research is broad; however, at present, the repository of scientific codes is still relatively limited. In this study, we describe the development and benchmarking of a parallel version of eFindSite, a structural bioinformatics algorithm for the prediction of ligand-binding sites in proteins. Implemented for the Intel Xeon Phi platform, the parallelization of the structure alignment portion of eFindSite using pragma-based OpenMP brings about the desired performance improvements, which scale well with the number of computing cores. Compared to a serial version, the parallel code runs 11.8 and 10.1 times faster on the CPU and the coprocessor, respectively; when both resources are utilized simultaneously, the speedup is 17.6. For example, ligand-binding predictions for 501 benchmarking proteins are completed in 2.1 hours on a single Stampede node equipped with the Intel Xeon Phi card compared to 3.1 hours without the accelerator and 36.8 hours required by a serial version. In addition to the satisfactory parallel performance, porting existing scientific codes to the Intel Xeon Phi architecture is relatively straightforward with a short development time due to the support of common parallel programming models by the coprocessor. The parallel version of eFindSite is freely available to the academic community at www.brylinski.org/efindsite.
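
    The quoted speedups can be reproduced from the quoted wall-clock times; the small differences come from rounding of the published hours.

    # timings quoted in the abstract (hours)
    serial, cpu_parallel, cpu_plus_phi = 36.8, 3.1, 2.1
    print("CPU speedup:     %.1fx" % (serial / cpu_parallel))   # ~11.9x (quoted 11.8x)
    print("CPU + Xeon Phi:  %.1fx" % (serial / cpu_plus_phi))   # ~17.5x (quoted 17.6x)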

  6. SCEE 2008 book of abstracts. The 7. international conference on scientific computing in electrical engineering (SCEE 2008)

    Energy Technology Data Exchange (ETDEWEB)

    Roos, J.; Costa, L.R.J. (ed.)

    2008-09-15

    SCEE is an international conference series dedicated to Scientific Computing in Electrical Engineering. The 7th International Conference on Scientific Computing in Electrical Engineering (SCEE 2008) in Espoo, Finland, is organized by the Helsinki University of Technology (TKK); Faculty of Electronics, Communications and Automation (ECA); Department of Radio Science and Engineering (RAD); Circuit Theory Group. (SCEE 2008 web site: http://www.ct.tkk.fi/scee2008/). The aim of the SCEE 2008 conference is to bring together scientists from academia and industry with the goal of intensive discussions on modeling and numerical simulation of electronic circuits and of electromagnetic fields. The conference is mainly directed towards mathematicians and electrical engineers. The SCEE 2008 conference has the following four main topics: 1. Computational Electromagnetics (CE), 2. Circuit Simulation (CS), 3. Coupled Problems (CP), 4. Mathematical and Computational Methods (CM). The selection of abstracts in this book was carried out by the Program Committee; each abstract was reviewed by two or three reviewers. The authors of all accepted abstracts were invited to submit an extended full paper, which will be reviewed as well. The accepted full papers will later on be published in a separate post-conference book

  7. Department of Accelerator Physics and Technology: Overview

    International Nuclear Information System (INIS)

    Pachan, M.

    1999-01-01

    Full text: As presented at the overview seminar held in December 1998, the activities of the Department were shared among several directions of accelerator applications, as well as research and development work on new accelerator techniques and technologies. In the group of proton and ion accelerators, two main tasks were advanced. The first was a further step in the optimization of the operational parameters of the multicusp ion source prepared for the axial injection system of the C-30 cyclotron. The other was participation in important modifications of the r.f. acceleration system of the heavy-ion accelerator C-200 of Warsaw University. In the broad field of electron accelerators our main attention was directed at medical applications. The most important of these was the design and construction of a full-scale technological model of a high-gradient accelerating structure for the low-energy radiotherapy unit CO-LINE 1000. Microwave measurements and tuning were accomplished, and the technical documentation for construction of the radiation unit was completed. This work was supported by the State Committee for Scientific Research. Preparatory work continued so that the design of two new medical accelerators can be undertaken in 1999. The first is a new-generation radiotherapy unit with a 15 MeV electron beam and two selected energies of X-ray photons. This accelerator should in future replace the existing Neptun 10 MeV units. The work will be executed in the frame of the Project-Ordered commissioned by the State Committee for Scientific Research. The next type of accelerator in preparation is a mobile, self-shielded electron-beam unit for intraoperative irradiation. The specification of parameters was completed and the study of possible solutions advanced. The programme of medical accelerator development is critically dependent on the existence of a metrological and experimental basis. Therefore the building of a former proton linear accelerator was adapted to its new function for electron accelerators

  8. Teaching Scientific Computing: A Model-Centered Approach to Pipeline and Parallel Programming with C

    Directory of Open Access Journals (Sweden)

    Vladimiras Dolgopolovas

    2015-01-01

    The aim of this study is to present an approach to the introduction into pipeline and parallel computing, using a model of the multiphase queueing system. Pipeline computing, including software pipelines, is among the key concepts in modern computing and electronics engineering. The modern computer science and engineering education requires a comprehensive curriculum, so the introduction to pipeline and parallel computing is the essential topic to be included in the curriculum. At the same time, the topic is among the most motivating tasks due to the comprehensive multidisciplinary and technical requirements. To enhance the educational process, the paper proposes a novel model-centered framework and develops the relevant learning objects. It allows implementing an educational platform of constructivist learning process, thus enabling learners’ experimentation with the provided programming models, obtaining learners’ competences of the modern scientific research and computational thinking, and capturing the relevant technical knowledge. It also provides an integral platform that allows a simultaneous and comparative introduction to pipelining and parallel computing. The programming language C for developing programming models and message passing interface (MPI) and OpenMP parallelization tools have been chosen for implementation.

  9. A data management system for engineering and scientific computing

    Science.gov (United States)

    Elliot, L.; Kunii, H. S.; Browne, J. C.

    1978-01-01

    Data elements and relationship definition capabilities for this data management system are explicitly tailored to the needs of engineering and scientific computing. System design was based upon studies of data management problems currently being handled through explicit programming. The system-defined data element types include real scalar numbers, vectors, arrays and special classes of arrays such as sparse arrays and triangular arrays. The data model is hierarchical (tree structured). Multiple views of data are provided at two levels. Subschemas provide multiple structural views of the total data base and multiple mappings for individual record types are supported through the use of a REDEFINES capability. The data definition language and the data manipulation language are designed as extensions to FORTRAN. Examples of the coding of real problems taken from existing practice in the data definition language and the data manipulation language are given.

  10. Modern computer networks and distributed intelligence in accelerator controls

    International Nuclear Information System (INIS)

    Briegel, C.

    1991-01-01

    Appropriate hardware and software network protocols are surveyed for accelerator control environments. Accelerator controls network topologies are discussed with respect to the following criteria: vertical versus horizontal and distributed versus centralized. Decision-making considerations are provided for accelerator network architecture specification. Current trends and implementations at Fermilab are discussed

  11. Interacting with accelerators

    International Nuclear Information System (INIS)

    Dasgupta, S.

    1994-01-01

    Accelerators are research machines which produce energetic particle beams for use as projectiles to effect nuclear reactions. These machines, along with their services and facilities, may occupy very large areas. The man-machine interface of accelerators has evolved with technological changes in the computer industry and may be partitioned into three phases. The present paper traces the evolution of the man-machine interface from the earliest accelerators to the present computerized systems incorporated in modern accelerators. It also discusses the advantages of incorporating expert system technology for assisting operators. (author). 8 ref

  12. An accelerated conjugate gradient algorithm to compute low-lying eigenvalues - a study for the Dirac operator in SU(2) lattice QCD

    International Nuclear Information System (INIS)

    Kalkreuter, T.; Simma, H.

    1995-07-01

    The low-lying eigenvalues of a (sparse) Hermitian matrix can be computed with controlled numerical errors by a conjugate gradient (CG) method. This CG algorithm is accelerated by alternating it with exact diagonalizations in the subspace spanned by the numerically computed eigenvectors. We study this combined algorithm in the case of the Dirac operator with (dynamical) Wilson fermions in four-dimensional SU(2) gauge fields. The algorithm is numerically very stable and can be parallelized in an efficient way. On lattices of sizes 4^4 - 16^4 an acceleration of the pure CG method by a factor of 4 - 8 is found. (orig.)
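
    The following NumPy sketch mimics the structure of the method described above: an iterative, gradient-based improvement of each approximate eigenvector (here a locally optimal steepest-descent step with an exact two-dimensional line search, standing in for a full CG recursion), alternated with an exact diagonalisation in the subspace spanned by the numerically computed eigenvectors. The test matrix, sizes and iteration counts are invented; this is not the lattice Dirac-operator code of the paper.

    import numpy as np

    def lowest_eigenpairs(A, k=3, outer=300, inner=4):
        n = A.shape[0]
        V, _ = np.linalg.qr(np.random.default_rng(1).standard_normal((n, k)))
        for _ in range(outer):
            for j in range(k):                        # improve each vector in turn
                for _ in range(inner):
                    v = V[:, j]
                    r = A @ v - (v @ (A @ v)) * v     # Rayleigh-quotient residual
                    r -= V[:, :j] @ (V[:, :j].T @ r)  # project out the lower vectors
                    if np.linalg.norm(r) < 1e-12:
                        break
                    W, _ = np.linalg.qr(np.column_stack([v, r]))
                    w = np.linalg.eigh(W.T @ A @ W)[1][:, 0]  # best vector in span{v, r}
                    V[:, j] = W @ w
            V, _ = np.linalg.qr(V)                    # re-orthonormalise the block
            evals, Y = np.linalg.eigh(V.T @ A @ V)    # exact subspace diagonalisation
            V = V @ Y
        return evals, V

    # synthetic Hermitian test matrix with known spectrum 1, 2, ..., n
    n = 60
    Q, _ = np.linalg.qr(np.random.default_rng(0).standard_normal((n, n)))
    A = Q @ np.diag(np.arange(1.0, n + 1)) @ Q.T
    evals, _ = lowest_eigenpairs(A)
    print(np.round(evals, 6))    # should be close to [1. 2. 3.]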

  13. Permanent-magnet material applications in particle accelerators

    International Nuclear Information System (INIS)

    Kraus, R.H. Jr.

    1992-01-01

    The modern charged particle accelerator has found application in a wide range of scientific research, industrial, medical, and defense fields. Researchers began to use permanent-magnet materials in particle accelerators soon after the invention of the alternating gradient principle, which showed that magnetic fields could be used to control the transverse envelope of charged particle beams. The history of permanent-magnet use in accelerator physics and technology is outlined, and current design methods and material properties of concern for particle accelerator applications are reviewed

  14. A review of prospects for an accelerator breeder

    International Nuclear Information System (INIS)

    Fraser, J.S.; Hoffman, C.R.; Schriber, S.O.; Garvey, P.M.; Townes, B.M.

    1981-12-01

    The scientific feasibility, engineering practicability and economic prospects for an Accelerator Breeder are reviewed. The scientific feasibility of high power accelerator components rests on a firm basis as a result of technical advances made in recent years, but there is a need to combine all components in a demonstration model working under realistic conditions. The engineering practicability of Accelerator Breeder components should be tested in a staged development culminating in a full-scale demonstration plant. The economic assessment depends on calculations of allowed and estimated capital costs of an Accelerator Breeder for a CANDU system operating on the Th-U fuel cycle. The results indicate that the ratio of estimated to allowed capital cost is approximately 3.5 for a breeder with a 2% enriched uranium metal blanket and for separated U235 valued at 48 $/g

  15. 11th International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing

    CERN Document Server

    Nuyens, Dirk

    2016-01-01

    This book presents the refereed proceedings of the Eleventh International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing that was held at the University of Leuven (Belgium) in April 2014. These biennial conferences are major events for Monte Carlo and quasi-Monte Carlo researchers. The proceedings include articles based on invited lectures as well as carefully selected contributed papers on all theoretical aspects and applications of Monte Carlo and quasi-Monte Carlo methods. Offering information on the latest developments in these very active areas, this book is an excellent reference resource for theoreticians and practitioners interested in solving high-dimensional computational problems, arising, in particular, in finance, statistics and computer graphics.

  16. Theoretical problems in accelerator physics

    International Nuclear Information System (INIS)

    1992-01-01

    This report discusses the following research on accelerators: computational methods; higher order mode suppression in accelerator structures; overmoded waveguide components and application to SLED II and power transport; rf sources; accelerator cavity design for a B factory asymmetric collider; and photonic band gap cavities

  17. Software Innovations Speed Scientific Computing

    Science.gov (United States)

    2012-01-01

    To help reduce the time needed to analyze data from missions like those studying the Sun, Goddard Space Flight Center awarded SBIR funding to Tech-X Corporation of Boulder, Colorado. That work led to commercial technologies that help scientists accelerate their data analysis tasks. Thanks to its NASA work, the company doubled its number of headquarters employees to 70 and generated about $190,000 in revenue from its NASA-derived products.

  18. Using Equation-Free Computation to Accelerate Network-Free Stochastic Simulation of Chemical Kinetics.

    Science.gov (United States)

    Lin, Yen Ting; Chylek, Lily A; Lemons, Nathan W; Hlavacek, William S

    2018-06-21

    The chemical kinetics of many complex systems can be concisely represented by reaction rules, which can be used to generate reaction events via a kinetic Monte Carlo method that has been termed network-free simulation. Here, we demonstrate accelerated network-free simulation through a novel approach to equation-free computation. In this process, variables are introduced that approximately capture system state. Derivatives of these variables are estimated using short bursts of exact stochastic simulation and finite differencing. The variables are then projected forward in time via a numerical integration scheme, after which a new exact stochastic simulation is initialized and the whole process repeats. The projection step increases efficiency by bypassing the firing of numerous individual reaction events. As we show, the projected variables may be defined as populations of building blocks of chemical species. The maximal number of connected molecules included in these building blocks determines the degree of approximation. Equation-free acceleration of network-free simulation is found to be both accurate and efficient.
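
    A minimal illustration of the projection step described above, applied to a simple birth-death process rather than a rule-based network-free model: short bursts of exact stochastic (Gillespie) simulation over a small ensemble estimate the drift of the coarse variable (the mean copy number), which is then projected forward by an explicit Euler step, bypassing the individual reaction events in between. Rate constants, burst and projection lengths, and ensemble size are arbitrary assumptions for demonstration.

    import numpy as np

    rng = np.random.default_rng(2)

    def gillespie_burst(n, t_burst, k1=50.0, k2=1.0):
        # exact SSA for a birth-death process (production at rate k1, degradation at k2*n)
        t = 0.0
        while True:
            rates = np.array([k1, k2 * n])
            total = rates.sum()
            dt = rng.exponential(1.0 / total)
            if t + dt > t_burst:
                return n
            t += dt
            n += 1 if rng.random() < rates[0] / total else -1

    n_mean, t = 0.0, 0.0
    burst, projection, ensemble = 0.05, 0.45, 200
    for _ in range(20):
        finals = [gillespie_burst(int(round(n_mean)), burst) for _ in range(ensemble)]
        new_mean = float(np.mean(finals))
        drift = (new_mean - n_mean) / burst      # finite-difference estimate of d<n>/dt
        n_mean = new_mean + projection * drift   # projection step skips the SSA events
        t += burst + projection
        print("t = %5.2f   mean copy number ~ %6.1f" % (t, n_mean))
    # the mean should settle (with stochastic scatter) near the fixed point k1/k2 = 50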

  19. Open scientific communication urged

    Science.gov (United States)

    Richman, Barbara T.

    In a report released last week the National Academy of Sciences' Panel on Scientific Communication and National Security concluded that the ‘limited and uncertain benefits’ of controls on the dissemination of scientific and technological research are ‘outweighed by the importance of scientific progress, which open communication accelerates, to the overall welfare of the nation.’ The 18-member panel, chaired by Dale R. Corson, president emeritus of Cornell University, was created last spring (Eos, April 20, 1982, p. 241) to examine the delicate balance between open dissemination of scientific and technical information and the U.S. government's desire to protect scientific and technological achievements from being translated into military advantages for our political adversaries.The panel dealt almost exclusively with the relationship between the United States and the Soviet Union but noted that there are ‘clear problems in scientific communication and national security involving Third World countries.’ Further study of this matter is necessary.

  20. Computer simulation of 2-D and 3-D ion beam extraction and acceleration

    Energy Technology Data Exchange (ETDEWEB)

    Ido, Shunji; Nakajima, Yuji [Saitama Univ., Urawa (Japan). Faculty of Engineering

    1997-03-01

    The two-dimensional code and the three-dimensional code have been developed to study the physical features of ion beams in the extraction and acceleration stages. Using the two-dimensional code, the design of the first electrode (plasma grid) is examined with regard to beam divergence. In the computational studies using the three-dimensional code, the axis-off model of the ion beam is investigated. It is found that the deflection angle of the ion beam is proportional to the gap displacement of the electrodes. (author)

  1. Implementing Molecular Dynamics for Hybrid High Performance Computers - 1. Short Range Forces

    International Nuclear Information System (INIS)

    Brown, W. Michael; Wang, Peng; Plimpton, Steven J.; Tharrington, Arnold N.

    2011-01-01

    The use of accelerators such as general-purpose graphics processing units (GPGPUs) has become popular in scientific computing applications due to their low cost, impressive floating-point capabilities, high memory bandwidth, and low electrical power requirements. Hybrid high performance computers, machines with more than one type of floating-point processor, are now becoming more prevalent due to these advantages. In this work, we discuss several important issues in porting a large molecular dynamics code for use on parallel hybrid machines - (1) choosing a hybrid parallel decomposition that works on central processing units (CPUs) with distributed memory and accelerator cores with shared memory, (2) minimizing the amount of code that must be ported for efficient acceleration, (3) utilizing the available processing power from both many-core CPUs and accelerators, and (4) choosing a programming model for acceleration. We present our solution to each of these issues for short-range force calculation in the molecular dynamics package LAMMPS. We describe algorithms for efficient short range force calculation on hybrid high performance machines. We describe a new approach for dynamic load balancing of work between CPU and accelerator cores. We describe the Geryon library that allows a single code to compile with both CUDA and OpenCL for use on a variety of accelerators. Finally, we present results on a parallel test cluster containing 32 Fermi GPGPUs and 180 CPU cores.

  2. QALMA: A computational toolkit for the analysis of quality protocols for medical linear accelerators in radiation therapy

    Science.gov (United States)

    Rahman, Md Mushfiqur; Lei, Yu; Kalantzis, Georgios

    2018-01-01

    Quality Assurance (QA) for medical linear accelerators (linacs) is one of the primary concerns in external beam radiation therapy. Continued advancements in clinical accelerators and computer control technology make QA procedures more complex and time consuming, so that adequate software, often accompanied by specific phantoms, is required. To ameliorate that matter, we introduce QALMA (Quality Assurance for Linac with MATLAB), a MATLAB toolkit which aims to simplify the quantitative analysis of QA for linacs, including Star-Shot analysis, the Picket Fence test, the Winston-Lutz test, Multileaf Collimator (MLC) log file analysis and verification of the light and radiation field coincidence test.

  3. Radio-frequency quadrupole linear accelerator

    International Nuclear Information System (INIS)

    Wangler, T.P.; Stokes, R.H.

    1980-01-01

    The radio-frequency quadrupole (RFQ) is a new linear accelerator concept in which rf electric fields are used to focus, bunch, and accelerate the beam. Because the RFQ can provide strong focusing at low velocities, it can capture a high-current dc ion beam from a low-voltage source and accelerate it to an energy of 1 MeV/nucleon within a distance of a few meters. A recent experimental test at the Los Alamos Scientific Laboratory (LASL) has confirmed the expected performance of this structure and has stimulated interest in a wide variety of applications. The general properties of the RFQ are reviewed and examples of applications of this new accelerator are presented

  4. International conference on sub-critical accelerator driven systems. Proceedings

    International Nuclear Information System (INIS)

    Litovkina, L.P.; Titarenko, Yu.E.

    1999-01-01

    The International Meeting on Sub-Critical Accelerator Driven Systems was organized by the State Scientific Center - Institute for Theoretical and Experimental Physics with participation of Atomic Ministry of RF. The Meeting objective was to analyze the recent achievements and tendencies of the accelerator-driven systems development. The Meeting program covers a broad range of problems including the accelerator-driven systems (ADS) conceptual design; analyzing the ADS role in nuclear fuel cycle; accuracy of modeling the main parameters of ADS; conceptual design of high-current accelerators. Moreover, the results of recent experimental and theoretical studies on nuclear data accumulation to support the ADS technologies are presented. About 70 scientists from the main scientific centers of Russia, as well as scientists from USA, France, Belgium, India, and Yugoslavia, attended the meeting and presented 44 works [ru

  5. COMPUTATIONAL SCIENCE CENTER

    International Nuclear Information System (INIS)

    DAVENPORT, J.

    2006-01-01

    Computational Science is an integral component of Brookhaven's multi-science mission, and is a reflection of the increased role of computation across all of science. Brookhaven currently has major efforts in data storage and analysis for the Relativistic Heavy Ion Collider (RHIC) and the ATLAS detector at CERN, and in quantum chromodynamics. The Laboratory is host to the QCDOC machines (quantum chromodynamics on a chip), 10 teraflop/s computers which boast 12,288 processors each. There are two here, one for the Riken/BNL Research Center and the other supported by DOE for the US Lattice Gauge Community and other scientific users. A 100 teraflop/s supercomputer will be installed at Brookhaven in the coming year, managed jointly by Brookhaven and Stony Brook, and funded by a grant from New York State. This machine will be used for computational science across Brookhaven's entire research program, and also by researchers at Stony Brook and across New York State. With Stony Brook, Brookhaven has formed the New York Center for Computational Science (NYCCS) as a focal point for interdisciplinary computational science, which is closely linked to Brookhaven's Computational Science Center (CSC). The CSC has established a strong program in computational science, with an emphasis on nanoscale electronic structure and molecular dynamics, accelerator design, computational fluid dynamics, medical imaging, parallel computing and numerical algorithms. We have been an active participant in DOE's SciDAC program (Scientific Discovery through Advanced Computing). We are also planning a major expansion in computational biology in keeping with Laboratory initiatives. Additional laboratory initiatives with a dependence on a high level of computation include the development of hydrodynamics models for the interpretation of RHIC data, computational models for the atmospheric transport of aerosols, and models for combustion and for energy utilization. The CSC was formed to bring together

  6. FMIT accelerator

    International Nuclear Information System (INIS)

    Armstrong, D.D.

    1983-01-01

    A 35-MeV 100-mA cw linear accelerator is being designed by Los Alamos for use in the Fusion Materials Irradiation Test (FMIT) Facility. Essential to this program is the design, construction, and evaluation of performance of the accelerator's injector, low-energy beam transport, and radio-frequency quadrupole sections before they are shipped to the facility site. The installation and testing of some of these sections have begun as well as the testing of the rf, noninterceptive beam diagnostics, computer control, dc power, and vacuum systems. An overview of the accelerator systems and the performance to date is given

  7. 20 years PSI accelerator. The speeches

    International Nuclear Information System (INIS)

    1994-01-01

    This publication contains the texts of four papers presented on the occasion of the 20-year symposium of the PSI accelerator. The papers dealt with the following topics: scientific research and its dual interaction with industry and with the general public, the history of the PSI accelerator, μ-n-γ investigations on high temperature superconductors, and therapy with charged particles. figs., tabs., refs

  8. A Fast GPU-accelerated Mixed-precision Strategy for Fully NonlinearWater Wave Computations

    DEFF Research Database (Denmark)

    Glimberg, Stefan Lemvig; Engsig-Karup, Allan Peter; Madsen, Morten G.

    2011-01-01

    We present performance results of a mixed-precision strategy developed to improve a recently developed massively parallel GPU-accelerated tool for fast and scalable simulation of unsteady fully nonlinear free surface water waves over uneven depths (Engsig-Karup et al. 2011). The underlying wave......-preconditioned defect correction method. The strategy improves performance by exploiting architectural features of modern GPUs for mixed-precision computations and is tested in a recently developed generic library for fast prototyping of PDE solvers. The new wave tool is applicable to solve and analyze...
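
    The mixed-precision idea referred to above can be sketched in a few lines: the expensive inner solve is carried out in single precision, while residuals and corrections are accumulated in double precision, so the final answer reaches double-precision accuracy. The dense test matrix and the float32 inverse used as the inner "solver" below stand in for the GPU-resident iterative solver of the actual wave tool; they are assumptions for illustration only.

    import numpy as np

    def mixed_precision_defect_correction(A, b, iters=8):
        A32_inv = np.linalg.inv(A.astype(np.float32))        # cheap low-precision solver
        x = np.zeros_like(b)
        for k in range(iters):
            r = b - A @ x                                    # residual in float64
            e = (A32_inv @ r.astype(np.float32)).astype(np.float64)
            x = x + e                                        # correction accumulated in float64
            print("iteration %d, residual norm %.2e" % (k + 1, np.linalg.norm(r)))
        return x

    rng = np.random.default_rng(0)
    n = 200
    A = n * np.eye(n) + rng.standard_normal((n, n))          # well-conditioned test matrix
    b = rng.standard_normal(n)
    x = mixed_precision_defect_correction(A, b)
    print("error vs. double-precision solve: %.2e" % np.linalg.norm(x - np.linalg.solve(A, b)))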

  9. Semiempirical Quantum Chemical Calculations Accelerated on a Hybrid Multicore CPU-GPU Computing Platform.

    Science.gov (United States)

    Wu, Xin; Koslowski, Axel; Thiel, Walter

    2012-07-10

    In this work, we demonstrate that semiempirical quantum chemical calculations can be accelerated significantly by leveraging the graphics processing unit (GPU) as a coprocessor on a hybrid multicore CPU-GPU computing platform. Semiempirical calculations using the MNDO, AM1, PM3, OM1, OM2, and OM3 model Hamiltonians were systematically profiled for three types of test systems (fullerenes, water clusters, and solvated crambin) to identify the most time-consuming sections of the code. The corresponding routines were ported to the GPU and optimized employing both existing library functions and a GPU kernel that carries out a sequence of noniterative Jacobi transformations during pseudodiagonalization. The overall computation times for single-point energy calculations and geometry optimizations of large molecules were reduced by one order of magnitude for all methods, as compared to runs on a single CPU core.

  10. Department of Accelerator Physics and Technology: Overview

    International Nuclear Information System (INIS)

    Plawski, E.

    2004-01-01

    Full text: Due to the drastic reduction (in previous years) of the scientific and technical staff of the Department, our basic work in 2003 was limited to the following subjects: - the development of a radiographic 4 MeV electron accelerator, - computational verification of the basic parameters of a simplified version of the '6/15 MeV' medical accelerator, - continuation of the study of photon and electron spectra of narrow photon beams with the use of the BEAMnrc Monte Carlo codes, - a study of accelerating and deflecting travelling-wave RF structures based on experience already gained. The small 4-6 MeV electron linac was constructed in the Department as a tool for radiographic services which may be offered by our Institute. In 2003, the most important sub-units of the accelerator were constructed and completed. An accelerated electron beam intensity of up to 80 mA was already obtained, and for the following year the energy spectrum measurement, energy and intensity optimisation for e⁻/X-ray conversion and also the first exposures are planned. Because the Department was responsible, in the realisation of the 6/15 MeV Accelerator Project, for calculations of beam guiding and acceleration (accelerating section with triode electron gun, beam focusing, achromatic deviation), some verifying computations were done last year. These concerned mainly the influence of variations of the gun injection energy and of RF frequency shifts on beam dynamics. The computational codes written in the Department are still used and continuously developed for this and similar purposes. The triode gun, originally conceived as a part of the 6/15 MeV medical accelerator, is on long-term testing, showing very good performance; a new pulse modulator for that sub-unit was designed. The Monte Carlo calculations of narrow photon beams are continued. Intensity modulated radiation therapy (IMRT) is expected to play a dominant role in the years to come. Our principal researcher, hereafter receiving a PhD degree, collaborates on IMRT

  11. Accelerator shielding benchmark problems

    International Nuclear Information System (INIS)

    Hirayama, H.; Ban, S.; Nakamura, T.

    1993-01-01

    Accelerator shielding benchmark problems prepared by Working Group of Accelerator Shielding in the Research Committee on Radiation Behavior in the Atomic Energy Society of Japan were compiled by Radiation Safety Control Center of National Laboratory for High Energy Physics. Twenty-five accelerator shielding benchmark problems are presented for evaluating the calculational algorithm, the accuracy of computer codes and the nuclear data used in codes. (author)

  12. 2006 XSD Scientific Software Workshop report.

    Energy Technology Data Exchange (ETDEWEB)

    Evans, K., Jr.; De Carlo, F.; Jemian, P.; Lang, J.; Lienert, U.; Maclean, J.; Newville, M.; Tieman, B.; Toby, B.; van Veenendaal, B.; Univ. of Chicago

    2006-01-22

    In May of 2006, a committee was formed to assess the fundamental needs and opportunities in scientific software for x-ray data reduction, analysis, modeling, and simulation. This committee held a series of discussions throughout the summer, conducted a poll of the members of the x-ray community, and held a workshop. This report details the findings and recommendations of the committee. Each experiment performed at the APS requires three crucial ingredients: the powerful x-ray source, an optimized instrument to perform measurements, and computer software to acquire, visualize, and analyze the experimental observations. While the APS has invested significant resources in the accelerator, investment in other areas such as scientific software for data analysis and visualization has lagged behind. This has led to the adoption of a wide variety of software with variable levels of usability. In order to maximize the scientific output of the APS, it is essential to support the broad development of real-time analysis and data visualization software. As scientists attack problems of increasing sophistication and deal with larger and more complex data sets, software is playing an ever more important role. Furthermore, our need for excellent and flexible scientific software can only be expected to increase, as the upgrade of the APS facility and the implementation of advanced detectors create a host of new measurement capabilities. New software analysis tools must be developed to take full advantage of these capabilities. It is critical that the APS take the lead in software development and the implementation of theory to software to ensure the continued success of this facility. The topics described in this report are relevant to the APS today and critical for the APS upgrade plan. Implementing these recommendations will have a positive impact on the scientific productivity of the APS today and will be even more critical in the future.

  13. The Goal Specificity Effect on Strategy Use and Instructional Efficiency during Computer-Based Scientific Discovery Learning

    Science.gov (United States)

    Kunsting, Josef; Wirth, Joachim; Paas, Fred

    2011-01-01

    Using a computer-based scientific discovery learning environment on buoyancy in fluids we investigated the "effects of goal specificity" (nonspecific goals vs. specific goals) for two goal types (problem solving goals vs. learning goals) on "strategy use" and "instructional efficiency". Our empirical findings close an important research gap,…

  14. Application of local area networks to accelerator control systems at the Stanford Linear Accelerator

    International Nuclear Information System (INIS)

    Fox, J.D.; Linstadt, E.; Melen, R.

    1983-03-01

    The history and current status of SLAC's SDLC networks for distributed accelerator control systems are discussed. These local area networks have been used for instrumentation and control of the linear accelerator. Network topologies, protocols, physical links, and logical interconnections are discussed for specific applications in distributed data acquisition and control systems, computer networks and accelerator operations

  15. Availability measurement of grid services from the perspective of a scientific computing centre

    International Nuclear Information System (INIS)

    Marten, H; Koenig, T

    2011-01-01

    The Karlsruhe Institute of Technology (KIT) is the merger of Forschungszentrum Karlsruhe and the Technical University Karlsruhe. The Steinbuch Centre for Computing (SCC) was one of the first new organizational units of KIT, combining the former Institute for Scientific Computing of Forschungszentrum Karlsruhe and the Computing Centre of the University. IT service management according to the worldwide de-facto-standard 'IT Infrastructure Library (ITIL)' was chosen by SCC as a strategic element to support the merging of the two existing computing centres located at a distance of about 10 km. The availability and reliability of IT services directly influence the customer satisfaction as well as the reputation of the service provider, and unscheduled loss of availability due to hardware or software failures may even result in severe consequences like data loss. Fault tolerant and error correcting design features are reducing the risk of IT component failures and help to improve the delivered availability. The ITIL process controlling the respective design is called Availability Management. This paper discusses Availability Management regarding grid services delivered to WLCG and provides a few elementary guidelines for availability measurements and calculations of services consisting of arbitrary numbers of components.
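
    The elementary availability calculations mentioned in the record follow the usual series/parallel algebra; the short sketch below applies it to an invented service composed of a few components with assumed availabilities (the component values and composition are illustrative and not taken from the paper).

    from functools import reduce

    def series(*avail):
        # service needs ALL components: availabilities multiply
        return reduce(lambda acc, a: acc * a, avail, 1.0)

    def parallel(*avail):
        # service needs AT LEAST ONE of the redundant components
        return 1.0 - reduce(lambda acc, a: acc * (1.0 - a), avail, 1.0)

    # example: storage and batch system in series, behind two redundant network uplinks
    service = series(0.998, 0.995, parallel(0.99, 0.99))
    print("service availability:        %.4f" % service)
    print("expected downtime / 30 days: %.1f hours" % ((1.0 - service) * 30 * 24))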

  16. Computer codes and methods for simulating accelerator driven systems

    International Nuclear Information System (INIS)

    Sartori, E.; Byung Chan Na

    2003-01-01

    A large set of computer codes and associated data libraries have been developed by nuclear research and industry over the past half century. A large number of them are in the public domain and can be obtained under agreed conditions from different Information Centres. The areas covered comprise: basic nuclear data and models, reactor spectra and cell calculations, static and dynamic reactor analysis, criticality, radiation shielding, dosimetry and material damage, fuel behaviour, safety and hazard analysis, heat conduction and fluid flow in reactor systems, spent fuel and waste management (handling, transportation, and storage), economics of fuel cycles, impact on the environment of nuclear activities etc. These codes and models have been developed mostly for critical systems used for research or power generation and other technological applications. Many of them have not been designed for accelerator driven systems (ADS), but with competent use, they can be used for studying such systems or can form the basis for adapting existing methods to the specific needs of ADS's. The present paper describes the types of methods, codes and associated data available and their role in the applications. It provides Web addresses for facilitating searches for such tools. Some indications are given of the effects of inappropriate or 'blind' use of existing tools on ADS. Reference is made to available experimental data that can be used for validating the methods used. Finally, some international activities linked to the different computational aspects are described briefly. (author)

  17. 1989 CERN school of computing

    International Nuclear Information System (INIS)

    Verkerk, C.

    1990-01-01

    These Proceedings contain written versions of lectures delivered at the 1989 CERN School of Computing and covering a variety of topics. Vector and parallel computing are the subjects of five papers: experience with vector processors in HEP; scientific computing with transputers; applications of large transputer arrays; neural networks and neurocomputers; and Amdahl's scaling law. Data-acquisition and event reconstruction methods were covered in two series of lectures reproduced here: microprocessor-based data-acquisition systems for HERA experiments, and track and vertex fitting. Two speakers treated applications of expert systems: artificial intelligence and expert systems - applications in data acquisition, and application of diagnostic expert systems in the accelerator domain. Other lecture courses covered past and present computing practice: 30 years of computing at CERN, and information processing at the Aerospace Institute in the Federal Republic of Germany. Other papers cover the contents of lectures on: HEPnet, where we are and where we are going; ciphering algorithms; integrated circuit design for supercomputers; GaAs versus Si, theory and practice; and an introduction to operating systems. (orig.)

  18. Department of Accelerator Physics and Technology: Overview

    International Nuclear Information System (INIS)

    Plawski, E.

    2003-01-01

    Full text: The main activities of the Accelerator Physics and Technology Department were focused on the following subjects: - contribution to the development and construction of the New Therapeutical Electron Accelerator delivering photon beams of 6 and 15 MeV, - study of the photon and electron spectra of narrow photon beams with the use of the BEAM/EGSnrc codes, - design and construction of special RF structures for use in the CLIC Test Facility at CERN, - design and construction of 1:1 copper, room-temperature models of superconducting 1.3 GHz accelerating structures for the TESLA Project at DESY. In spite of a drastic reduction of scientific and technical staff (from 16 to 10 persons) the planned works were successfully completed, though this required extraordinary efforts. In the realisation of the 6/15 MeV Accelerator Project, the Department was responsible throughout the project for calculations of the most important parts (electron gun, accelerating structure, beam focusing, achromatic deviation) and also for the construction and physical modelling of some strategic subassemblies. The scientific and technical achievements of our Department in this work are documented in the Annex to the Final Report on the realisation of KBN Scientific Project No PBZ 009-13 and in the earlier Annual Reports 2000 and 2001. The Monte Carlo calculations of narrow photon beams and their experimental verification using Varian Clinac 2003CD, Siemens Mevatron and CGR MeV Saturn accelerators culminated in the PhD thesis prepared by MSc Anna Wysocka. Her thesis, Collimation and Dosimetry of X-ray Beams for Stereotactic Radiotherapy with Linear Accelerators, was sponsored by KBN Scientific Project Nr T11E 04121. In collaboration with LNF INFN Frascati, electron beam deflectors were designed for the CERN CLIC Test Facility CTF3. These special-type travelling-wave RF structures were built by our Department and are currently operated in the CTF3 experiment. As a result of the collaboration with the TESLA-FEL Project at DESY, the set of RF

  19. Accelerator Disaster Scenarios, the Unabomber, and Scientific Risks

    OpenAIRE

    Kapusta, Joseph I.

    2008-01-01

    The possibility that experiments at high-energy accelerators could create new forms of matter that would ultimately destroy the Earth has been considered several times in the past quarter century. One consequence of the earliest of these disaster scenarios was that the authors of a 1993 article in "Physics Today" who reviewed the experiments that had been carried out at the Bevalac at Lawrence Berkeley Laboratory were placed on the FBI's Unabomber watch list. Later, concerns that experiments ...

  20. Research of Virtual Accelerator Control System

    Institute of Scientific and Technical Information of China (English)

    DongJinmei; YuanYoujin; ZhengJianhua

    2003-01-01

    A Virtual Accelerator is a computer process which simulates the behavior of the beam in an accelerator and responds to the accelerator control program under development in the same way as an actual accelerator. To realize a Virtual Accelerator, the control system should provide the same program interface to the top-layer Application Control Program; this allows the 'Real Accelerator' and the 'Virtual Accelerator' to use the same GUI. The control system should therefore include a layer that hides hardware details, so that the Application Control Program accesses control devices through logical names rather than through coded hardware addresses. Without this layer, it is difficult to develop application programs that can access both 'Virtual' and 'Real' Accelerators using the same program interfaces. For this reason, we create a CSR Runtime Database which allows application programs to access hardware devices and data on a simulation process in a unified way. A device is represented as a collection of records in the CSR Runtime Database. A control program on the host computer can access devices in the system only through the names of record fields, called channels.
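
    A stripped-down, hypothetical sketch of this layering is shown below: application code addresses devices only through logical 'record.field' channel names, while the object behind a record could be backed either by real hardware or by a simulation process. The class and channel names are illustrative and do not reproduce the actual CSR implementation.

      # Hypothetical sketch of a runtime database that hides hardware details
      # behind logical channel names of the form "record.field".
      class Record:
          """One device represented as a collection of named fields."""
          def __init__(self, name, fields):
              self.name = name
              self.fields = dict(fields)

      class RuntimeDatabase:
          """Maps 'record.field' channels to values for real or simulated devices."""
          def __init__(self):
              self._records = {}

          def add(self, record):
              self._records[record.name] = record

          def get(self, channel):
              rec, field = channel.split(".")
              return self._records[rec].fields[field]

          def put(self, channel, value):
              rec, field = channel.split(".")
              self._records[rec].fields[field] = value

      # The same application code works whether the record is backed by a real
      # power supply or by a beam-dynamics simulation ("virtual accelerator").
      db = RuntimeDatabase()
      db.add(Record("QUAD01", {"current_setpoint": 0.0, "current_readback": 0.0}))
      db.put("QUAD01.current_setpoint", 12.5)
      print(db.get("QUAD01.current_setpoint"))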

  1. Computer applications: Automatic control system for high-voltage accelerator

    International Nuclear Information System (INIS)

    Bryukhanov, A.N.; Komissarov, P.Yu.; Lapin, V.V.; Latushkin, S.T.; Fomenko, D.E.; Yudin, L.I.

    1992-01-01

    An automatic control system for a high-voltage electrostatic accelerator with an accelerating potential of up to 500 kV is described. The electronic apparatus on the high-voltage platform is controlled and monitored by means of a fiber-optic data-exchange system. The system is based on CAMAC modules that are controlled by a microprocessor crate controller. Data on accelerator operation are represented and control instructions are issued by means of an alphanumeric terminal. 8 refs., 6 figs

  2. Center for Technology for Advanced Scientific Component Software (TASCS)

    Energy Technology Data Exchange (ETDEWEB)

    Damevski, Kostadin [Virginia State Univ., Petersburg, VA (United States)]

    2009-03-30

    A resounding success of the Scientific Discovery through Advanced Computing (SciDAC) program is that high-performance computational science is now universally recognized as a critical aspect of scientific discovery [71], complementing both theoretical and experimental research. As scientific communities prepare to exploit the unprecedented computing capabilities of emerging leadership-class machines for multi-model simulations at the extreme scale [72], it is more important than ever to address the technical and social challenges of geographically distributed teams that combine expertise in domain science, applied mathematics, and computer science to build robust and flexible codes that can incorporate changes over time. The Center for Technology for Advanced Scientific Component Software (TASCS) tackles these issues by exploiting component-based software development to facilitate collaborative high-performance scientific computing.

  3. Amplify scientific discovery with artificial intelligence

    Energy Technology Data Exchange (ETDEWEB)

    Gil, Yolanda; Greaves, Mark T.; Hendler, James; Hirsch, Hyam

    2014-10-10

    Computing innovations have fundamentally changed many aspects of scientific inquiry. For example, advances in robotics, high-end computing, networking, and databases now underlie much of what we do in science such as gene sequencing, general number crunching, sharing information between scientists, and analyzing large amounts of data. As computing has evolved at a rapid pace, so too has its impact in science, with the most recent computing innovations repeatedly being brought to bear to facilitate new forms of inquiry. Recently, advances in Artificial Intelligence (AI) have deeply penetrated many consumer sectors, including for example Apple’s Siri™ speech recognition system, real-time automated language translation services, and a new generation of self-driving cars and self-navigating drones. However, AI has yet to achieve comparable levels of penetration in scientific inquiry, despite its tremendous potential in aiding computers to help scientists tackle tasks that require scientific reasoning. We contend that advances in AI will transform the practice of science as we are increasingly able to effectively and jointly harness human and machine intelligence in the pursuit of major scientific challenges.

  4. Highly Productive Application Development with ViennaCL for Accelerators

    Science.gov (United States)

    Rupp, K.; Weinbub, J.; Rudolf, F.

    2012-12-01

    for types from the Eigen library [8] and MTL 4 [9] are provided as well, enabling a seamless transition from single-core CPU to GPU and multi-core CPU computations. Case studies from the numerical solution of PDEs are given and isolated performance benchmarks are discussed. Also, pitfalls in scientific computing with GPUs and accelerators are addressed, allowing for a first evaluation of whether these novel devices can be mapped well to certain applications. References: [1] R. Bordawekar et al., Technical Report, IBM, 2010 [2] ViennaCL library. Online: http://viennacl.sourceforge.net/ [3] K. Rupp et al., GPUScA, 2010 [4] MAGMA library. Online: http://icl.cs.utk.edu/magma/ [5] Cusp library. Online: http://code.google.com/p/cusp-library/ [6] uBLAS library. Online: http://www.boost.org/libs/numeric/ublas/ [7] Boost C++ Libraries. Online: http://www.boost.org/ [8] Eigen library. Online: http://eigen.tuxfamily.org/ [9] MTL 4 Library. Online: http://www.mtl4.org/

  5. Centralized digital control of accelerators

    International Nuclear Information System (INIS)

    Melen, R.E.

    1983-09-01

    In contrasting the title of this paper with a second paper to be presented at this conference entitled Distributed Digital Control of Accelerators, a potential reader might be led to believe that this paper will focus on systems whose computing intelligence is centered in one or more computers in a centralized location. Instead, this paper will describe the architectural evolution of SLAC's computer based accelerator control systems with respect to the distribution of their intelligence. However, the use of the word centralized in the title is appropriate because these systems are based on the use of centralized large and computationally powerful processors that are typically supported by networks of smaller distributed processors

  6. An Examination of Resonance, Acceleration, and Particle Dynamics in the Micro-Accelerator Platform

    International Nuclear Information System (INIS)

    McNeur, Josh; Rosenzweig, J. B.; Travish, G.; Zhou, J.; Yoder, R.

    2010-01-01

    An effort to build a micron-scale dielectric-based slab-symmetric accelerator is underway at UCLA. The structure achieves acceleration via a resonant accelerating mode that is excited in an approximately 800 nm wide vacuum gap by a side coupled 800 nm laser. Detailed simulation results on structure fields and particle dynamics, using HFSS and VORPAL, are presented. We examine the quality factors of the accelerating modes for various structures and the excitations of non-accelerating destructive modes. Additionally, the results of an analytic and computational study of focusing, longitudinal dynamics and acceleration are described. Methods for achieving simultaneous transverse and longitudinal focusing are discussed, including modification of structure dimensions and slow variation of the coupling periodicity.

  7. CAS Accelerator Physics held in Erice, Italy

    CERN Multimedia

    CERN Accelerator School

    2013-01-01

    The CERN Accelerator School (CAS) recently organised a specialised course on Superconductivity for Accelerators, held at the Ettore Majorana Foundation and Centre for Scientific Culture in Erice, Italy from 24 April-4 May, 2013.   Photo courtesy of Alessandro Noto, Ettore Majorana Foundation and Centre for Scientific Culture. Following a handful of summary lectures on accelerator physics and the fundamental processes of superconductivity, the course covered a wide range of topics related to superconductivity and highlighted the latest developments in the field. Realistic case studies and topical seminars completed the programme. The school was very successful with 94 participants representing 23 nationalities, coming from countries as far away as Belorussia, Canada, China, India, Japan and the United States (for the first time a young Ethiopian lady, studying in Germany, attended this course). The programme comprised 35 lectures, 3 seminars and 7 hours of case study. The case studies were p...

  8. Accelerator & Fusion Research Division: 1993 Summary of activities

    Energy Technology Data Exchange (ETDEWEB)

    Chew, J.

    1994-04-01

    The Accelerator and Fusion Research Division (AFRD) is not only one of the largest scientific divisions at LBL, but also one of the most diverse. Major efforts include: (1) investigations in both inertial and magnetic fusion energy; (2) operation of the Advanced Light Source, a state-of-the-art synchrotron radiation facility; (3) exploratory investigations of novel radiation sources and colliders; (4) research and development in superconducting magnets for accelerators and other scientific and industrial applications; and (5) ion beam technology development for nuclear physics and for industrial and biomedical applications. Each of these topics is discussed in detail in this book.

  9. The OSG Open Facility: an on-ramp for opportunistic scientific computing

    Science.gov (United States)

    Jayatilaka, B.; Levshina, T.; Sehgal, C.; Gardner, R.; Rynge, M.; Würthwein, F.

    2017-10-01

    The Open Science Grid (OSG) is a large, robust computing grid that started primarily as a collection of sites associated with large HEP experiments such as ATLAS, CDF, CMS, and DZero, but has evolved in recent years to a much larger user and resource platform. In addition to meeting the US LHC community’s computational needs, the OSG continues to be one of the largest providers of distributed high-throughput computing (DHTC) to researchers from a wide variety of disciplines via the OSG Open Facility. The Open Facility consists of OSG resources that are available opportunistically to users other than resource owners and their collaborators. In the past two years, the Open Facility has doubled its annual throughput to over 200 million wall hours. More than half of these resources are used by over 100 individual researchers from over 60 institutions in fields such as biology, medicine, math, economics, and many others. Over 10% of these individual users utilized in excess of 1 million computational hours each in the past year. The largest source of these cycles is temporary unused capacity at institutions affiliated with US LHC computational sites. An increasing fraction, however, comes from university HPC clusters and large national infrastructure supercomputers offering unused capacity. Such expansions have allowed the OSG to provide ample computational resources to both individual researchers and small groups as well as sizable international science collaborations such as LIGO, AMS, IceCube, and sPHENIX. Opening up access to the Fermilab FabrIc for Frontier Experiments (FIFE) project has also allowed experiments such as mu2e and NOvA to make substantial use of Open Facility resources, the former with over 40 million wall hours in a year. We present how this expansion was accomplished as well as future plans for keeping the OSG Open Facility at the forefront of enabling scientific research by way of DHTC.

  10. The OSG Open Facility: An On-Ramp for Opportunistic Scientific Computing

    Energy Technology Data Exchange (ETDEWEB)

    Jayatilaka, B. [Fermilab]; Levshina, T. [Fermilab]; Sehgal, C. [Fermilab]; Gardner, R. [Chicago U.]; Rynge, M. [USC - ISI, Marina del Rey]; Würthwein, F. [UC, San Diego]

    2017-11-22

    The Open Science Grid (OSG) is a large, robust computing grid that started primarily as a collection of sites associated with large HEP experiments such as ATLAS, CDF, CMS, and DZero, but has evolved in recent years to a much larger user and resource platform. In addition to meeting the US LHC community’s computational needs, the OSG continues to be one of the largest providers of distributed high-throughput computing (DHTC) to researchers from a wide variety of disciplines via the OSG Open Facility. The Open Facility consists of OSG resources that are available opportunistically to users other than resource owners and their collaborators. In the past two years, the Open Facility has doubled its annual throughput to over 200 million wall hours. More than half of these resources are used by over 100 individual researchers from over 60 institutions in fields such as biology, medicine, math, economics, and many others. Over 10% of these individual users utilized in excess of 1 million computational hours each in the past year. The largest source of these cycles is temporary unused capacity at institutions affiliated with US LHC computational sites. An increasing fraction, however, comes from university HPC clusters and large national infrastructure supercomputers offering unused capacity. Such expansions have allowed the OSG to provide ample computational resources to both individual researchers and small groups as well as sizable international science collaborations such as LIGO, AMS, IceCube, and sPHENIX. Opening up access to the Fermilab FabrIc for Frontier Experiments (FIFE) project has also allowed experiments such as mu2e and NOvA to make substantial use of Open Facility resources, the former with over 40 million wall hours in a year. We present how this expansion was accomplished as well as future plans for keeping the OSG Open Facility at the forefront of enabling scientific research by way of DHTC.

  11. Advanced quadrature sets and acceleration and preconditioning techniques for the discrete ordinates method in parallel computing environments

    Science.gov (United States)

    Longoni, Gianluca

    In the nuclear science and engineering field, radiation transport calculations play a key role in the design and optimization of nuclear devices. The linear Boltzmann equation describes the angular, energy and spatial variations of the particle or radiation distribution. The discrete ordinates method (SN) is the most widely used technique for solving the linear Boltzmann equation. However, for realistic problems, the memory and computing-time requirements demand the use of supercomputers. This research is devoted to the development of new formulations for the SN method, especially for highly angular dependent problems, in parallel environments. The present research work addresses two main issues affecting the accuracy and performance of SN transport theory methods: quadrature sets and acceleration techniques. New advanced quadrature techniques which allow for large numbers of angles with a capability for local angular refinement have been developed. These techniques have been integrated into the 3-D SN PENTRAN (Parallel Environment Neutral-particle TRANsport) code and applied to highly angular dependent problems, such as CT-scan devices, which are widely used to obtain detailed 3-D images for industrial/medical applications. In addition, the accurate simulation of core physics and shielding problems with strong heterogeneities and transport effects requires the numerical solution of the transport equation. In general, the convergence rate of the solution methods for the transport equation is reduced for large problems with optically thick regions and scattering ratios approaching unity. To remedy this situation, new acceleration algorithms based on the Even-Parity Simplified SN (EP-SSN) method have been developed. A new stand-alone code system, PENSSn (Parallel Environment Neutral-particle Simplified SN), has been developed based on the EP-SSN method. The code is designed for parallel computing environments with spatial, angular and hybrid (spatial/angular) domain
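
    For context, the discrete ordinates (SN) method replaces the angular integral of the flux by a weighted sum over a finite set of directions; the relation below is the standard textbook definition rather than a result of the dissertation, with the weights normalized here to the full solid angle:

      \int_{4\pi} \psi(\mathbf{r},\hat{\Omega})\, d\Omega \;\approx\; \sum_{n=1}^{N} w_n\, \psi(\mathbf{r},\hat{\Omega}_n),
      \qquad \sum_{n=1}^{N} w_n = 4\pi

    The choice of the directions \hat{\Omega}_n and the weights w_n is precisely the quadrature-set design problem addressed in this work.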

  12. Implementation of Hardware Accelerators on Zynq

    DEFF Research Database (Denmark)

    Toft, Jakob Kenn

    In recent years it has become obvious that the performance of general purpose processors is having trouble meeting the requirements of today's high performance computing applications. This is partly due to the relatively high power consumption, compared to the performance, of general purpose processors, which has made hardware accelerators an essential part of several datacentres and the world's fastest super-computers. In this work, two different hardware accelerators were implemented on a Xilinx Zynq SoC platform mounted on the ZedBoard platform. The two accelerators are based on two different ... of the ARM Cortex-9 processor featured on the Zynq SoC, with regard to execution time, power dissipation and energy consumption. The implementation of the hardware accelerators was successful. Use of the Monte Carlo processor resulted in a significant increase in performance. The Telco hardware accelerator ...

  13. Industrial accelerators and their applications

    CERN Document Server

    Hamm, Marianne E

    2012-01-01

    This unique new book is a comprehensive review of the many current industrial applications of particle accelerators, written by experts in each of these fields. Readers will gain a broad understanding of the principles of these applications, the extent to which they are employed, and the accelerator technology utilized. The book also serves as a thorough introduction to these fields for non-experts and laymen. Due to the increased interest in industrial applications, there is a growing interest among accelerator physicists and many other scientists worldwide in understanding how accelerators are used in various applications. The government agencies that fund scientific research with accelerators are also seeking more information on the many commercial applications that have been or can be developed with the technology developments they are funding. Many industries are also doing more research on how they can improve their products or processes using particle beams.

  14. US EPA - A*Star Partnership - Accelerating the Acceptance of ...

    Science.gov (United States)

    The path for incorporating new alternative methods and technologies into quantitative chemical risk assessment poses a diverse set of scientific challenges. Some of these challenges include development of relevant and predictive test systems and computational models to integrate and extrapolate experimental data, and rapid characterization and acceptance of these systems and models. The series of presentations will highlight a collaborative effort between the U.S. Environmental Protection Agency (EPA) and the Agency for Science, Technology and Research (A*STAR) that is focused on developing and applying experimental and computational models for predicting chemical-induced liver and kidney toxicity, brain angiogenesis, and blood-brain-barrier formation. In addressing some of these challenges, the U.S. EPA and A*STAR collaboration will provide a glimpse of what chemical risk assessments could look like in the 21st century. Presentation on US EPA – A*STAR Partnership at international symposium on Accelerating the acceptance of next-generation sciences and their application to regulatory risk assessment in Singapore.

  15. Optimizing memory-bound SYMV kernel on GPU hardware accelerators

    KAUST Repository

    Abdelfattah, Ahmad

    2013-01-01

    Hardware accelerators are becoming ubiquitous in high performance scientific computing. They are capable of delivering an unprecedented level of concurrent execution contexts. High-level programming language extensions (e.g., CUDA) and profiling tools (e.g., PAPI-CUDA, CUDA Profiler) are paramount to improving productivity, while effectively exploiting the underlying hardware. We present an optimized numerical kernel for computing the symmetric matrix-vector product on nVidia Fermi GPUs. Due to its inherent memory-bound nature, this kernel is very critical in the tridiagonalization of a symmetric dense matrix, which is a preprocessing step to calculate the eigenpairs. Using a novel design to address the irregular memory accesses by hiding latency and increasing bandwidth, our preliminary asymptotic results show 3.5x and 2.5x speedups over the similar CUBLAS 4.0 kernel, and 7-8% and 30% improvements over the Matrix Algebra on GPU and Multicore Architectures (MAGMA) library in single and double precision arithmetic, respectively. © 2013 Springer-Verlag.
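
    Independently of the CUDA kernel described above, the reason SYMV is memory-bound can be seen from how the product is formed from a single stored triangle: each stored element contributes to two output entries, so the arithmetic intensity is low. A small NumPy sketch of this reference computation (not the GPU kernel itself):

      import numpy as np

      def symv_lower(L, x):
          """y = A @ x where A is symmetric and only its lower triangle L is stored.

          Each stored element a_ij (i >= j) contributes to both y_i and y_j,
          which is the data reuse a tuned GPU kernel must exploit.
          """
          strict = np.tril(L, -1)              # strictly lower part
          return L.diagonal() * x + strict @ x + strict.T @ x

      rng = np.random.default_rng(0)
      A = rng.standard_normal((4, 4))
      A = (A + A.T) / 2                        # make A symmetric
      x = rng.standard_normal(4)
      assert np.allclose(symv_lower(np.tril(A), x), A @ x)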

  16. Accelerating 3D Elastic Wave Equations on Knights Landing based Intel Xeon Phi processors

    Science.gov (United States)

    Sourouri, Mohammed; Birger Raknes, Espen

    2017-04-01

    In advanced imaging methods like reverse-time migration (RTM) and full waveform inversion (FWI) the elastic wave equation (EWE) is numerically solved many times to create the seismic image or the elastic parameter model update. Thus, it is essential to optimize the solution time for solving the EWE as this will have a major impact on the total computational cost in running RTM or FWI. From a computational point of view, applications implementing EWEs are associated with two major challenges. The first challenge is the amount of memory-bound computations involved, while the second challenge is the execution of such computations over very large datasets. So far, multi-core processors have not been able to tackle these two challenges, which eventually led to the adoption of accelerators such as Graphics Processing Units (GPUs). Compared to conventional CPUs, GPUs are densely populated with many floating-point units and fast memory, a type of architecture that has proven to map well to many scientific computations. Despite their architectural advantages, full-scale adoption of accelerators has yet to materialize. First, accelerators require a significant programming effort imposed by programming models such as CUDA or OpenCL. Second, accelerators come with a limited amount of memory, which also requires explicit data transfers between the CPU and the accelerator over the slow PCI bus. The second generation of the Xeon Phi processor, based on the Knights Landing (KNL) architecture, promises the computational capabilities of an accelerator but requires the same programming effort as traditional multi-core processors. The high computational performance is realized through many integrated cores (the number of cores, tiles and memory varies with the model) organized in tiles that are connected via a 2D mesh-based interconnect. In contrast to accelerators, KNL is a self-hosted system, meaning explicit data transfers over the PCI bus are no longer required. However, like most

  17. XACC - eXtreme-scale Accelerator Programming Framework

    Energy Technology Data Exchange (ETDEWEB)

    2016-11-18

    Hybrid programming models for beyond-CMOS technologies will prove critical for integrating new computing technologies alongside our existing infrastructure. Unfortunately the software infrastructure required to enable this is lacking or not available. XACC is a programming framework for extreme-scale, post-exascale accelerator architectures that integrates alongside existing conventional applications. It is a pluggable framework for programming languages developed for next-gen computing hardware architectures like quantum and neuromorphic computing. It lets computational scientists efficiently off-load classically intractable work to attached accelerators through user-friendly Kernel definitions. XACC makes post-exascale hybrid programming approachable for domain computational scientists.

  18. Accelerating Approximate Bayesian Computation with Quantile Regression: application to cosmological redshift distributions

    Science.gov (United States)

    Kacprzak, T.; Herbel, J.; Amara, A.; Réfrégier, A.

    2018-02-01

    Approximate Bayesian Computation (ABC) is a method to obtain a posterior distribution without a likelihood function, using simulations and a set of distance metrics. For that reason, it has recently been gaining popularity as an analysis tool in cosmology and astrophysics. Its drawback, however, is a slow convergence rate. We propose a novel method, which we call qABC, to accelerate ABC with Quantile Regression. In this method, we create a model of quantiles of the distance measure as a function of input parameters. This model is trained on a small number of simulations and estimates which regions of the prior space are likely to be accepted into the posterior. Other regions are then immediately rejected. This procedure is then repeated as more simulations become available. We apply it to the practical problem of estimating the redshift distribution of cosmological samples, using forward modelling developed in previous work. The qABC method converges to nearly the same posterior as the basic ABC. It uses, however, only 20% of the number of simulations compared to basic ABC, achieving a fivefold gain in execution time for our problem. For other problems the acceleration rate may vary; it depends on how close the prior is to the final posterior. We discuss possible improvements and extensions to this method.
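
    The qABC idea, training a quantile-regression model of the distance and rejecting unpromising prior regions before simulating them, can be illustrated roughly as follows. The forward model, distance and thresholds here are hypothetical placeholders, not the cosmological redshift-distribution model used in the paper.

      import numpy as np
      from sklearn.ensemble import GradientBoostingRegressor

      def simulate(theta, rng):
          """Hypothetical forward model: noisy summary statistic of theta."""
          return theta ** 2 + 0.1 * rng.standard_normal()

      rng = np.random.default_rng(1)
      obs = 0.25                                # "observed" summary statistic
      prior = rng.uniform(-2, 2, size=2000)     # draws from the prior

      # Stage 1: a small pilot set of simulations trains the quantile model.
      pilot = prior[:200]
      d_pilot = np.abs([simulate(t, rng) - obs for t in pilot])
      qmodel = GradientBoostingRegressor(loss="quantile", alpha=0.1)
      qmodel.fit(pilot.reshape(-1, 1), d_pilot)

      # Stage 2: reject prior regions whose predicted low distance quantile
      # already exceeds the ABC threshold; simulate only the remaining draws.
      eps = np.quantile(d_pilot, 0.1)
      keep = prior[qmodel.predict(prior.reshape(-1, 1)) <= eps]
      d_keep = np.abs([simulate(t, rng) - obs for t in keep])
      posterior = keep[d_keep <= eps]
      print(f"prior draws simulated: {len(pilot) + len(keep)} of {len(prior)}")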

  19. COMPUTATIONAL SCIENCE CENTER

    Energy Technology Data Exchange (ETDEWEB)

    DAVENPORT, J.

    2006-11-01

    Computational Science is an integral component of Brookhaven's multi-science mission, and is a reflection of the increased role of computation across all of science. Brookhaven currently has major efforts in data storage and analysis for the Relativistic Heavy Ion Collider (RHIC) and the ATLAS detector at CERN, and in quantum chromodynamics. The Laboratory is host for the QCDOC machines (quantum chromodynamics on a chip), 10 teraflop/s computers which boast 12,288 processors each. There are two here, one for the Riken/BNL Research Center and the other supported by DOE for the US Lattice Gauge Community and other scientific users. A 100 teraflop/s supercomputer will be installed at Brookhaven in the coming year, managed jointly by Brookhaven and Stony Brook, and funded by a grant from New York State. This machine will be used for computational science across Brookhaven's entire research program, and also by researchers at Stony Brook and across New York State. With Stony Brook, Brookhaven has formed the New York Center for Computational Science (NYCCS) as a focal point for interdisciplinary computational science, which is closely linked to Brookhaven's Computational Science Center (CSC). The CSC has established a strong program in computational science, with an emphasis on nanoscale electronic structure and molecular dynamics, accelerator design, computational fluid dynamics, medical imaging, parallel computing and numerical algorithms. We have been an active participant in DOE's SciDAC program (Scientific Discovery through Advanced Computing). We are also planning a major expansion in computational biology in keeping with Laboratory initiatives. Additional laboratory initiatives with a dependence on a high level of computation include the development of hydrodynamics models for the interpretation of RHIC data, computational models for the atmospheric transport of aerosols, and models for combustion and for energy utilization. The CSC was formed to

  20. Electromagnetic design of a post-accelerator of protons for ocular neoplasm therapy

    International Nuclear Information System (INIS)

    Rabelo, Luísa de Araújo

    2016-01-01

    Proton therapy is an effective technique for the treatment and control of cancer, but it is not available in most countries. The low number of specialized centers for this type of treatment is due to the high cost of implementing and maintaining the accelerators. This study presents a model for the electromagnetic (EM) acceleration of protons to energies sufficient for the treatment of ocular tumors. It explores the scientific feasibility of a compact technology that uses the cyclotrons employed to produce radioisotopes (present in various countries) as accelerator guns, via an analytical assessment of the physical parameters of the beam and a simulation of the electromagnetic structures, the acceleration, and the motion of the proton beam using CST STUDIO® 3D 2015 (Computer Simulation Technology) software. In addition, the geometry required to provide synchronization between the acceleration and the beam path was analyzed using the equations of motion of the protons. The simulations show a final model that is compact and simplified compared with the isochronous cyclotron and the synchrotron (both used for proton therapy). The synchronism requirements of a circular accelerator are fulfilled in this model, so that in all orbits the beam has the same transit time. The extraction energy of the presented model is sufficient for the treatment of ocular tumors. This is an alternative method that could improve the quality of life for patients with ocular tumors in developing countries. Future studies will be conducted to complete the technical design presentation and to evaluate the accelerated beam's interaction with neoplastic tissues. (author)
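
    The synchronism requirement invoked above is the standard isochronism condition for a circular accelerator (a textbook relation, not a result of the thesis): the revolution period must be the same on every orbit, which for relativistic protons forces the azimuthally averaged field to grow with the relativistic factor,

      T = \frac{2\pi\, \gamma(r)\, m_0}{q\, \bar{B}(r)} = \text{const}
      \quad\Longrightarrow\quad
      \bar{B}(r) = \gamma(r)\, B_0 ,

    where m_0 and q are the proton rest mass and charge and B_0 is the central field.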

  1. Development of a common data model for scientific simulations

    Energy Technology Data Exchange (ETDEWEB)

    Ambrosiano, J. [Los Alamos National Lab., NM (United States)]; Butler, D.M. [Limit Point Systems, Inc. (United States)]; Matarazzo, C.; Miller, M. [Lawrence Livermore National Lab., CA (United States)]; Schoof, L. [Sandia National Lab., Albuquerque, NM (United States)]

    1999-06-01

    The problem of sharing data among scientific simulation models is a difficult and persistent one. Computational scientists employ an enormous variety of discrete approximations in modeling physical processes on computers. Problems occur when models based on different representations are required to exchange data with one another, or with some other software package. Within the DOE's Accelerated Strategic Computing Initiative (ASCI), a cross-disciplinary group called the Data Models and Formats (DMF) group has been working to develop a common data model. The current model comprises several layers of increasing semantic complexity. One of these layers is an abstract model based on set theory and topology called the fiber bundle kernel (FBK). This layer provides the flexibility needed to describe a wide range of mesh-approximated functions as well as other entities. This paper briefly describes the ASCI common data model, its mathematical basis, and ASCI prototype development. These prototypes include an object-oriented data management library developed at Los Alamos called the Common Data Model Library or CDMlib, the Vector Bundle API from the Lawrence Livermore Laboratory, and the DMF API from Sandia National Laboratory.

  2. Good enough practices in scientific computing.

    Science.gov (United States)

    Wilson, Greg; Bryan, Jennifer; Cranston, Karen; Kitzes, Justin; Nederbragt, Lex; Teal, Tracy K

    2017-06-01

    Computers are now essential in all branches of science, but most researchers are never taught the equivalent of basic lab skills for research computing. As a result, data can get lost, analyses can take much longer than necessary, and researchers are limited in how effectively they can work with software and data. Computing workflows need to follow the same practices as lab projects and notebooks, with organized data, documented steps, and the project structured for reproducibility, but researchers new to computing often don't know where to start. This paper presents a set of good computing practices that every researcher can adopt, regardless of their current level of computational skill. These practices, which encompass data management, programming, collaborating with colleagues, organizing projects, tracking work, and writing manuscripts, are drawn from a wide variety of published sources from our daily lives and from our work with volunteer organizations that have delivered workshops to over 11,000 people since 2010.

  3. A LEGO paradigm for virtual accelerator concept

    International Nuclear Information System (INIS)

    Andrianov, S.; Ivanov, A.; Podzyvalov, E.

    2012-01-01

    The paper considers basic features of a Virtual Accelerator concept based on LEGO paradigm. This concept involves three types of components: different mathematical models for accelerator design problems, integrated beam simulation packages (i. e. COSY, MAD, OptiM and others), and a special class of virtual feedback instruments similar to real control systems (EPICS). All of these components should inter-operate for more complete analysis of control systems and increased fault tolerance. The Virtual Accelerator is an information and computing environment which provides a framework for analysis based on these components that can be combined in different ways. Corresponding distributed computing services establish interaction between mathematical models and low level control system. The general idea of the software implementation is based on the Service-Oriented Architecture (SOA) that allows using cloud computing technology and enables remote access to the information and computing resources. The Virtual Accelerator allows a designer to combine powerful instruments for modeling beam dynamics in a friendly way including both self-developed and well-known packages. In the scope of this concept the following is also proposed: the control system identification, analysis and result verification, visualization as well as virtual feedback for beam line operation. The architecture of the Virtual Accelerator system itself and results of beam dynamics studies are presented. (authors)

  4. CEBAF [Continuous Electron Beam Accelerator Facility] scientific program

    International Nuclear Information System (INIS)

    Gross, F.

    1986-01-01

    The principal scientific mission of the Continuous Electron Beam Accelerator Facility (CEBAF) is to study collective phenomena in cold (or normal) nuclear matter in order to understand the structure and behavior of macroscopic systems constructed from nuclei. This document discusses in broad popular terms those issues which the CEBAF experimental and theoretical program is designed to address. Specific experimental programs currently planned for CEBAF are also reviewed. 35 refs., 19 figs

  5. PS3 CELL Development for Scientific Computation and Research

    Science.gov (United States)

    Christiansen, M.; Sevre, E.; Wang, S. M.; Yuen, D. A.; Liu, S.; Lyness, M. D.; Broten, M.

    2007-12-01

    The Cell processor is one of the most powerful processors on the market, and researchers in the earth sciences may find its parallel architecture to be very useful. A Cell processor, with 7 cores, can easily be obtained for experimentation by purchasing a PlayStation 3 (PS3) and installing Linux and the IBM SDK. Each core of the PS3 is capable of 25 GFLOPS, giving a potential limit of 150 GFLOPS when using all 6 SPUs (synergistic processing units) with vectorized algorithms. We have used the Cell's computational power to create a program which takes simulated tsunami datasets, parses them, and returns a colorized height field image using ray casting techniques. As expected, the time required to create an image is inversely proportional to the number of SPUs used. We believe that this trend will continue when multiple PS3s are chained using OpenMP functionality and are in the process of researching this. By using the Cell to visualize tsunami data, we have found that its greatest feature is its power. This meshes well with the needs of the scientific community, where the limiting factor is time. Any algorithm, such as the heat equation, that can be subdivided into multiple parts can take advantage of the PS3 Cell's ability to split the computations across the 6 SPUs, reducing the required run time to one sixth. Further vectorization of the code can allow for 4 simultaneous floating point operations by using the SIMD (single instruction multiple data) capabilities of the SPU, increasing efficiency 24 times.
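
    The kind of data-parallel split described above can be sketched generically; the toy below uses Python/NumPy on a 1-D heat equation rather than actual SPU code, and simply shows that the interior update decomposes into independent chunks whose results agree with the unsplit update.

      import numpy as np

      def heat_step(u, alpha, dx, dt):
          """One explicit finite-difference step of the 1-D heat equation."""
          unew = u.copy()
          unew[1:-1] = u[1:-1] + alpha * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])
          return unew

      def heat_step_chunked(u, alpha, dx, dt, nchunks=6):
          """Same update, computed chunk by chunk as it could be split across SPUs."""
          edges = np.linspace(1, len(u) - 1, nchunks + 1).astype(int)
          unew = u.copy()
          for lo, hi in zip(edges[:-1], edges[1:]):
              unew[lo:hi] = u[lo:hi] + alpha * dt / dx**2 * (
                  u[lo + 1:hi + 1] - 2 * u[lo:hi] + u[lo - 1:hi - 1])
          return unew

      u0 = np.sin(np.linspace(0, np.pi, 600))
      assert np.allclose(heat_step(u0, 1.0, 1e-2, 2e-5),
                         heat_step_chunked(u0, 1.0, 1e-2, 2e-5))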

  6. Using Cloud-Computing Applications to Support Collaborative Scientific Inquiry: Examining Pre-Service Teachers' Perceived Barriers to Integration

    Science.gov (United States)

    Donna, Joel D.; Miller, Brant G.

    2013-01-01

    Technology plays a crucial role in facilitating collaboration within the scientific community. Cloud-computing applications, such as Google Drive, can be used to model such collaboration and support inquiry within the secondary science classroom. Little is known about pre-service teachers' beliefs related to the envisioned use of collaborative,…

  7. Radiological protection at particle accelerators: An overview

    International Nuclear Information System (INIS)

    Thomas, R.H.

    1991-01-01

    Radiological protection began with particle accelerators. Many of the concerns in the health physics profession today were discovered at accelerator laboratories. Since the mid-1940s, our understanding has progressed through seven stages: observation of high radiation levels; shielding; development of dosimetric techniques; studies of induced activity and environmental impact; legislative and regulatory concerns; and disposal. The technical and scientific aspects of accelerator radiation safety are well in hand. In the US, there is an urgent need to move away from a ''best available technology'' philosophy to risk-based health protection standards. The newer accelerators will present interesting radiological protection issues, including copious muon production and high LET (neutron) environments

  8. 2001 Snowmass Accelerator R and D Report

    Energy Technology Data Exchange (ETDEWEB)

    Chao, Alex W.

    2002-08-30

    The purpose of this report is to provide a perspective on future accelerator projects, and to identify the R&D activities necessary to prepare for these projects. The report summarizes the conclusions of accelerator studies made during the 2001 Snowmass Summer Study on the Future of Particle Physics. In doing so, it serves as a summary of the opinions on accelerator R&D expressed by the scientific community as it looks towards the next few decades. The main technical content is provided by the Executive Summaries of each of the fifteen accelerator Working Groups. These Working Group Executive Summaries form an integral part of this report.

  9. 2001 Snowmass Accelerator R and D Report

    International Nuclear Information System (INIS)

    Chao, Alex W.

    2002-01-01

    The purpose of this report is to provide a perspective on future accelerator projects, and to identify the R and D activities necessary to prepare for these projects. The report summarizes the conclusions of accelerator studies made during the 2001 Snowmass Summer Study on the Future of Particle Physics. In doing so, it serves as a summary of the opinions on accelerator R and D expressed by the scientific community as it looks towards the next few decades. The main technical content is provided by the Executive Summaries of each of the fifteen accelerator Working Groups. These Working Group Executive Summaries form an integral part of this report

  10. Computer Based Dose Control System on Linear Accelerator

    International Nuclear Information System (INIS)

    Taxwim; Djoko-SP; Widi-Setiawan; Agus-Budi Wiyatna

    2000-01-01

    Accelerator technology has been used for radiotherapy. Dokter Karyadi Hospital in Semarang uses an electron or X-ray linear accelerator (Linac) for cancer therapy. One of the control parameters of a linear accelerator is the dose rate, i.e. the particle current or photon rate delivered to the target. The control of the dose rate in the linac has been done by adjusting the repetition rate of the anode pulse train of the electron source. Presently the control is still proportional control. To enhance the quality of the control result (minimal steady-state error, speed and stability), the dose control system has been designed using the PID (Proportional Integral Differential) control algorithm and the derivation of the transfer function of the controlled object. Implementation of the PID control system is done by giving as input the dose error (the difference between the output dose and the dose-rate set point). The output of the control system is used to correct the repetition-rate set point of the electron source anode pulse train. (author)
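
    A generic discrete PID controller of the kind described, taking the dose error as input and producing a corrected repetition-rate set point as output, can be sketched as follows. The gains and the first-order plant model are purely illustrative, not the values identified for the hospital's Linac.

      class PID:
          """Discrete PID controller: u = Kp*e + Ki*integral(e) + Kd*de/dt."""
          def __init__(self, kp, ki, kd, dt):
              self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
              self.integral = 0.0
              self.prev_error = 0.0

          def step(self, error):
              self.integral += error * self.dt
              derivative = (error - self.prev_error) / self.dt
              self.prev_error = error
              return self.kp * error + self.ki * self.integral + self.kd * derivative

      # Toy closed loop: the delivered dose rate follows the repetition-rate
      # set point with a first-order lag (a stand-in for the real transfer function).
      pid = PID(kp=0.8, ki=2.0, kd=0.02, dt=0.1)
      setpoint, dose_rate = 100.0, 0.0
      for _ in range(200):
          rep_rate = pid.step(setpoint - dose_rate)   # corrected set point
          dose_rate += 0.3 * (rep_rate - dose_rate)   # plant response
      print(round(dose_rate, 1))                      # settles near the set point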

  11. Shielding Aspects of Accelerators, Targets and Irradiation Facilities - SATIF-11 Workshop Proceedings Report

    International Nuclear Information System (INIS)

    2013-01-01

    Particle accelerators have evolved over the last decades from simple devices to powerful machines. In recent years, new technological and research applications have helped to define requirements while the number of accelerator facilities in operation, being commissioned, designed or planned has grown significantly. Their parameters, which include the beam energy, currents and intensities, and target composition, can vary widely, giving rise to new radiation shielding issues and challenges. Particle accelerators must be operated in safe ways to protect operators, the public and the environment. As the design and use of these facilities evolve, so must the analytical methods used in the safety analyses. These workshop proceedings review the state of the art in radiation shielding of accelerator facilities and irradiation targets. They also evaluate progress in the development of modelling methods used to assess the effectiveness of such shielding as part of safety analyses. The transport of radiation through shielding materials is a major consideration in the safety design studies of nuclear power plants, and the modelling techniques used may be applied to many other types of scientific and technological facilities. Accelerator and irradiation facilities represent a key capability in R and D, medical and industrial infrastructures, and they can be used in a wide range of scientific, medical and industrial applications. High-energy ion accelerators, for example, are now used not only in fundamental research, such as the search for new super-heavy nuclei, but also for therapy as part of cancer treatment. While the energy of the incident particles on the shielding of these facilities may be much higher than those found in nuclear power plants, much of the physics associated with the behaviour of the secondary particles produced is similar, as are the computer modelling techniques used to quantify key safety design parameters, such as radiation dose and activation levels

  12. Computer-supported analysis of scientific measurements

    NARCIS (Netherlands)

    de Jong, Hidde

    1998-01-01

    In the past decade, large-scale databases and knowledge bases have become available to researchers working in a range of scientific disciplines. In many cases these databases and knowledge bases contain measurements of properties of physical objects which have been obtained in experiments or at

  13. From experiment to design -- Fault characterization and detection in parallel computer systems using computational accelerators

    Science.gov (United States)

    Yim, Keun Soo

    This dissertation summarizes experimental validation and co-design studies conducted to optimize the fault detection capabilities and overheads in hybrid computer systems (e.g., using CPUs and Graphics Processing Units, or GPUs), and consequently to improve the scalability of parallel computer systems using computational accelerators. The experimental validation studies were conducted to help us understand the failure characteristics of CPU-GPU hybrid computer systems under various types of hardware faults. The main characterization targets were faults that are difficult to detect and/or recover from, e.g., faults that cause long latency failures (Ch. 3), faults in dynamically allocated resources (Ch. 4), faults in GPUs (Ch. 5), faults in MPI programs (Ch. 6), and microarchitecture-level faults with specific timing features (Ch. 7). The co-design studies were based on the characterization results. One of the co-designed systems has a set of source-to-source translators that customize and strategically place error detectors in the source code of target GPU programs (Ch. 5). Another co-designed system uses an extension card to learn the normal behavioral and semantic execution patterns of message-passing processes executing on CPUs, and to detect abnormal behaviors of those parallel processes (Ch. 6). The third co-designed system is a co-processor that has a set of new instructions in order to support software-implemented fault detection techniques (Ch. 7). The work described in this dissertation gains more importance because heterogeneous processors have become an essential component of state-of-the-art supercomputers. GPUs were used in three of the five fastest supercomputers that were operating in 2011. Our work included comprehensive fault characterization studies in CPU-GPU hybrid computers. In CPUs, we monitored the target systems for a long period of time after injecting faults (a temporally comprehensive experiment), and injected faults into various types of

  14. NIH/NSF accelerate biomedical research innovations

    Science.gov (United States)

    A collaboration between the National Science Foundation and the National Institutes of Health will give NIH-funded researchers training to help them evaluate their scientific discoveries for commercial potential, with the aim of accelerating biomedical in

  15. Acceleration of Deep Neural Network Training with Resistive Cross-Point Devices: Design Considerations

    Directory of Open Access Journals (Sweden)

    Tayfun Gokmen

    2016-07-01

    In recent years, deep neural networks (DNN) have demonstrated significant business impact in large scale analysis and classification tasks such as speech recognition, visual object detection, pattern extraction, etc. Training of large DNNs, however, is universally considered a time-consuming and computationally intensive task that demands datacenter-scale computational resources recruited for many days. Here we propose a concept of resistive processing unit (RPU) devices that can potentially accelerate DNN training by orders of magnitude while using much less power. The proposed RPU device can store and update the weight values locally thus minimizing data movement during training and allowing to fully exploit the locality and the parallelism of the training algorithm. We evaluate the effect of various RPU device features/non-idealities and system parameters on performance in order to derive the device and system level specifications for implementation of an accelerator chip for DNN training in a realistic CMOS-compatible technology. For large DNNs with about 1 billion weights this massively parallel RPU architecture can achieve acceleration factors of 30,000X compared to state-of-the-art microprocessors while providing power efficiency of 84,000 GigaOps/s/W. Problems that currently require days of training on a datacenter-size cluster with thousands of machines can be addressed within hours on a single RPU accelerator. A system consisting of a cluster of RPU accelerators will be able to tackle Big Data problems with trillions of parameters that is impossible to address today like, for example, natural speech recognition and translation between all world languages, real-time analytics on large streams of business and scientific data, integration and analysis of multimodal sensory data flows from a massive number of IoT (Internet of Things) sensors.

  16. Acceleration of Deep Neural Network Training with Resistive Cross-Point Devices: Design Considerations.

    Science.gov (United States)

    Gokmen, Tayfun; Vlasov, Yurii

    2016-01-01

    In recent years, deep neural networks (DNN) have demonstrated significant business impact in large scale analysis and classification tasks such as speech recognition, visual object detection, pattern extraction, etc. Training of large DNNs, however, is universally considered a time-consuming and computationally intensive task that demands datacenter-scale computational resources recruited for many days. Here we propose a concept of resistive processing unit (RPU) devices that can potentially accelerate DNN training by orders of magnitude while using much less power. The proposed RPU device can store and update the weight values locally thus minimizing data movement during training and allowing to fully exploit the locality and the parallelism of the training algorithm. We evaluate the effect of various RPU device features/non-idealities and system parameters on performance in order to derive the device and system level specifications for implementation of an accelerator chip for DNN training in a realistic CMOS-compatible technology. For large DNNs with about 1 billion weights this massively parallel RPU architecture can achieve acceleration factors of 30,000X compared to state-of-the-art microprocessors while providing power efficiency of 84,000 GigaOps/s/W. Problems that currently require days of training on a datacenter-size cluster with thousands of machines can be addressed within hours on a single RPU accelerator. A system consisting of a cluster of RPU accelerators will be able to tackle Big Data problems with trillions of parameters that is impossible to address today like, for example, natural speech recognition and translation between all world languages, real-time analytics on large streams of business and scientific data, integration and analysis of multimodal sensory data flows from a massive number of IoT (Internet of Things) sensors.
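
    The property exploited above, local and parallel weight updates at each cross-point, amounts to applying a rank-1 outer-product update in place. A schematic NumPy version is given below; it mimics only the mathematics, not the stochastic pulse-coincidence scheme an actual RPU array would use.

      import numpy as np

      def rpu_like_update(W, x, delta, lr):
          """In-place rank-1 update W += lr * outer(delta, x).

          On an RPU array each cross-point device would apply its own element
          of this outer product locally and in parallel, so no weight data
          moves between memory and processor during training.
          """
          W += lr * np.outer(delta, x)
          return W

      rng = np.random.default_rng(0)
      W = 0.1 * rng.standard_normal((4, 3))
      x = rng.standard_normal(3)        # activations entering the layer
      delta = rng.standard_normal(4)    # backpropagated errors leaving it
      rpu_like_update(W, x, delta, lr=0.01)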

  17. Accelerator & Fusion Research Division: 1993 Summary of activities

    International Nuclear Information System (INIS)

    Chew, J.

    1994-04-01

    The Accelerator and Fusion Research Division (AFRD) is not only one of the largest scientific divisions at LBL, but also one of the most diverse. Major efforts include: (1) investigations in both inertial and magnetic fusion energy; (2) operation of the Advanced Light Source, a state-of-the-art synchrotron radiation facility; (3) exploratory investigations of novel radiation sources and colliders; (4) research and development in superconducting magnets for accelerators and other scientific and industrial applications; and (5) ion beam technology development for nuclear physics and for industrial and biomedical applications. Each of these topics is discussed in detail in this book.

  18. Accelerator Toolbox for MATLAB

    International Nuclear Information System (INIS)

    Terebilo, Andrei

    2001-01-01

    This paper introduces the Accelerator Toolbox (AT)--a collection of tools to model particle accelerators and beam transport lines in the MATLAB environment. At SSRL, it has become the modeling code of choice for the ongoing design and future operation of the SPEAR 3 synchrotron light source. AT was designed to take advantage of the power and simplicity of MATLAB--a commercially developed environment for technical computing and visualization. Many examples in this paper illustrate the advantages of the AT approach and contrast it with existing accelerator code frameworks.

  19. X-ray beam hardening correction for measuring density in linear accelerator industrial computed tomography

    International Nuclear Information System (INIS)

    Zhou Rifeng; Wang Jue; Chen Weimin

    2009-01-01

    Because X-ray attenuation is approximately proportional to material density, it is possible to measure inner density accurately from Industrial Computed Tomography (ICT) images. In practice, however, a number of factors, including the non-linear effects of beam hardening and diffuse scattered radiation, complicate the quantitative measurement of density variations in materials. This paper builds on the linearization method of beam hardening correction and uses polynomial fitting coefficients, obtained from the curvature of polychromatic beam data for iron, to fit other materials. Through theoretical deduction, the paper shows that the density measurement error is less than 2% if pre-filters are used to confine the linear accelerator spectrum mainly to the range 0.3 MeV to 3 MeV. An experiment was set up on an ICT system with a 9 MeV electron linear accelerator, with satisfactory results. This technique makes beam hardening correction easy and simple, and it is valuable for ICT density measurement and for using CT images to recognize materials. (authors)
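
    The linearization idea referred to above can be illustrated with a short sketch: each measured polychromatic projection value is mapped through a fitted polynomial so that the corrected value again grows linearly with penetrated thickness. The function names below are placeholders and the coefficients are assumed to come from a prior calibration (in the paper, from iron polychromatic beam data); this is not the authors' code:

        #include <vector>

        // Polynomial linearization for beam-hardening correction:
        // p_corr = c0 + c1*p + c2*p^2 + ...  with coefficients fitted beforehand.
        double linearize(double p, const std::vector<double>& coeffs)
        {
            double corrected = 0.0, power = 1.0;
            for (double c : coeffs) {
                corrected += c * power;   // accumulate c_k * p^k
                power *= p;
            }
            return corrected;
        }

        // Apply the correction to every projection value of a sinogram before reconstruction.
        void correct_sinogram(std::vector<double>& sinogram, const std::vector<double>& coeffs)
        {
            for (double& p : sinogram) p = linearize(p, coeffs);
        }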

  20. GPU accelerated flow solver for direct numerical simulation of turbulent flows

    Energy Technology Data Exchange (ETDEWEB)

    Salvadore, Francesco [CASPUR – via dei Tizii 6/b, 00185 Rome (Italy); Bernardini, Matteo, E-mail: matteo.bernardini@uniroma1.it [Department of Mechanical and Aerospace Engineering, University of Rome ‘La Sapienza’ – via Eudossiana 18, 00184 Rome (Italy); Botti, Michela [CASPUR – via dei Tizii 6/b, 00185 Rome (Italy)

    2013-02-15

    Graphical processing units (GPUs), characterized by significant computing performance, are nowadays very appealing for the solution of computationally demanding tasks in a wide variety of scientific applications. However, to run on GPUs, existing codes need to be ported and optimized, a procedure which is not yet standardized and may require non-trivial effort, even for high-performance computing specialists. In the present paper we accurately describe the porting to CUDA (Compute Unified Device Architecture) of a finite-difference compressible Navier–Stokes solver, suitable for direct numerical simulation (DNS) of turbulent flows. Porting and validation processes are illustrated in detail, with emphasis on computational strategies and techniques that can be applied to overcome typical bottlenecks arising from the porting of common computational fluid dynamics solvers. We demonstrate that careful optimization work is crucial to get the highest performance from GPU accelerators. The results show that the overall speedup of one NVIDIA Tesla S2070 GPU is approximately 22 compared with one AMD Opteron 2352 Barcelona chip and 11 compared with one Intel Xeon X5650 Westmere core. The potential of GPU devices in the simulation of unsteady three-dimensional turbulent flows is proved by performing a DNS of a spatially evolving compressible mixing layer.
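
    For readers unfamiliar with what gets ported in such an effort: the run time of a finite-difference DNS solver is dominated by stencil kernels such as the central-difference derivative sketched below, shown here in plain C++ as the CPU reference form. In a CUDA port, each interior grid point would be handled by one GPU thread; the routine is a generic illustration, not code from the solver described in the abstract:

        #include <cstddef>
        #include <vector>

        // Second-order central-difference derivative along one direction: the
        // archetypal stencil a finite-difference Navier-Stokes solver evaluates at
        // every grid point, every step.  A GPU port changes only how the loop is
        // mapped onto threads, not the arithmetic, which is why such kernels are
        // good candidates for acceleration.
        void ddx_central(const std::vector<double>& f, std::vector<double>& dfdx, double dx)
        {
            const std::size_t n = f.size();
            if (n < 2) return;                        // nothing to differentiate
            for (std::size_t i = 1; i + 1 < n; ++i)
                dfdx[i] = (f[i + 1] - f[i - 1]) / (2.0 * dx);
            dfdx[0]     = (f[1] - f[0]) / dx;         // one-sided at the boundaries
            dfdx[n - 1] = (f[n - 1] - f[n - 2]) / dx;
        }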

  1. Certification of version 1.2 of the PORFLO-3 code for the WHC scientific and engineering computational center

    International Nuclear Information System (INIS)

    Kline, N.W.

    1994-01-01

    Version 1.2 of the PORFLO-3 Code has migrated from the Hanford Cray computer to workstations in the WHC Scientific and Engineering Computational Center. The workstation-based configuration and acceptance testing are inherited from the CRAY-based configuration. The purpose of this report is to document differences in the new configuration as compared to the parent Cray configuration, and summarize some of the acceptance test results which have shown that the migrated code is functioning correctly in the new environment

  2. Optimizing accelerator technology

    CERN Multimedia

    Katarina Anthony

    2012-01-01

    A new EU-funded research and training network, oPAC, is bringing together 22 universities, research centres and industry partners to optimize particle accelerator technology. CERN is one of the network’s main partners and will host 5 early-stage researchers in the BE department.
    A diamond detector that will be used for novel beam diagnostics applications in the oPAC project based at CIVIDEC. (Image courtesy of CIVIDEC.)
    As one of the largest Marie Curie Initial Training Networks ever funded by the EU – to the tune of €6 million – oPAC extends well beyond the particle physics community. “Accelerator physics has become integral to research in almost every scientific discipline – be it biology and life science, medicine, geology and material science, or fundamental physics,” explains Carsten P. Welsch, oPAC co-ordinator based at the University of Liverpool. “By optimizing the operation of accelerators, all of these...

  3. Impact of configuration management system of computer center on support of scientific projects throughout their lifecycle

    International Nuclear Information System (INIS)

    Bogdanov, A.V.; Yuzhanin, N.V.; Zolotarev, V.I.; Ezhakova, T.R.

    2017-01-01

    In this article the problem of supporting scientific projects throughout their lifecycle in a computer center is considered from every aspect of support. The Configuration Management system plays a connecting role in the processes related to the provision and support of computer center services. In view of the strong integration of IT infrastructure components through the use of virtualization, control of the infrastructure becomes even more critical to the support of research projects, which means higher requirements for the Configuration Management system. For every aspect of research project support, the influence of the Configuration Management system is reviewed and the development of the corresponding elements of the system is described in the present paper.

  4. Physics and technical development of accelerators; Physique et technique des accelerateurs

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2000-03-01

    About 90 registered participants delivered more than 40 scientific papers. A large part of these presentations were of general interest, concerning running projects such as the CIME accelerator at Ganil, IPHI (the high intensity proton injector), the ESRF (European source of synchrotron radiation), the LHC (Large Hadron Collider), the ELYSE accelerator at Orsay, AIRIX, and the VIVITRON tandem accelerator. Other presentations highlighted the latest technological developments in accelerator components: superconducting cavities, power klystrons, high current injectors...

  5. The evolution of high energy accelerators

    International Nuclear Information System (INIS)

    Courant, E.D.

    1989-10-01

    In this lecture I would like to trace how high energy particle accelerators have grown from tools used for esoteric small-scale experiments to gigantic projects being hotly debated in Congress as well as in the scientific community

  6. [Technical Gap of Chinese Medical Accelerator and Its Development Path].

    Science.gov (United States)

    Tian, Xinzhi

    2017-11-30

    After nearly four decades of development since the reform and opening up, China's medical accelerator industry now faces the demands of a new era. In this new historical period, the younger generation of medical accelerator staff must take on a historical responsibility different from that of the older generation of scientific research personnel. Building on the work of our predecessors, we try to analyze the current situation of the domestic accelerator, establish new development ideas for the domestic medical accelerator, and directly face and resolve the difficulties confronting the development of the domestic accelerator.

  7. I - Template Metaprogramming for Massively Parallel Scientific Computing - Expression Templates

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    Large scale scientific computing raises questions on different levels, ranging from the formulation of the problems to the choice of the best algorithms and their implementation for a specific platform. There are similarities in these different topics that can be exploited by modern-style C++ template metaprogramming techniques to produce readable, maintainable and generic code. Traditional low-level code tends to be fast but platform-dependent, and it obfuscates the meaning of the algorithm. On the other hand, an object-oriented approach is nice to read, but may come with an inherent performance penalty. These lectures aim to present the basics of the Expression Template (ET) idiom, which allows us to keep the object-oriented approach without sacrificing performance. We will in particular show how to enhance ET to include SIMD vectorization. We will then introduce techniques for abstracting iteration, and introduce thread-level parallelism for use in heavy data-centric loads. We will show how to apply these methods i...
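
    A minimal illustration of the Expression Template idiom mentioned above: the sum of two vectors is represented by a lightweight object that records its operands, and the actual loop runs only once, at assignment, so chained sums produce no temporaries. This is a generic textbook sketch (C++17), not material from the lecture itself:

        #include <cstddef>
        #include <type_traits>
        #include <vector>

        struct Expr {};                     // tag: only our types take part in operator+

        template <class L, class R>
        struct Add : Expr {                 // records the operands, computes nothing yet
            const L& l;
            const R& r;
            Add(const L& lhs, const R& rhs) : l(lhs), r(rhs) {}
            double operator[](std::size_t i) const { return l[i] + r[i]; }
            std::size_t size() const { return l.size(); }
        };

        struct Vec : Expr {
            std::vector<double> data;
            explicit Vec(std::size_t n, double v = 0.0) : data(n, v) {}
            double  operator[](std::size_t i) const { return data[i]; }
            double& operator[](std::size_t i)       { return data[i]; }
            std::size_t size() const { return data.size(); }

            template <class E, class = std::enable_if_t<std::is_base_of_v<Expr, E>>>
            Vec& operator=(const E& e) {    // single evaluation loop, no temporaries
                for (std::size_t i = 0; i < size(); ++i) data[i] = e[i];
                return *this;
            }
        };

        template <class L, class R,
                  class = std::enable_if_t<std::is_base_of_v<Expr, L> &&
                                           std::is_base_of_v<Expr, R>>>
        Add<L, R> operator+(const L& lhs, const R& rhs) { return Add<L, R>(lhs, rhs); }

        // Usage: Vec a(1000, 1.0), b(1000, 2.0), c(1000, 3.0), d(1000);
        //        d = a + b + c;   // builds Add<Add<Vec,Vec>,Vec>, evaluated in one pass

    The Expr tag keeps the templated operator+ from interfering with unrelated types; production libraries typically achieve the same effect with CRTP base classes.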

  8. Checkpointing for a hybrid computing node

    Science.gov (United States)

    Cher, Chen-Yong

    2016-03-08

    According to an aspect, a method for checkpointing in a hybrid computing node includes executing a task in a processing accelerator of the hybrid computing node. A checkpoint is created in a local memory of the processing accelerator. The checkpoint includes state data to restart execution of the task in the processing accelerator upon a restart operation. Execution of the task is resumed in the processing accelerator after creating the checkpoint. The state data of the checkpoint are transferred from the processing accelerator to a main processor of the hybrid computing node while the processing accelerator is executing the task.
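
    The pattern described in the abstract can be sketched with ordinary host-side C++ standing in for a real accelerator runtime: the task snapshots its state into "local memory", resumes immediately, and the snapshot is shipped to the main processor in the background. All names and the use of std::async are our illustrative stand-ins, not the patented implementation:

        #include <cstddef>
        #include <future>
        #include <vector>

        struct Checkpoint { std::size_t step = 0; std::vector<double> state; };

        int main()
        {
            std::vector<double> state(1 << 20, 0.0);   // task state on the accelerator
            Checkpoint host_copy;                      // host-side checkpoint storage
            std::future<void> transfer;                // in-flight copy to the host

            for (std::size_t step = 0; step < 1000; ++step) {
                state[step % state.size()] += 1.0;     // ... one step of the task ...

                if (step % 100 == 0) {
                    Checkpoint local{step, state};          // checkpoint in local memory
                    if (transfer.valid()) transfer.wait();  // previous copy must be done
                    transfer = std::async(std::launch::async,
                        [&host_copy, cp = std::move(local)] { host_copy = cp; });
                }
            }
            if (transfer.valid()) transfer.wait();
            return 0;
        }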

  9. The Centre of High-Performance Scientific Computing, Geoverbund, ABC/J - Geosciences enabled by HPSC

    Science.gov (United States)

    Kollet, Stefan; Görgen, Klaus; Vereecken, Harry; Gasper, Fabian; Hendricks-Franssen, Harrie-Jan; Keune, Jessica; Kulkarni, Ketan; Kurtz, Wolfgang; Sharples, Wendy; Shrestha, Prabhakar; Simmer, Clemens; Sulis, Mauro; Vanderborght, Jan

    2016-04-01

    The Centre of High-Performance Scientific Computing (HPSC TerrSys) was founded in 2011 to establish a centre of competence in high-performance scientific computing in terrestrial systems and the geosciences, enabling fundamental and applied geoscientific research in the Geoverbund ABC/J (the geoscientific research alliance of the Universities of Aachen, Cologne, Bonn and the Research Centre Jülich, Germany). The specific goals of HPSC TerrSys are to achieve relevance at the national and international level in (i) the development and application of HPSC technologies in the geoscientific community; (ii) student education; (iii) HPSC services and support also to the wider geoscientific community; and in (iv) the industry and public sectors via, e.g., useful applications and data products. A key feature of HPSC TerrSys is the Simulation Laboratory Terrestrial Systems, which is located at the Jülich Supercomputing Centre (JSC) and provides extensive capabilities with respect to porting, profiling, tuning and performance monitoring of geoscientific software in JSC's supercomputing environment. We will present a summary of success stories of HPSC applications, including integrated terrestrial model development, parallel profiling and its application from watersheds to the continent; massively parallel data assimilation using physics-based models and ensemble methods; quasi-operational terrestrial water and energy monitoring; and convection-permitting climate simulations over Europe. The success stories stress the need for a formalized education of students in the application of HPSC technologies in the future.

  10. Recent Improvements to CHEF, a Framework for Accelerator Computations

    Energy Technology Data Exchange (ETDEWEB)

    Ostiguy, J.-F.; Michelotti, L.P.; /Fermilab

    2009-05-01

    CHEF is a body of software dedicated to accelerator-related computations. It consists of a hierarchical set of libraries and a stand-alone application based on the latter. The implementation language is C++; the code makes extensive use of templates and modern idioms such as iterators, smart pointers and generalized function objects. CHEF has been described in a few contributions at previous conferences. In this paper, we provide an overview and discuss recent improvements. Formally, CHEF refers to two distinct but related things: (1) a set of class libraries; and (2) a stand-alone application based on these libraries. The application makes use of and exposes a subset of the capabilities provided by the libraries. CHEF has its ancestry in efforts started in the early nineties. At that time, A. Dragt, E. Forest [2] and others showed that ring dynamics can be formulated in a way that puts maps, rather than Hamiltonians, into a central role. Automatic differentiation (AD) techniques, which were just coming of age, were a natural fit in a context where maps are represented by their Taylor approximations. The initial vision, which CHEF carried over, was to develop a code that (1) concurrently supports conventional tracking and linear and non-linear map-based techniques, (2) avoids 'hardwired' approximations that are not under user control, and (3) provides building blocks for applications. C++ was adopted as the implementation language because of its comprehensive support for operator overloading and the equal status it confers to built-in and user-defined data types. It should be mentioned that acceptance of AD techniques in accelerator science owes much to the pioneering work of Berz [1], who implemented--in Fortran--the first production-quality AD engine (the foundation for the code COSY). Nowadays other engines are available, but few are native C++ implementations. Although AD engines and map-based techniques are making their way into more traditional codes e.g. [5

  11. Recent Improvements to CHEF, a Framework for Accelerator Computations

    International Nuclear Information System (INIS)

    Ostiguy, J.-F.; Michelotti, L.P.

    2009-01-01

    CHEF is a body of software dedicated to accelerator-related computations. It consists of a hierarchical set of libraries and a stand-alone application based on the latter. The implementation language is C++; the code makes extensive use of templates and modern idioms such as iterators, smart pointers and generalized function objects. CHEF has been described in a few contributions at previous conferences. In this paper, we provide an overview and discuss recent improvements. Formally, CHEF refers to two distinct but related things: (1) a set of class libraries; and (2) a stand-alone application based on these libraries. The application makes use of and exposes a subset of the capabilities provided by the libraries. CHEF has its ancestry in efforts started in the early nineties. At that time, A. Dragt, E. Forest [2] and others showed that ring dynamics can be formulated in a way that puts maps, rather than Hamiltonians, into a central role. Automatic differentiation (AD) techniques, which were just coming of age, were a natural fit in a context where maps are represented by their Taylor approximations. The initial vision, which CHEF carried over, was to develop a code that (1) concurrently supports conventional tracking and linear and non-linear map-based techniques, (2) avoids 'hardwired' approximations that are not under user control, and (3) provides building blocks for applications. C++ was adopted as the implementation language because of its comprehensive support for operator overloading and the equal status it confers to built-in and user-defined data types. It should be mentioned that acceptance of AD techniques in accelerator science owes much to the pioneering work of Berz [1], who implemented--in Fortran--the first production-quality AD engine (the foundation for the code COSY). Nowadays other engines are available, but few are native C++ implementations. Although AD engines and map-based techniques are making their way into more traditional codes e.g. [5], it is also

  12. Applications of laser-driven particle acceleration

    CERN Document Server

    Parodi, Katia; Schreiber, Jorg

    2018-01-01

    The first book of its kind to highlight the unique capabilities of laser-driven acceleration and its diverse potential, Applications of Laser-Driven Particle Acceleration presents the basic understanding of acceleration concepts and envisioned prospects for selected applications. As the main focus, this new book explores exciting and diverse application possibilities, with emphasis on those uniquely enabled by the laser driver that can also be meaningful and realistic for potential users. A key aim of the book is to inform multiple, interdisciplinary research communities of the new possibilities available and to inspire them to engage with laser-driven acceleration, further motivating and advancing this developing field. Material is presented in a thorough yet accessible manner, making it a valuable reference text for general scientific and engineering researchers who are not necessarily subject matter experts. Applications of Laser-Driven Particle Acceleration is edited by Professors Paul R. Bolton, Katia ...

  13. The auroral electron accelerator

    International Nuclear Information System (INIS)

    Bryant, D.A.; Hall, D.S.

    1989-01-01

    A model of the auroral electron acceleration process is presented in which the electrons are accelerated resonantly by lower-hybrid waves. The essentially stochastic acceleration process is approximated for the purposes of computation by a deterministic model involving an empirically derived energy transfer function. The empirical function, which is consistent with all that is known of electron energization by lower-hybrid waves, allows many, possibly all, observed features of the electron distribution to be reproduced. It is suggested that the process occurs widely in both space and laboratory plasmas. (author)

  14. A Modular Environment for Geophysical Inversion and Run-time Autotuning using Heterogeneous Computing Systems

    Science.gov (United States)

    Myre, Joseph M.

    Heterogeneous computing systems have recently come to the forefront of the High-Performance Computing (HPC) community's interest. HPC computer systems that incorporate special purpose accelerators, such as Graphics Processing Units (GPUs), are said to be heterogeneous. Large scale heterogeneous computing systems have consistently ranked highly on the Top500 list since the beginning of the heterogeneous computing trend. By using heterogeneous computing systems that consist of both general purpose processors and special-purpose accelerators, the speed and problem size of many simulations could be dramatically increased. Ultimately this results in enhanced simulation capabilities that allow, in some cases for the first time, the execution of parameter space and uncertainty analyses, model optimizations, and other inverse modeling techniques that are critical for scientific discovery and engineering analysis. However, simplifying the usage and optimization of codes for heterogeneous computing systems remains a challenge. This is particularly true for scientists and engineers for whom understanding HPC architectures and undertaking performance analysis may not be primary research objectives. To enable scientists and engineers to remain focused on their primary research objectives, a modular environment for geophysical inversion and run-time autotuning on heterogeneous computing systems is presented. This environment is composed of three major components: 1) CUSH---a framework for reducing the complexity of programming heterogeneous computer systems, 2) geophysical inversion routines which can be used to characterize physical systems, and 3) run-time autotuning routines designed to determine configurations of heterogeneous computing systems in an attempt to maximize the performance of scientific and engineering codes. Using three case studies, a lattice-Boltzmann method, a non-negative least squares inversion, and a finite-difference fluid flow method, it is shown that

  15. Advanced computations in plasma physics

    International Nuclear Information System (INIS)

    Tang, W.M.

    2002-01-01

    Scientific simulation in tandem with theory and experiment is an essential tool for understanding complex plasma behavior. In this paper we review recent progress and future directions for advanced simulations in magnetically confined plasmas with illustrative examples chosen from magnetic confinement research areas such as microturbulence, magnetohydrodynamics, magnetic reconnection, and others. Significant recent progress has been made in both particle and fluid simulations of fine-scale turbulence and large-scale dynamics, giving increasingly good agreement between experimental observations and computational modeling. This was made possible by innovative advances in analytic and computational methods for developing reduced descriptions of physics phenomena spanning widely disparate temporal and spatial scales together with access to powerful new computational resources. In particular, the fusion energy science community has made excellent progress in developing advanced codes for which computer run-time and problem size scale well with the number of processors on massively parallel machines (MPP's). A good example is the effective usage of the full power of multi-teraflop (multi-trillion floating point computations per second) MPP's to produce three-dimensional, general geometry, nonlinear particle simulations which have accelerated progress in understanding the nature of turbulence self-regulation by zonal flows. It should be emphasized that these calculations, which typically utilized billions of particles for thousands of time-steps, would not have been possible without access to powerful present generation MPP computers and the associated diagnostic and visualization capabilities. In general, results from advanced simulations provide great encouragement for being able to include increasingly realistic dynamics to enable deeper physics insights into plasmas in both natural and laboratory environments. The associated scientific excitement should serve to

  16. Hacking control systems, switching… accelerators off?

    CERN Multimedia

    Computer Security Team

    2013-01-01

    In response to our article in the last Bulletin, we received the following comment: “Wasn’t Stuxnet designed to stop the Iranian nuclear programme? Why then all this noise with regard to CERN accelerators? Don’t you realize that ‘computer security’ is not the raison d'être of CERN?”. Thank you for this golden opportunity to delve into this issue. Given the sophistication of Stuxnet, it might have been hard to detect such a targeted attack against CERN, if at all. But this is not the point. There are much simpler risks for our accelerator complex and infrastructure. And, while “‘computer security’ is [indeed] not the raison d'être”, it is our collective responsibility to keep this risk at bay. Examples? Just think of a simple computer virus infecting Windows-based control PCs connected to the accelerator network (the Technical Network, ...

  17. Cyclotron method for heavy ion acceleration

    International Nuclear Information System (INIS)

    Gikal, B.N.; Gul'bekyan, G.G.; Kutner, V.B.; Oganesyan, R.Ts.

    1984-01-01

    Studies of heavy ion beams over a wide range of masses (up to uranium) and energies reveal essential potential opportunities for the solution of both fundamental scientific and significant economic problems. A cyclotron method for heavy ion acceleration is considered, and the development of low and medium energy heavy ion accelerators is reviewed. The design of a complex comprising two isochronous cyclotrons, which is planned to be constructed in the JINR, is described. The cyclotron complex includes the U-400 and the U-400 M cyclotrons and it is intended for acceleration of both 35-20 MeV/nucleon superheavy ions such as Xe-U and 120 MeV/nucleon light ions. Certain systems of the accelerators are described. Prospects for the development of the U-400 and the U-400 M are outlined

  18. Flexusi Interface Builder For Computer Based Accelerator Monitoring And Control System

    CERN Document Server

    Kurakin, V G; Kurakin, P V

    2004-01-01

    We have developed computer code for designing any desired graphical user interface for a monitoring and control system at the executable level. This means that an operator can build up a measurement console consisting of virtual devices before or even during a real experiment, without recompiling the source file. Such functionality offers a number of advantages compared with traditional programming. First of all, the risk of introducing a bug into the source code disappears. Another important point is that program developers and operator staff do not interfere with each other in developing the ultimate product (the measurement console). Thus, a small team without a detailed project plan can design even a very complicated monitoring and control system. For these reasons, the suggested approach is especially helpful for large complexes to be monitored and controlled, accelerators being among them. The program code consists of several modules responsible for data acquisition, control and representation. Borland C++ Builder technologies based on VCL...

  19. Sudden Cardiac Risk Stratification with Electrocardiographic Indices - A Review on Computational Processing, Technology Transfer, and Scientific Evidence

    Directory of Open Access Journals (Sweden)

    Francisco Javier Gimeno-Blanes

    2016-03-01

    Full Text Available Great effort has been devoted in recent years to the development of sudden cardiac risk predictors as a function of electric cardiac signals, mainly obtained from electrocardiogram (ECG) analysis. But these prediction techniques are still seldom used in clinical practice, partly due to their limited diagnostic accuracy and to the lack of consensus about the appropriate computational signal processing implementation. This paper takes a three-fold approach, based on ECG indexes, to structure this review on sudden cardiac risk stratification: first, through the computational techniques that have been widely proposed for obtaining these indexes in the technical literature; second, through the scientific evidence, which, although supported by observational clinical studies, is not always representative enough; and third, through the limited technology transfer of academy-accepted algorithms, which requires further consideration for future systems. We focus on three families of ECG-derived indexes which are tackled from the aforementioned viewpoints, namely, heart rate turbulence, heart rate variability, and T-wave alternans. In terms of computational algorithms, we still need clearer scientific evidence, standardization, and benchmarking, resting on advanced algorithms applied over large and representative datasets. New scenarios like electronic health recordings, big data, long-term monitoring, and cloud databases will eventually open new frameworks to foresee suitable new paradigms in the near future.

  20. Automatic generation of application specific FPGA multicore accelerators

    DEFF Research Database (Denmark)

    Hindborg, Andreas Erik; Schleuniger, Pascal; Jensen, Nicklas Bo

    2014-01-01

    High performance computing systems make increasing use of hardware accelerators to improve performance and power properties. For large high-performance FPGAs to be successfully integrated in such computing systems, methods to raise the abstraction level of FPGA programming are required...... to identify optimal performance-energy trade-off points for a multicore-based FPGA accelerator....

  1. ORNL 25 MV tandem accelerator control system

    International Nuclear Information System (INIS)

    Juras, R.C.; Biggerstaff, J.A.; Hoglund, D.E.

    1985-01-01

    The CAMAC-based control system for the 25 MV tandem electrostatic accelerator of the Holifield Heavy Ion Research Facility at Oak Ridge National Laboratory (ORNL) was specified by ORNL and built by the National Electrostatics Corporation. Two Perkin-Elmer 32-bit minicomputers are used in the system, a message switching computer and a supervisory computer. The message switching computer transmits and receives control information on six serial highways. This computer shares memory with the supervisory computer. Operator consoles are located on a serial highway; control is by means of a console CRT, trackball, and assignable shaft encoders and meters. Two identical consoles operate simultaneously: one is located in the tandem control room; the other is located in the cyclotron control room to facilitate operation during injection of tandem beams into the cyclotron or when beam lines under control of the cyclotron control system are used. The supervisory computer is used for accelerator parameter setup calculations, actual accelerator setup for new beams based on scaled, recorded parameters from previously run beams, and various other functions. Nearly seven years of control system operation and improvements will be discussed

  2. The Computer Program LIAR for Beam Dynamics Calculations in Linear Accelerators

    International Nuclear Information System (INIS)

    Assmann, R.W.; Adolphsen, C.; Bane, K.; Raubenheimer, T.O.; Siemann, R.H.; Thompson, K.

    2011-01-01

    Linear accelerators are the central components of the proposed next generation of linear colliders. They need to provide acceleration of up to 750 GeV per beam while maintaining very small normalized emittances. Standard simulation programs, mainly developed for storage rings, do not meet the specific requirements for high energy linear accelerators. We present a new program LIAR ('LInear Accelerator Research code') that includes wakefield effects, a 6D coupled beam description, specific optimization algorithms and other advanced features. Its modular structure allows to use and to extend it easily for different purposes. The program is available for UNIX workstations and Windows PC's. It can be applied to a broad range of accelerators. We present examples of simulations for SLC and NLC.

  3. Frontiers of massively parallel scientific computation

    International Nuclear Information System (INIS)

    Fischer, J.R.

    1987-07-01

    Practical applications using massively parallel computer hardware first appeared during the 1980s. Their development was motivated by the need for computing power orders of magnitude beyond that available today for tasks such as numerical simulation of complex physical and biological processes, generation of interactive visual displays, satellite image analysis, and knowledge based systems. Representative of the first generation of this new class of computers is the Massively Parallel Processor (MPP). A team of scientists was provided the opportunity to test and implement their algorithms on the MPP. The first results are presented. The research spans a broad variety of applications including Earth sciences, physics, signal and image processing, computer science, and graphics. The performance of the MPP was very good. Results obtained using the Connection Machine and the Distributed Array Processor (DAP) are presented

  4. Special scientific programme on use of high energy accelerators for transmutation of actinides and power production

    International Nuclear Information System (INIS)

    1994-09-01

    Various techniques for the transmutation of radioactive waste through the use of high energy accelerators are reviewed and discussed. In particular, the present publication contains presentations on (i) requirements and the technical possibilities for the transmutation of long-lived radionuclides (background paper); (ii) high energy particle accelerators for bulk transformation of elements and energy generation; (iii) the resolution of nuclear energy issues using accelerator-driven technology; (iv) the use of proton accelerators for the transmutation of actinides and power production; (v) the coupling of an accelerator to a subcritical fission reactor (with a view on its potential impact on waste transmutation); (vi) research and development of accelerator-based transmutation technology at JAERI (Japan); and (vii) questions and problems with regard to accelerator-driven nuclear power and transmutation facilities. Refs, figs and tabs

  5. Accelerating Dust Storm Simulation by Balancing Task Allocation in Parallel Computing Environment

    Science.gov (United States)

    Gui, Z.; Yang, C.; XIA, J.; Huang, Q.; YU, M.

    2013-12-01

    Dust storms have serious negative impacts on the environment, human health, and assets. The continuing global climate change has increased the frequency and intensity of dust storms in the past decades. To better understand and predict the distribution, intensity and structure of dust storms, a series of dust storm models have been developed, such as the Dust Regional Atmospheric Model (DREAM), the NMM meteorological module (NMM-dust) and the Chinese Unified Atmospheric Chemistry Environment for Dust (CUACE/Dust). The developments and applications of these models have contributed significantly to both scientific research and our daily life. However, dust storm simulation is a data and computing intensive process. Normally, a simulation for a single dust storm event may take several hours or days to run, which seriously impacts the timeliness of prediction and potential applications. To speed up the process, high performance computing is widely adopted. By partitioning a large study area into small subdomains according to their geographic location and executing them on different computing nodes in a parallel fashion, the computing performance can be significantly improved. Since spatiotemporal correlations exist in the geophysical process of dust storm simulation, each subdomain allocated to a node needs to communicate with other geographically adjacent subdomains to exchange data. Inappropriate allocations may introduce imbalanced task loads and unnecessary communications among computing nodes. Therefore, the task allocation method is the key factor, which may affect the feasibility of parallelization. The allocation algorithm needs to carefully leverage the computing cost and communication cost for each computing node to minimize total execution time and reduce overall communication cost for the entire system. This presentation introduces two algorithms for such allocation and compares them with an evenly distributed allocation method. Specifically, 1) In order to get optimized solutions, a
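
    As a point of reference for the allocation problem posed above, the sketch below shows the simplest non-trivial strategy: a greedy "largest cost first" assignment of subdomains to nodes that balances estimated compute cost only and ignores the communication between adjacent subdomains that the presented algorithms also take into account. It is a generic illustration, not one of the two algorithms introduced in the presentation:

        #include <algorithm>
        #include <cstddef>
        #include <functional>
        #include <queue>
        #include <utility>
        #include <vector>

        // Assign each subdomain (with an estimated compute cost) to the currently
        // least-loaded node, processing the most expensive subdomains first.
        std::vector<int> allocate(const std::vector<double>& cost, int nodes)
        {
            std::vector<int> order(cost.size());
            for (std::size_t i = 0; i < order.size(); ++i) order[i] = static_cast<int>(i);
            std::sort(order.begin(), order.end(),
                      [&](int a, int b) { return cost[a] > cost[b]; });

            using Load = std::pair<double, int>;                 // (accumulated cost, node id)
            std::priority_queue<Load, std::vector<Load>, std::greater<Load>> heap;
            for (int n = 0; n < nodes; ++n) heap.push({0.0, n});

            std::vector<int> owner(cost.size(), -1);             // owner[i] = node of subdomain i
            for (int sub : order) {
                auto [load, node] = heap.top();                  // least loaded node so far
                heap.pop();
                owner[sub] = node;
                heap.push({load + cost[sub], node});
            }
            return owner;
        }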

  6. Computing at the leading edge: Research in the energy sciences

    Energy Technology Data Exchange (ETDEWEB)

    Mirin, A.A.; Van Dyke, P.T. [eds.]

    1994-02-01

    The purpose of this publication is to highlight selected scientific challenges that have been undertaken by the DOE Energy Research community. The high quality of the research reflected in these contributions underscores the growing importance both to the Grand Challenge scientific efforts sponsored by DOE and of the related supporting technologies that the National Energy Research Supercomputer Center (NERSC) and other facilities are able to provide. The continued improvement of the computing resources available to DOE scientists is prerequisite to ensuring their future progress in solving the Grand Challenges. Titles of articles included in this publication include: the numerical tokamak project; static and animated molecular views of a tumorigenic chemical bound to DNA; toward a high-performance climate systems model; modeling molecular processes in the environment; lattice Boltzmann models for flow in porous media; parallel algorithms for modeling superconductors; parallel computing at the Superconducting Super Collider Laboratory; the advanced combustion modeling environment; adaptive methodologies for computational fluid dynamics; lattice simulations of quantum chromodynamics; simulating high-intensity charged-particle beams for the design of high-power accelerators; electronic structure and phase stability of random alloys.

  7. Computing at the leading edge: Research in the energy sciences

    International Nuclear Information System (INIS)

    Mirin, A.A.; Van Dyke, P.T.

    1994-01-01

    The purpose of this publication is to highlight selected scientific challenges that have been undertaken by the DOE Energy Research community. The high quality of the research reflected in these contributions underscores the growing importance both to the Grand Challenge scientific efforts sponsored by DOE and of the related supporting technologies that the National Energy Research Supercomputer Center (NERSC) and other facilities are able to provide. The continued improvement of the computing resources available to DOE scientists is prerequisite to ensuring their future progress in solving the Grand Challenges. Titles of articles included in this publication include: the numerical tokamak project; static and animated molecular views of a tumorigenic chemical bound to DNA; toward a high-performance climate systems model; modeling molecular processes in the environment; lattice Boltzmann models for flow in porous media; parallel algorithms for modeling superconductors; parallel computing at the Superconducting Super Collider Laboratory; the advanced combustion modeling environment; adaptive methodologies for computational fluid dynamics; lattice simulations of quantum chromodynamics; simulating high-intensity charged-particle beams for the design of high-power accelerators; electronic structure and phase stability of random alloys

  8. Field bus technology in accelerator control systems

    International Nuclear Information System (INIS)

    Tang Shuming

    1999-01-01

    From the eighties to the present, computer technology, network communication and ULSI technology have been developing rapidly. The level of control for industry and scientific experiments has been upgraded accordingly, so as to meet the increasing requirements for automation. Control systems have become more complicated, and the devices in control systems have become more and more intelligent. However, the cost of a DCS (Distributed Control System) is quite high and the period of system integration is very long. More than ten fieldbuses have been defined in the world in order to achieve interoperability of intelligent devices and reduce costs. The author briefly presents the development trend of fieldbuses and describes the main performances of CAN, LONWORKS, WorldFIP and PROFIBUS, which are the ones mainly used in the world today. The author proposes that fieldbus technology be introduced into the accelerator control systems of the country

  9. Spacetime transformations from a uniformly accelerated frame

    International Nuclear Information System (INIS)

    Friedman, Yaakov; Scarr, Tzvi

    2013-01-01

    We use the generalized Fermi–Walker transport to construct a one-parameter family of inertial frames which are instantaneously comoving to a uniformly accelerated observer. We explain the connection between our approach and that of Mashhoon. We show that our solutions of uniformly accelerated motion have constant acceleration in the comoving frame. Assuming the weak hypothesis of locality, we obtain local spacetime transformations from a uniformly accelerated frame K′ to an inertial frame K. The spacetime transformations between two uniformly accelerated frames with the same acceleration are Lorentz. We compute the metric at an arbitrary point of a uniformly accelerated frame. (paper)
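
    For orientation, the elementary special case behind this construction is one-dimensional hyperbolic motion, a standard special-relativity result rather than anything specific to the paper: for constant proper acceleration a along x, the worldline seen from the inertial frame K, parametrized by the observer's proper time τ, is

        t(\tau) = \frac{c}{a}\,\sinh\frac{a\tau}{c}, \qquad
        x(\tau) = \frac{c^{2}}{a}\left[\cosh\frac{a\tau}{c} - 1\right], \qquad
        v(\tau) = c\,\tanh\frac{a\tau}{c},

    so the velocity saturates at c while the acceleration measured in the comoving frame remains constant.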

  10. Impact of configuration management system of computer center on support of scientific projects throughout their lifecycle

    Science.gov (United States)

    Bogdanov, A. V.; Iuzhanin, N. V.; Zolotarev, V. I.; Ezhakova, T. R.

    2017-12-01

    In this article the problem of supporting scientific projects throughout their lifecycle in a computer center is considered from every aspect of support. The Configuration Management system plays a connecting role in the processes related to the provision and support of computer center services. In view of the strong integration of IT infrastructure components through the use of virtualization, control of the infrastructure becomes even more critical to the support of research projects, which means higher requirements for the Configuration Management system. For every aspect of research project support, the influence of the Configuration Management system is being reviewed and the development of the corresponding elements of the system is being described in the present paper.

  11. Some trends in the development of electron accelerators

    International Nuclear Information System (INIS)

    Petrov, I.I.

    1976-01-01

    Trends in the further development of electron linear accelerators, betatrons and microtrons, based upon the first stage of the theoretic-information forecasting technique as well as upon the first stage of regression analysis, have been considered. An analysis of the dynamics of scientific and technical publications and of patenting dynamics has shown that electron linear accelerators are the most promising, whereas further elaboration of betatrons is considered fruitless

  12. Developments on the RF system for the Fusion Materials Irradiation Test Facility accelerator

    International Nuclear Information System (INIS)

    Fazio, M.V.; Johnson, H.P.; Riggin, D.M.

    1979-01-01

    The rf system for the Fusion Materials Irradiation Test (FMIT) accelerator is currently in the design phase at the Los Alamos Scientific Laboratory (LASL). The 35-MeV, 100-mA deuteron beam will require approximately 6 MW of rf power at 80 MHz. The EIMAC 8973 power tetrode, capable of a 600-kW cw output, has been chosen as the final amplifier tube for each of 15 amplifier chains. The final power stage of each chain is designed to perform as a linear Class B amplifier. Each low-power rf system (≤100 W) is to be phase, amplitude, and frequency controlled to provide a drive signal for each high-power amplifier. Beam dynamics for particle acceleration and for minimal beam spill require each rf amplifier output to be phase controlled to ±1°. The amplitude of the accelerating field must be held to ±1%. A varactor-tuned electronic phase shifter and a linear phase detector are under development for use in this system. To complement hardware development, analog computer simulations are being performed to optimize the closed-loop control characteristics of the system

  13. Center for Technology for Advanced Scientific Component Software (TASCS)

    Energy Technology Data Exchange (ETDEWEB)

    Kostadin, Damevski [Virginia State Univ., Petersburg, VA (United States)

    2015-01-25

    A resounding success of the Scientific Discovery through Advanced Computing (SciDAC) program is that high-performance computational science is now universally recognized as a critical aspect of scientific discovery [71], complementing both theoretical and experimental research. As scientific communities prepare to exploit the unprecedented computing capabilities of emerging leadership-class machines for multi-model simulations at the extreme scale [72], it is more important than ever to address the technical and social challenges of geographically distributed teams that combine expertise in domain science, applied mathematics, and computer science to build robust and flexible codes that can incorporate changes over time. The Center for Technology for Advanced Scientific Component Software (TASCS) tackles these issues by exploiting component-based software development to facilitate collaborative high-performance scientific computing.

  14. Physics, Computer Science and Mathematics Division annual report, 1 January--31 December 1975

    International Nuclear Information System (INIS)

    Lepore, J.L.

    1975-01-01

    This annual report describes the scientific research and other work carried out during the calendar year 1975. The report is nontechnical in nature, with almost no data. A 17-page bibliography lists the technical papers which detail the work. The contents of the report include the following: experimental physics (high-energy physics--SPEAR, PEP, SLAC, FNAL, BNL, Bevatron; particle data group; medium-energy physics; astrophysics, astronomy, and cosmic rays; instrumentation development), theoretical physics (particle theory and accelerator theory and design), computer science and applied mathematics (data management systems, socio-economic environment demographic information system, computer graphics, computer networks, management information systems, computational physics and data analysis, mathematical modeling, programing languages, applied mathematics research), real-time systems (ModComp and PDP networks), and computer center activities (systems programing, user services, hardware development, computer operations). A glossary of computer science and mathematics terms is also included. 32 figures

  15. GPU acceleration of Monte Carlo simulations for polarized photon scattering in anisotropic turbid media.

    Science.gov (United States)

    Li, Pengcheng; Liu, Celong; Li, Xianpeng; He, Honghui; Ma, Hui

    2016-09-20

    In earlier studies, we developed scattering models and the corresponding CPU-based Monte Carlo simulation programs to study the behavior of polarized photons as they propagate through complex biological tissues. Studying the simulations over many degrees of freedom created a demand for massive simulation tasks. In this paper, we report a parallel implementation of the simulation program based on the compute unified device architecture running on a graphics processing unit (GPU). Different schemes for sphere-only simulations and sphere-cylinder mixture simulations were developed. Diverse optimizing methods were employed to achieve the best acceleration. The final-version GPU program is hundreds of times faster than the CPU version. The dependence of the performance on input parameters and precision was also studied. It is shown that using single precision in the GPU simulations results in very limited losses in accuracy. Consumer-level graphics cards, even those in laptop computers, are more cost-effective than scientific graphics cards for single-precision computation.
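
    To make the structure of such a simulation concrete, the toy step below follows one unpolarized photon through a homogeneous turbid medium: sample an exponential free path, attenuate the photon weight by the single-scattering albedo, and scatter isotropically. The cited work propagates full Stokes vectors through sphere-cylinder scattering models, so this is only a structural sketch of the per-photon loop that maps naturally onto one GPU thread per photon; all names are ours:

        #include <cmath>
        #include <random>

        struct Photon { double x, y, z, ux, uy, uz, weight; };

        // One Monte Carlo step: exponential free path, absorption, isotropic scatter.
        void step(Photon& p, double mu_t, double albedo, std::mt19937& rng)
        {
            std::uniform_real_distribution<double> u(0.0, 1.0);
            const double pi = 3.14159265358979323846;

            const double s = -std::log(1.0 - u(rng)) / mu_t;   // free path length
            p.x += s * p.ux;  p.y += s * p.uy;  p.z += s * p.uz;

            p.weight *= albedo;                                 // absorption

            const double cost = 2.0 * u(rng) - 1.0;             // isotropic scattering
            const double sint = std::sqrt(1.0 - cost * cost);
            const double phi  = 2.0 * pi * u(rng);
            p.ux = sint * std::cos(phi);
            p.uy = sint * std::sin(phi);
            p.uz = cost;
        }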

  16. Final Report to the Department of Energy on the 1994 International Accelerator School: Frontiers of Accelerator Technology

    International Nuclear Information System (INIS)

    Harris, F.A.

    1998-01-01

    The international accelerator school on Frontiers of Accelerator Technology was organized jointly by the US Particle Accelerator School (Dr. Mel Month and Ms. Marilyn Paul), the CERN Accelerator School, and the KEK Accelerator School, and was hosted by the University of Hawaii. The course was held on Maui, Hawaii, November 3-9, 1994 and was made possible in part by a grant from the Department of Energy under award number DE-FG03-94ER40875, AMDT M006. The 1994 program was preceded by similar joint efforts held at Santa Margherita di Pula, Sardinia in February 1985, South Padre Island, Texas in October 1986, Anacapri, Italy in October 1988, Hilton Head Island, South Carolina in October 1990, and Benalmedena, Spain in October/November 1992. The most recent program was held in Montreux, Switzerland in May 1998. The purpose of the program is to disseminate knowledge on the latest ideas and developments in the technology of particle accelerators by bringing together known world experts and younger scientists in the field. It is intended for individuals with professional interest in accelerator physics and technology, for graduate students, for post-docs, for those interested in accelerator based sciences, and for scientific and engineering staff at industrial firms, especially those companies specializing in accelerator components

  17. Advances and challenges in computational plasma science

    International Nuclear Information System (INIS)

    Tang, W M; Chan, V S

    2005-01-01

    Scientific simulation, which provides a natural bridge between theory and experiment, is an essential tool for understanding complex plasma behaviour. Recent advances in simulations of magnetically confined plasmas are reviewed in this paper, with illustrative examples, chosen from associated research areas such as microturbulence, magnetohydrodynamics and other topics. Progress has been stimulated, in particular, by the exponential growth of computer speed along with significant improvements in computer technology. The advances in both particle and fluid simulations of fine-scale turbulence and large-scale dynamics have produced increasingly good agreement between experimental observations and computational modelling. This was enabled by two key factors: (a) innovative advances in analytic and computational methods for developing reduced descriptions of physics phenomena spanning widely disparate temporal and spatial scales and (b) access to powerful new computational resources. Excellent progress has been made in developing codes for which computer run-time and problem-size scale well with the number of processors on massively parallel processors (MPPs). Examples include the effective usage of the full power of multi-teraflop (multi-trillion floating point computations per second) MPPs to produce three-dimensional, general geometry, nonlinear particle simulations that have accelerated advances in understanding the nature of turbulence self-regulation by zonal flows. These calculations, which typically utilized billions of particles for thousands of time-steps, would not have been possible without access to powerful present generation MPP computers and the associated diagnostic and visualization capabilities. In looking towards the future, the current results from advanced simulations provide great encouragement for being able to include increasingly realistic dynamics to enable deeper physics insights into plasmas in both natural and laboratory environments. This

  18. Scientific Grand Challenges: Discovery In Basic Energy Sciences: The Role of Computing at the Extreme Scale - August 13-15, 2009, Washington, D.C.

    Energy Technology Data Exchange (ETDEWEB)

    Galli, Giulia [Univ. of California, Davis, CA (United States). Workshop Chair; Dunning, Thom [Univ. of Illinois, Urbana, IL (United States). Workshop Chair

    2009-08-13

    The U.S. Department of Energy’s (DOE) Office of Basic Energy Sciences (BES) and Office of Advanced Scientific Computing Research (ASCR) workshop in August 2009 on extreme-scale computing provided a forum for more than 130 researchers to explore the needs and opportunities that will arise due to expected dramatic advances in computing power over the next decade. This scientific community firmly believes that the development of advanced theoretical tools within chemistry, physics, and materials science—combined with the development of efficient computational techniques and algorithms—has the potential to revolutionize the discovery process for materials and molecules with desirable properties. Doing so is necessary to meet the energy and environmental challenges of the 21st century as described in various DOE BES Basic Research Needs reports. Furthermore, computational modeling and simulation are a crucial complement to experimental studies, particularly when quantum mechanical processes controlling energy production, transformations, and storage are not directly observable and/or controllable. Many processes related to the Earth’s climate and subsurface need better modeling capabilities at the molecular level, which will be enabled by extreme-scale computing.

  19. Accelerating Corporate Research in the Development, Application and Deployment of Human Language Technologies

    National Research Council Canada - National Science Library

    Ferrucci, David; Lally, Adam

    2003-01-01

    ... accelerate scientific advance. Furthermore, the ability to reuse and combine results through a common architecture and a robust software framework would accelerate the transfer of research results in HLT into IBM's product platforms...

  20. The Los Alamos accelerator code group

    International Nuclear Information System (INIS)

    Krawczyk, F.L.; Billen, J.H.; Ryne, R.D.; Takeda, Harunori; Young, L.M.

    1995-01-01

    The Los Alamos Accelerator Code Group (LAACG) is a national resource for members of the accelerator community who use and/or develop software for the design and analysis of particle accelerators, beam transport systems, light sources, storage rings, and components of these systems. Below the authors describe the LAACG's activities in high performance computing, maintenance and enhancement of POISSON/SUPERFISH and related codes and the dissemination of information on the INTERNET

  1. Towards C++ object libraries for accelerator physics

    International Nuclear Information System (INIS)

    Michelotti, L.

    1992-01-01

    This paper concerns the creation of libraries of reusable objects in the C++ language for doing accelerator design and analysis. The C++ language possesses features which lend themselves to writing portable, scientific software. The two libraries of C++ classes (objects) which have been under development are (1) MXYZPTLK, which implements automatic differentiation, and (2) BEAMLINE, which provides objects for modeling beam line and accelerator components. A description of the principal classes in these two libraries is presented
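
    The flavor of automatic differentiation can be conveyed with dual numbers: each quantity carries its value and its derivative, and every operator applies the chain rule. The sketch below is a generic first-derivative illustration, not the MXYZPTLK interface, which works with truncated multivariate Taylor series (maps) rather than single first derivatives:

        #include <cmath>
        #include <iostream>

        // Forward-mode automatic differentiation with dual numbers.
        struct Dual {
            double v;   // value
            double d;   // derivative with respect to the chosen input
        };

        Dual operator+(Dual a, Dual b) { return {a.v + b.v, a.d + b.d}; }
        Dual operator*(Dual a, Dual b) { return {a.v * b.v, a.d * b.v + a.v * b.d}; }
        Dual sin(Dual a)               { return {std::sin(a.v), std::cos(a.v) * a.d}; }

        int main()
        {
            Dual x{0.5, 1.0};                 // seed dx/dx = 1
            Dual f = x * x + sin(x);          // f(x) = x^2 + sin(x)
            std::cout << "f(0.5)  = " << f.v << "\n"    // 0.25 + sin(0.5)
                      << "f'(0.5) = " << f.d << "\n";   // 2*0.5 + cos(0.5)
            return 0;
        }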

  2. Toward executable scientific publications

    NARCIS (Netherlands)

    Strijkers, R.J.; Cushing, R.; Vasyunin, D.; Laat, C. de; Belloum, A.S.Z.; Meijer, R.J.

    2011-01-01

    Reproducibility of experiments is considered as one of the main principles of the scientific method. Recent developments in data and computation intensive science, i.e. e-Science, and state of the art in Cloud computing provide the necessary components to preserve data sets and re-run code and

  3. Technical Challenges and Scientific Payoffs of Muon Beam Accelerators for Particle Physics

    Energy Technology Data Exchange (ETDEWEB)

    Zisman, Michael S.

    2007-09-25

    Historically, progress in particle physics has largely been determined by development of more capable particle accelerators. This trend continues today with the recent advent of high-luminosity electron-positron colliders at KEK and SLAC operating as "B factories," the imminent commissioning of the Large Hadron Collider at CERN, and the worldwide development effort toward the International Linear Collider. Looking to the future, one of the most promising approaches is the development of muon-beam accelerators. Such machines have very high scientific potential, and would substantially advance the state-of-the-art in accelerator design. A 20-50 GeV muon storage ring could serve as a copious source of well-characterized electron neutrinos or antineutrinos (a Neutrino Factory), providing beams aimed at detectors located 3000-7500 km from the ring. Such long baseline experiments are expected to be able to observe and characterize the phenomenon of charge-conjugation-parity (CP) violation in the lepton sector, and thus provide an answer to one of the most fundamental questions in science, namely, why the matter-dominated universe in which we reside exists at all. By accelerating muons to even higher energies of several TeV, we can envision a Muon Collider. In contrast with composite particles like protons, muons are point particles. This means that the full collision energy is available to create new particles. A Muon Collider has roughly ten times the energy reach of a proton collider at the same collision energy, and has a much smaller footprint. Indeed, an energy frontier Muon Collider could fit on the site of an existing laboratory, such as Fermilab or BNL. The challenges of muon-beam accelerators are related to the facts that i) muons are produced as a tertiary beam, with very large 6D phase space, and ii) muons are unstable, with a lifetime at rest of only 2 microseconds. How these challenges are accommodated in the accelerator design will be described. Both a Neutrino Factory and a Muon

  4. Swarm accelerometer data processing from raw accelerations to thermospheric neutral densities

    DEFF Research Database (Denmark)

    Siemes, Christian; da Encarnacao, Joao de Teixeira; Doornbos, Eelco

    2016-01-01

    The Swarm satellites were launched on November 22, 2013, and carry accelerometers and GPS receivers as part of their scientific payload. The GPS receivers do not only provide the position and time for the magnetic field measurements, but are also used for determining non-gravitational forces like ... in the acceleration measurements of Swarm B ..., the most prominent being slow temperature-induced bias variations and sudden bias changes. In this paper, we describe the new, improved four-stage processing that is applied for transforming the disturbed acceleration measurements into scientifically valuable thermospheric neutral densities. In the first stage, the sudden bias changes in the acceleration measurements are manually removed using a dedicated software tool. The second stage is the calibration of the accelerometer measurements against the non-gravitational accelerations derived from the GPS receiver, which includes the correction ... We show the results of each processing stage, highlight the difficulties encountered, and comment on the quality of the thermospheric neutral density data set.

  5. Summary talk - an accelerator for other projects

    International Nuclear Information System (INIS)

    Laclare, J.L.

    2001-01-01

    Over the past few years, great consideration was given to the feasibility of a new generation of high intensity proton linear accelerators capable of delivering several tens of MW of beam power. Many scientific applications could benefit from such a development: 1) hybrid reactors and transmutation of nuclear waste, 2) muon and neutrino factories, 3) irradiation tools, 4) spallation neutron source for material studies, 5) radioactive nuclear beams, and 6) radioisotopes. Deciding on priorities is more difficult than in the past and competition is extremely strong. A possible solution could be to look for possible synergies between these projects and to develop multi-purpose facilities whenever the applications are compatible, so as to maximize scientific outcome while minimizing costs. This paper intends to identify large similarities in terms of accelerator requirements and potential synergies for the 6 applications listed above

  6. Accelerated time-of-flight (TOF) PET image reconstruction using TOF bin subsetization and TOF weighting matrix pre-computation

    International Nuclear Information System (INIS)

    Mehranian, Abolfazl; Kotasidis, Fotis; Zaidi, Habib

    2016-01-01

    FDG-PET study also revealed that for the same noise level, a higher contrast recovery can be obtained by increasing the number of TOF subsets. It can be concluded that the proposed TOF weighting matrix pre-computation and subsetization approaches make it possible to further accelerate and improve the convergence of the OSEM and MLEM algorithms, thus opening new avenues for accelerated TOF PET image reconstruction. (paper)
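
    The subsetization idea can be illustrated with a toy ordered-subsets EM loop in which each subset collects a share of the TOF bins rather than of the projection angles. The sketch below is only a conceptual illustration under that assumption; the system matrix, counts and subset layout are invented example values and do not correspond to the paper's scanner model or its pre-computed weighting matrix.

```cpp
#include <vector>
#include <cstddef>
#include <iostream>

// Toy ordered-subsets EM (OSEM) in which subsets are formed from TOF bins.
// a[t][j]: probability that an event from voxel j is recorded in TOF bin t.
int main() {
    const std::size_t nVox = 4, nTof = 6;              // voxels, TOF bins
    const std::size_t nSubsets = 3;                     // -> 2 TOF bins per subset
    std::vector<std::vector<double>> a(nTof, std::vector<double>(nVox, 0.1));
    for (std::size_t t = 0; t < nTof; ++t) a[t][t % nVox] = 0.5;
    std::vector<double> y = {10, 7, 5, 8, 12, 6};       // measured counts per TOF bin
    std::vector<double> x(nVox, 1.0);                   // image estimate

    for (int it = 0; it < 10; ++it) {
        for (std::size_t s = 0; s < nSubsets; ++s) {
            std::vector<double> num(nVox, 0.0), den(nVox, 0.0);
            for (std::size_t t = s; t < nTof; t += nSubsets) {   // TOF bins of subset s
                double fwd = 0.0;                                // forward projection
                for (std::size_t j = 0; j < nVox; ++j) fwd += a[t][j] * x[j];
                for (std::size_t j = 0; j < nVox; ++j) {
                    num[j] += a[t][j] * y[t] / fwd;              // back-projected ratio
                    den[j] += a[t][j];                           // subset sensitivity
                }
            }
            for (std::size_t j = 0; j < nVox; ++j) x[j] *= num[j] / den[j];  // EM update
        }
    }
    for (double v : x) std::cout << v << " ";
    std::cout << "\n";
    return 0;
}
```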

  7. rf quadrupole linac: a new low-energy accelerator

    International Nuclear Information System (INIS)

    Hamm, R.W.; Crandall, K.R.; Fuller, C.W.

    1980-01-01

    A new concept in low-energy particle accelerators, the radio-frequency quadrupole (RFQ) linac, is currently being developed at the Los Alamos National Scientific Laboratory. In this new linear accelerating structure both the focusing and accelerating forces are produced by the rf fields. It can accept a high-current, low-velocity dc ion beam and bunch it with a high capture efficiency. The performance of this structure as a low-energy linear accelerator has been verified with the successful construction of a proton RFQ linac. This test structure has accelerated 38 mA of protons from 100 keV to 640 keV in 1.1 meters with a capture efficiency greater than 80%. In this paper a general description of the RFQ linac and an outline of the basic RFQ linac design procedure are presented in addition to the experimental results from the test accelerator. Finally, several applications of this new accelerator are discussed

  8. 3rd International Conference on Particle Physics Beyond the Standard Model : Accelerator, Non-Accelerator and Space Approaches

    CERN Document Server

    Beyond The Desert 2002

    2003-01-01

    The third conference on particle physics beyond the Standard Model (BEYOND THE DESERT'02 - Accelerator, Non-accelerator and Space Approaches) was held during 2-7 June 2002 in the Finnish town of Oulu, almost at the Arctic Circle. It was the first of the BEYOND conference series held outside Germany (CERN Courier March 2003, pp. 29-30). Traditionally the Scientific Programme of BEYOND conferences, brought to life in 1997 (see CERN Courier, November 1997, pp. 16-18), covers almost all topics of modern particle physics (see contents).

  9. The Los Alamos accelerator code group

    Energy Technology Data Exchange (ETDEWEB)

    Krawczyk, F.L.; Billen, J.H.; Ryne, R.D.; Takeda, Harunori; Young, L.M.

    1995-05-01

    The Los Alamos Accelerator Code Group (LAACG) is a national resource for members of the accelerator community who use and/or develop software for the design and analysis of particle accelerators, beam transport systems, light sources, storage rings, and components of these systems. Below the authors describe the LAACG's activities in high performance computing, maintenance and enhancement of POISSON/SUPERFISH and related codes and the dissemination of information on the INTERNET.

  10. Physics, Computer Science and Mathematics Division annual report, 1 January--31 December 1975. [LBL

    Energy Technology Data Exchange (ETDEWEB)

    Lepore, J.L. (ed.)

    1975-01-01

    This annual report describes the scientific research and other work carried out during the calendar year 1975. The report is nontechnical in nature, with almost no data. A 17-page bibliography lists the technical papers which detail the work. The contents of the report include the following: experimental physics (high-energy physics--SPEAR, PEP, SLAC, FNAL, BNL, Bevatron; particle data group; medium-energy physics; astrophysics, astronomy, and cosmic rays; instrumentation development), theoretical physics (particle theory and accelerator theory and design), computer science and applied mathematics (data management systems, socio-economic environment demographic information system, computer graphics, computer networks, management information systems, computational physics and data analysis, mathematical modeling, programming languages, applied mathematics research), real-time systems (ModComp and PDP networks), and computer center activities (systems programming, user services, hardware development, computer operations). A glossary of computer science and mathematics terms is also included. 32 figures. (RWR)

  11. Modern control techniques for accelerators

    International Nuclear Information System (INIS)

    Goodwin, R.W.; Shea, M.F.

    1984-01-01

    Beginning in the mid to late sixties, most new accelerators were designed to include computer based control systems. Although each installation differed in detail, the technology of the sixties and early to mid seventies dictated an architecture that was essentially the same for the control systems of that era. A mini-computer was connected to the hardware and to a console. Two developments have changed the architecture of modern systems: the microprocessor and local area networks. This paper discusses these two developments and demonstrates their impact on control system design and implementation by way of describing a possible architecture for any size of accelerator. Both hardware and software aspects are included

  12. Final Report to the Department of Energy on the 1994 International Accelerator School: Frontiers of Accelerator Technology; FINAL

    International Nuclear Information System (INIS)

    Harris, F.A.

    1998-01-01

    The international accelerator school on Frontiers of Accelerator Technology was organized jointly by the US Particle Accelerator School (Dr. Mel Month and Ms. Marilyn Paul), the CERN Accelerator School, and the KEK Accelerator School, and was hosted by the University of Hawaii. The course was held on Maui, Hawaii, November 3-9, 1994 and was made possible in part by a grant from the Department of Energy under award number DE-FG03-94ER40875, AMDT M006. The 1994 program was preceded by similar joint efforts held at Santa Margherita di Pula, Sardinia in February 1985, South Padre Island, Texas in October 1986, Anacapri, Italy in October 1988, Hilton Head Island, South Carolina in October 1990, and Benalmedena, Spain in October/November 1992. The most recent program was held in Montreux, Switzerland in May 1998. The purpose of the program is to disseminate knowledge on the latest ideas and developments in the technology of particle accelerators by bringing together known world experts and younger scientists in the field. It is intended for individuals with professional interest in accelerator physics and technology, for graduate students, for post-docs, for those interested in accelerator based sciences, and for scientific and engineering staff at industrial firms, especially those companies specializing in accelerator components

  13. Parallel computing works

    Energy Technology Data Exchange (ETDEWEB)

    1991-10-23

    An account of the Caltech Concurrent Computation Program (C³P), a five-year project that focused on answering the question: "Can parallel computers be used to do large-scale scientific computations?" As the title indicates, the question is answered in the affirmative, by implementing numerous scientific applications on real parallel computers and doing computations that produced new scientific results. In the process of doing so, C³P helped design and build several new computers, designed and implemented basic system software, developed algorithms for frequently used mathematical computations on massively parallel machines, devised performance models and measured the performance of many computers, and created a high performance computing facility based exclusively on parallel computers. While the initial focus of C³P was the hypercube architecture developed by C. Seitz, many of the methods developed and lessons learned have been applied successfully on other massively parallel architectures.

  14. Diagnosis of Acceleration, Reconnection, Turbulence, and Heating

    Science.gov (United States)

    Dufor, Mikal T.; Jemiolo, Andrew J.; Keesee, Amy; Cassak, Paul; Tu, Weichao; Scime, Earl E.

    2017-10-01

    The DARTH (Diagnosis of Acceleration, Reconnection, Turbulence, and Heating) experiment is an intermediate-scale, experimental facility designed to study magnetic reconnection at and below the kinetic scale of ions and electrons. The experiment will have non-perturbative diagnostics with high temporal and three-dimensional spatial resolution, giving it the capability to investigate kinetic-scale physics. Of specific scientific interest are particle acceleration, plasma heating, turbulence and energy dissipation during reconnection. Here we will describe the magnetic field system and the two plasma guns used to create flux ropes that then merge through magnetic reconnection. We will also describe the key diagnostic systems: laser induced fluorescence (LIF) for ion vdf measurements, a 300 GHz microwave scattering system for sub-mm wavelength fluctuation measurements and a Thomson scattering laser for electron vdf measurements. The vacuum chamber is designed to provide unparalleled access for these particle diagnostics. The scientific goals of DARTH are to examine particle acceleration and heating during reconnection, the role of three-dimensional instabilities, how reconnection ceases, and the role of impurities and asymmetries in reconnection. This work was supported by the O'Brien Energy Research Fund.

  15. Automated Software Acceleration in Programmable Logic for an Efficient NFFT Algorithm Implementation: A Case Study.

    Science.gov (United States)

    Rodríguez, Manuel; Magdaleno, Eduardo; Pérez, Fernando; García, Cristhian

    2017-03-28

    Non-equispaced Fast Fourier transform (NFFT) is a very important algorithm in several technological and scientific areas such as synthetic aperture radar, computational photography, medical imaging, telecommunications, seismic analysis and so on. However, its computational complexity is high. In this paper, we describe an efficient NFFT implementation with a hardware coprocessor using an All-Programmable System-on-Chip (APSoC). This is a hybrid device that employs an Advanced RISC Machine (ARM) as Processing System with Programmable Logic for high-performance digital signal processing through parallelism and pipeline techniques. The algorithm has been coded in C language with pragma directives to optimize the architecture of the system. We have used the novel Software-Defined System-on-Chip (SDSoC) development tool that simplifies the interface and partitioning between hardware and software. This provides shorter development cycles and iterative improvements by exploring several architectures of the global system. The computational results show that hardware acceleration significantly outperformed the software-based implementation.
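
    For orientation, the quantity the NFFT accelerates is the non-equispaced discrete Fourier transform, whose direct evaluation costs O(N·M) operations. The reference sketch below computes it directly; it is a generic illustration, not the paper's implementation, and the comments only indicate where pragma-style pipelining/unrolling hints would typically be attached in an SDSoC-like flow.

```cpp
#include <complex>
#include <vector>
#include <cmath>
#include <iostream>

// Direct evaluation of the non-equispaced discrete Fourier transform
//   f(x_m) = sum_k c_k * exp(-2*pi*i*k*x_m),  k = -N/2 .. N/2-1,
// i.e. the O(N*M) computation that the NFFT approximates in O(N log N + M).
int main() {
    const double PI = 3.14159265358979323846;
    const int N = 8;                                        // number of frequencies
    const std::vector<double> x = {0.01, 0.13, 0.27, 0.42,
                                   0.55, 0.68, 0.81, 0.97}; // non-equispaced nodes
    std::vector<std::complex<double>> c(N, {1.0, 0.0});     // Fourier coefficients
    std::vector<std::complex<double>> f(x.size());

    for (std::size_t m = 0; m < x.size(); ++m) {            // outer loop: pipelining candidate
        std::complex<double> acc(0.0, 0.0);
        for (int k = -N / 2; k < N / 2; ++k) {               // inner loop: unrolling candidate
            const double phase = -2.0 * PI * k * x[m];
            acc += c[k + N / 2] * std::complex<double>(std::cos(phase), std::sin(phase));
        }
        f[m] = acc;
    }
    for (const auto &v : f) std::cout << v << "\n";
    return 0;
}
```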

  16. Feature-Based Analysis of Plasma-Based Particle Acceleration Data

    Energy Technology Data Exchange (ETDEWEB)

    Rubel, Oliver [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Geddes, Cameron G. R. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Chen, Min [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Cormier-Michel, Estelle [Tech-X Corp., Boulder, CO (United States); Bethel, E. Wes [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2014-02-01

    Plasma-based particle accelerators can produce and sustain thousands of times stronger acceleration fields than conventional particle accelerators, providing a potential solution to the problem of the growing size and cost of conventional particle accelerators. To facilitate scientific knowledge discovery from the ever-growing collections of accelerator simulation data generated by accelerator physicists to investigate next-generation plasma-based particle accelerator designs, we describe a novel approach for automatic detection and classification of particle beams and beam substructures due to temporal differences in the acceleration process, here called acceleration features. The automatic feature detection in combination with a novel visualization tool for fast, intuitive, query-based exploration of acceleration features enables an effective top-down data exploration process, starting from a high-level, feature-based view down to the level of individual particles. We describe the application of our analysis in practice to analyze simulations of single pulse and dual and triple colliding pulse accelerator designs, and to study the formation and evolution of particle beams, to compare substructures of a beam and to investigate transverse particle loss.

  17. Accelerated Adaptive MGS Phase Retrieval

    Science.gov (United States)

    Lam, Raymond K.; Ohara, Catherine M.; Green, Joseph J.; Bikkannavar, Siddarayappa A.; Basinger, Scott A.; Redding, David C.; Shi, Fang

    2011-01-01

    The Modified Gerchberg-Saxton (MGS) algorithm is an image-based wavefront-sensing method that can turn any science instrument focal plane into a wavefront sensor. MGS characterizes optical systems by estimating the wavefront errors in the exit pupil using only intensity images of a star or other point source of light. This innovative implementation of MGS significantly accelerates the MGS phase retrieval algorithm by using stream-processing hardware on conventional graphics cards. Stream processing is a relatively new, yet powerful, paradigm to allow parallel processing of certain applications that apply single instructions to multiple data (SIMD). These stream processors are designed specifically to support large-scale parallel computing on a single graphics chip. Computationally intensive algorithms, such as the Fast Fourier Transform (FFT), are particularly well suited for this computing environment. This high-speed version of MGS exploits commercially available hardware to accomplish the same objective in a fraction of the original time. This is accomplished by performing the matrix calculations on NVIDIA graphics cards. The graphics processing unit (GPU) is hardware that is specialized for computationally intensive, highly parallel computation. From the software perspective, a parallel programming model is used, called CUDA, to transparently scale multicore parallelism in hardware. This technology gives computationally intensive applications access to the processing power of the NVIDIA GPUs through a C/C++ programming interface. The AAMGS (Accelerated Adaptive MGS) software takes advantage of these advanced technologies to accelerate the optical phase error characterization. With a single PC that contains four NVIDIA GTX-280 graphics cards, the new implementation can process four images simultaneously to produce a JWST (James Webb Space Telescope) wavefront measurement 60 times faster than the previous code.
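
    The computational core of such image-based phase retrieval is repeated propagation between pupil and focal planes via two-dimensional FFTs, which is exactly the kind of work that maps well onto a GPU FFT library. The host-side sketch below shows one propagation round trip using the cuFFT API; the image size and data handling are placeholder choices, the constraint-application kernel is omitted, and nothing here is taken from the AAMGS code itself.

```cpp
#include <cufft.h>
#include <cuda_runtime.h>
#include <vector>
#include <complex>

// One pupil -> focal -> pupil round trip on the GPU using cuFFT.
// The MGS iteration logic around it (applying amplitude constraints,
// updating the phase estimate) is deliberately left out of this sketch.
int main() {
    const int NX = 512, NY = 512;
    std::vector<std::complex<float>> field(NX * NY, {1.0f, 0.0f});  // pupil field

    cufftComplex *d_field = nullptr;
    cudaMalloc(reinterpret_cast<void**>(&d_field), sizeof(cufftComplex) * NX * NY);
    cudaMemcpy(d_field, field.data(), sizeof(cufftComplex) * NX * NY,
               cudaMemcpyHostToDevice);

    cufftHandle plan;
    cufftPlan2d(&plan, NX, NY, CUFFT_C2C);

    cufftExecC2C(plan, d_field, d_field, CUFFT_FORWARD);    // pupil -> focal plane
    // (apply focal-plane constraints in a device kernel here)
    cufftExecC2C(plan, d_field, d_field, CUFFT_INVERSE);    // focal -> pupil plane

    cudaMemcpy(field.data(), d_field, sizeof(cufftComplex) * NX * NY,
               cudaMemcpyDeviceToHost);
    cufftDestroy(plan);
    cudaFree(d_field);
    return 0;
}
```

    Note that cuFFT's inverse transform is unnormalized, so a real implementation would rescale the field by 1/(NX·NY) after the inverse step.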

  18. Report for the Office of Scientific and Technical Information: Population Modeling of the Emergence and Development of Scientific Fields

    Energy Technology Data Exchange (ETDEWEB)

    Bettencourt, L. M. A. (LANL); Castillo-Chavez, C. (Arizona State University); Kaiser, D. (MIT); Wojick, D. E. (IIA)

    2006-10-04

    The accelerated development of digital libraries and archives, in tandem with efficient search engines and the computational ability to retrieve and parse massive amounts of information, are making it possible to quantify the time evolution of scientific literatures. These data are but one piece of the tangible recorded evidence of the processes whereby scientists create and exchange information in their journeys towards the generation of knowledge. As such, these tools provide a proxy with which to study our ability to innovate. Innovation has often been linked with prosperity and growth and, consequently, trying to understand what drives scientific innovation is of extreme interest. Identifying sets of population characteristics, factors, and mechanisms that enable scientific communities to remain at the cutting edge, accelerate their growth, or increase their ability to re-organize around new themes or research topics is therefore of special significance. Yet generating a quantitative understanding of the factors that make scientific fields arise and/or become more or less productive is still in its infancy. This is precisely the type of knowledge most needed for promoting and sustaining innovation. Ideally, the efficient and strategic allocation of resources on the part of funding agencies and corporations would be driven primarily by knowledge of this type. Early steps have been taken toward such a quantitative understanding of scientific innovation. Some have focused on characterizing the broad properties of relevant time series, such as numbers of publications and authors in a given field. Others have focused on the structure and evolution of networks of coauthorship and citation. Together these types of studies provide much needed statistical analyses of the structure and evolution of scientific communities. Despite these efforts, however, crucial elements of prediction have remained elusive. Building on many of these earlier insights, we provide here a

  19. Research and simulation of intense pulsed beam transfer in electrostatic accelerate tube

    International Nuclear Information System (INIS)

    Li Chaolong; Shi Haiquan; Lu Jianqin

    2012-01-01

    To study intense pulsed beam transport in an electrostatic accelerating tube, the matrix method was applied to analyze the transfer matrices of the tube for both non-intense and intense pulsed beams, and a computer code was written for transporting intense pulsed beams through the electrostatic accelerating tube. In this code, optimization techniques are used to attain the given optical conditions, and iterative procedures are adopted to obtain self-consistent solutions for the intense pulsed beam. Calculations were carried out with ACCT, TRACE-3D and TRANSPORT for different beam currents. The simulation results show that raising the accelerating voltage ratio can enhance the focusing power of the electrostatic accelerating tube, reduce beam loss and increase the transmission efficiency. (authors)
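
    The matrix method referred to above propagates a particle's transverse coordinates through the tube by multiplying per-element transfer matrices. The sketch below shows the bare mechanics for a drift-gap-drift sequence with a thin-lens model of the gap; the lengths and focal strength are arbitrary example numbers, and the space-charge (intense-beam) corrections that the paper's code iterates on are omitted.

```cpp
#include <array>
#include <iostream>

// Transfer-matrix transport of the transverse state (x, x') through
// a drift, a thin-lens model of an accelerating gap, and another drift.
using Mat2 = std::array<std::array<double, 2>, 2>;

Mat2 mul(const Mat2 &a, const Mat2 &b) {
    Mat2 c{};
    for (int i = 0; i < 2; ++i)
        for (int j = 0; j < 2; ++j)
            for (int k = 0; k < 2; ++k)
                c[i][j] += a[i][k] * b[k][j];
    return c;
}

int main() {
    const double L = 0.2;                 // drift length [m], example value
    const double f = 0.5;                 // effective focal length of the gap [m]
    Mat2 drift = {{{1.0, L}, {0.0, 1.0}}};
    Mat2 lens  = {{{1.0, 0.0}, {-1.0 / f, 1.0}}};   // thin-lens model of the gap

    // Total transfer matrix for drift -> gap -> drift.
    Mat2 M = mul(drift, mul(lens, drift));

    const double x = 1.0e-3, xp = 0.0;    // initial offset 1 mm, zero slope
    const double x1  = M[0][0] * x + M[0][1] * xp;
    const double xp1 = M[1][0] * x + M[1][1] * xp;
    std::cout << "x = " << x1 << " m, x' = " << xp1 << " rad\n";
    return 0;
}
```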

  20. Simulating Hydrologic Flow and Reactive Transport with PFLOTRAN and PETSc on Emerging Fine-Grained Parallel Computer Architectures

    Science.gov (United States)

    Mills, R. T.; Rupp, K.; Smith, B. F.; Brown, J.; Knepley, M.; Zhang, H.; Adams, M.; Hammond, G. E.

    2017-12-01

    As the high-performance computing community pushes towards the exascale horizon, power and heat considerations have driven the increasing importance and prevalence of fine-grained parallelism in new computer architectures. High-performance computing centers have become increasingly reliant on GPGPU accelerators and "manycore" processors such as the Intel Xeon Phi line, and 512-bit SIMD registers have even been introduced in the latest generation of Intel's mainstream Xeon server processors. The high degree of fine-grained parallelism and more complicated memory hierarchy considerations of such "manycore" processors present several challenges to existing scientific software. Here, we consider how the massively parallel, open-source hydrologic flow and reactive transport code PFLOTRAN - and the underlying Portable, Extensible Toolkit for Scientific Computation (PETSc) library on which it is built - can best take advantage of such architectures. We will discuss some key features of these novel architectures and our code optimizations and algorithmic developments targeted at them, and present experiences drawn from working with a wide range of PFLOTRAN benchmark problems on these architectures.
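
    As a concrete picture of the PETSc layer that PFLOTRAN builds on, the minimal sketch below assembles a one-dimensional Laplacian and solves it with a Krylov method through PETSc's C API. It is not PFLOTRAN code; it only illustrates how solver and data-structure choices (including, in recent PETSc releases, GPU- or manycore-oriented vector and matrix backends) are typically deferred to the run-time options database rather than hard-coded in the application.

```cpp
#include <petscksp.h>

// Assemble a 1-D Laplacian and solve A x = b with a Krylov solver.
// Matrix, vector and solver types can be overridden from the command line.
int main(int argc, char **argv) {
    PetscInitialize(&argc, &argv, nullptr, nullptr);
    const PetscInt n = 100;

    Mat A;
    MatCreate(PETSC_COMM_WORLD, &A);
    MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n);
    MatSetFromOptions(A);                 // matrix type chosen via options database
    MatSetUp(A);
    PetscInt lo, hi;
    MatGetOwnershipRange(A, &lo, &hi);
    for (PetscInt i = lo; i < hi; ++i) {  // tridiagonal stencil (-1, 2, -1)
        if (i > 0)     MatSetValue(A, i, i - 1, -1.0, INSERT_VALUES);
        if (i < n - 1) MatSetValue(A, i, i + 1, -1.0, INSERT_VALUES);
        MatSetValue(A, i, i, 2.0, INSERT_VALUES);
    }
    MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
    MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);

    Vec b, x;
    MatCreateVecs(A, &x, &b);             // vectors compatible with A's layout
    VecSet(b, 1.0);

    KSP ksp;
    KSPCreate(PETSC_COMM_WORLD, &ksp);
    KSPSetOperators(ksp, A, A);
    KSPSetFromOptions(ksp);               // solver/preconditioner chosen at run time
    KSPSolve(ksp, b, x);

    KSPDestroy(&ksp);
    VecDestroy(&x);
    VecDestroy(&b);
    MatDestroy(&A);
    PetscFinalize();
    return 0;
}
```

    Retargeting such a code at a different node architecture then largely amounts to changing command-line options (for example the vector and matrix types) rather than rewriting the assembly or solve code.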

  1. Accelerated Synchrotron X-ray Diffraction Data Analysis on a Heterogeneous High Performance Computing System

    Energy Technology Data Exchange (ETDEWEB)

    Qin, J; Bauer, M A, E-mail: qin.jinhui@gmail.com, E-mail: bauer@uwo.ca [Computer Science Department, University of Western Ontario, London, ON N6A 5B7 (Canada)

    2010-11-01

    The analysis of synchrotron X-ray Diffraction (XRD) data has been used by scientists and engineers to understand and predict properties of materials. However, the large volume of XRD image data and the intensive computations involved in the data analysis make it hard for researchers to quickly reach any conclusions about the images from an experiment when using conventional XRD data analysis software. Synchrotron time is valuable and delays in XRD data analysis can impact decisions about subsequent experiments or about the materials under investigation. In order to improve the data analysis performance, ideally to achieve near-real-time data analysis during an XRD experiment, we designed and implemented software for accelerated XRD data analysis. The software has been developed for a heterogeneous high performance computing (HPC) system, comprised of IBM PowerXCell 8i processors and Intel quad-core Xeon processors. This paper describes the software and reports on the improved performance. The results indicate that it is possible for XRD data to be analyzed at the rate it is being produced.
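
    A representative piece of the per-image work in such pipelines is azimuthal integration: reducing each 2-D detector image to a 1-D intensity-versus-radius profile, which is independent per pixel and therefore a natural target for acceleration. The sketch below shows this computation on a small synthetic image; the geometry and image are invented for illustration, and the paper's actual analysis code and its Cell/Xeon partitioning are not reproduced.

```cpp
#include <vector>
#include <cmath>
#include <iostream>

// Azimuthal integration: bin every detector pixel by its radius from the
// beam centre and average the intensities in each radial bin.
int main() {
    const int W = 256, H = 256, nBins = 128;
    const double cx = W / 2.0, cy = H / 2.0;         // beam centre on the detector
    std::vector<double> image(W * H);
    for (int y = 0; y < H; ++y)                      // synthetic ring pattern
        for (int x = 0; x < W; ++x) {
            const double r = std::hypot(x - cx, y - cy);
            image[y * W + x] = 1.0 + std::cos(0.3 * r) * std::cos(0.3 * r);
        }

    std::vector<double> sum(nBins, 0.0);
    std::vector<int> count(nBins, 0);
    const double rMax = std::hypot(cx, cy);
    for (int y = 0; y < H; ++y)                      // independent per-pixel work:
        for (int x = 0; x < W; ++x) {                // the part worth accelerating
            const double r = std::hypot(x - cx, y - cy);
            const int bin = static_cast<int>(r / rMax * (nBins - 1));
            sum[bin] += image[y * W + x];
            ++count[bin];
        }
    for (int b = 0; b < nBins; ++b)
        std::cout << b << " " << (count[b] ? sum[b] / count[b] : 0.0) << "\n";
    return 0;
}
```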

  2. Accelerated Synchrotron X-ray Diffraction Data Analysis on a Heterogeneous High Performance Computing System

    International Nuclear Information System (INIS)

    Qin, J; Bauer, M A

    2010-01-01

    The analysis of synchrotron X-ray Diffraction (XRD) data has been used by scientists and engineers to understand and predict properties of materials. However, the large volume of XRD image data and the intensive computations involved in the data analysis make it hard for researchers to quickly reach any conclusions about the images from an experiment when using conventional XRD data analysis software. Synchrotron time is valuable and delays in XRD data analysis can impact decisions about subsequent experiments or about the materials under investigation. In order to improve the data analysis performance, ideally to achieve near-real-time data analysis during an XRD experiment, we designed and implemented software for accelerated XRD data analysis. The software has been developed for a heterogeneous high performance computing (HPC) system, comprised of IBM PowerXCell 8i processors and Intel quad-core Xeon processors. This paper describes the software and reports on the improved performance. The results indicate that it is possible for XRD data to be analyzed at the rate it is being produced.

  3. GPU in Physics Computation: Case Geant4 Navigation

    CERN Document Server

    Seiskari, Otto; Niemi, Tapio

    2012-01-01

    General purpose computing on graphics processing units (GPU) is a potential method of speeding up scientific computation with low cost and high energy efficiency. We experimented with the particle physics simulation toolkit Geant4, used at CERN, to benchmark its geometry navigation functionality on a GPU. The goal was to find out whether Geant4 physics simulations could benefit from GPU acceleration and how difficult it is to modify Geant4 code to run on a GPU. We ported selected parts of Geant4 code to C99 & CUDA and implemented a simple gamma physics simulation utilizing this code to measure efficiency. The performance of the program was tested by running it on two different platforms: an NVIDIA GeForce 470 GTX GPU and a 12-core AMD CPU system. Our conclusion was that GPUs can be a competitive alternative to multi-core computers, but porting existing software in an efficient way is challenging.

  4. Collaboration tools for the global accelerator network Workshop Report

    CERN Document Server

    Agarwal, D; Olson, J

    2002-01-01

    The concept of a "Global Accelerator Network" (GAN) has been put forward as a means for inter-regional collaboration in the operation of internationally constructed and operated frontier accelerator facilities. A workshop was held to allow representatives of the accelerator community and of the collaboratory development community to meet and discuss collaboration tools for the GAN environment. This workshop, called the Collaboration Tools for the Global Accelerator Network (GAN) Workshop, was held on August 26, 2002 at Lawrence Berkeley National Laboratory. The goal was to provide input about collaboration tools in general and to provide a strawman for the GAN collaborative tools environment. The participants at the workshop represented accelerator physicists, high-energy physicists, operations, technology tool developers, and social scientists that study scientific collaboration.

  5. Collaboration tools for the global accelerator network: Workshop Report

    International Nuclear Information System (INIS)

    Agarwal, Deborah; Olson, Gary; Olson, Judy

    2002-01-01

    The concept of a "Global Accelerator Network" (GAN) has been put forward as a means for inter-regional collaboration in the operation of internationally constructed and operated frontier accelerator facilities. A workshop was held to allow representatives of the accelerator community and of the collaboratory development community to meet and discuss collaboration tools for the GAN environment. This workshop, called the Collaboration Tools for the Global Accelerator Network (GAN) Workshop, was held on August 26, 2002 at Lawrence Berkeley National Laboratory. The goal was to provide input about collaboration tools in general and to provide a strawman for the GAN collaborative tools environment. The participants at the workshop represented accelerator physicists, high-energy physicists, operations, technology tool developers, and social scientists that study scientific collaboration

  6. Computer codes for particle accelerator design and analysis: A compendium. Second edition

    International Nuclear Information System (INIS)

    Deaven, H.S.; Chan, K.C.D.

    1990-05-01

    The design of the next generation of high-energy accelerators will probably be done as an international collaborative effort, and it would make sense to establish, either formally or informally, an international center for accelerator codes with branches for maintenance, distribution, and consultation at strategically located accelerator centers around the world. This arrangement could have at least three beneficial effects. It would cut down duplication of effort, provide long-term support for the best codes, and provide a stimulating atmosphere for the evolution of new codes. It does not take much foresight to see that the natural evolution of accelerator design codes is toward the development of so-called Expert Systems, systems capable of taking design specifications of future accelerators and producing specifications for optimized magnetic transport and acceleration components, making a layout, and giving a fairly impartial cost estimate. Such an expert program would use present-day programs such as TRANSPORT, POISSON, and SUPERFISH as tools in the optimization process. Such a program would also serve to codify the experience of two generations of accelerator designers before it is lost as these designers reach retirement age. This document describes 203 codes that originate from 10 countries and are currently in use. The authors feel that this compendium will contribute to the dialogue supporting the international collaborative effort that is taking place in the field of accelerator physics today

  7. A theoretical investigation of the collective acceleration of cluster ions with accelerated potential waves

    International Nuclear Information System (INIS)

    Suzuki, Hiroshi; Enjoji, Hiroshi; Kawaguchi, Motoichi; Noritake, Toshiya

    1984-01-01

    A theoretical treatment of the acceleration of cluster ions for additional heating of fusion plasma, using the trapping effect in an accelerated potential wave, is described. The conceptual design of the accelerator is the same as that by Enjoji, and the potential wave used is sinusoidal. For simplicity, collisions among cluster ions and the resulting breakups are neglected. The masses of the cluster ions are specified to range from 100 m_D to 1000 m_D (m_D: mass of a deuterium atom). The theoretical treatment is carried out only for an injection velocity which coincides with the phase velocity of the applied wave at the entrance of the accelerator. An equation describing the rate of successful acceleration of ions with a certain mass is deduced for continuous injection of cluster ions. Computation for a typical mass distribution shows that more than 70% of the injected particles are effectively accelerated. (author)
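
    The trapping mechanism can be illustrated numerically: an ion injected at the wave's phase velocity oscillates in the wave's potential bucket and is dragged along as the wave is accelerated, provided the wave's acceleration does not exceed what the field amplitude can supply. The sketch below integrates this single-particle picture with symplectic (semi-implicit) Euler steps; all parameter values are arbitrary examples, not the paper's, and collisions are ignored as in the paper's own treatment.

```cpp
#include <cmath>
#include <iostream>

// Single ion riding a sinusoidal potential wave whose phase velocity is
// steadily increased.  The ion stays trapped (and gains energy) as long as
// the required inertial force is below the peak wave force q*E0.
int main() {
    const double qOverM = 1.0e5;      // charge-to-mass ratio [C/kg], example value
    const double E0     = 1.0e6;      // wave field amplitude [V/m]
    const double k      = 50.0;       // wave number [1/m]
    const double vPhase = 2.0e5;      // initial phase velocity of the wave [m/s]
    const double accel  = 1.0e9;      // acceleration of the wave [m/s^2]
    const double dt     = 1.0e-9;     // time step [s]

    double z = 0.0, v = vPhase;       // ion injected at the wave's phase velocity
    double zWave = 0.0, vWave = vPhase;

    for (int i = 0; i < 200000; ++i) {
        const double phase = k * (z - zWave);          // position relative to the wave
        v += -qOverM * E0 * std::sin(phase) * dt;      // restoring force of the bucket
        z += v * dt;                                   // symplectic (semi-implicit) Euler
        vWave += accel * dt;                           // the potential wave speeds up
        zWave += vWave * dt;
    }
    std::cout << "final ion velocity:  " << v << " m/s\n";
    std::cout << "final wave velocity: " << vWave << " m/s\n";
    std::cout << (std::abs(v - vWave) < 0.1 * vWave ? "trapped\n" : "lost from the bucket\n");
    return 0;
}
```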

  8. Simulation of accelerator transmutation of long-lived nuclear wastes; Simulation de transmutation de dechets nucleaires a vie longue par accelerateur

    Energy Technology Data Exchange (ETDEWEB)

    Fabienne, Wolff-Bacha [Paris-11 Univ., 91 - Orsay (France)

    1997-07-09

    The incineration of minor actinides in a hybrid reactor (i.e. a reactor coupled with an accelerator) could reduce their radioactivity. The scientific tool used for the simulations, the GEANT code implemented on a parallel computer, was validated initially on thin and thick targets and by simulation of a pressurized water reactor, a fast reactor like Superphenix, and a molten salt fast hybrid reactor 'ATP'. Simulating a thermal hybrid reactor seems to indicate a non-negligible flux of neutrons diffusing back toward the accelerator. In spite of simplifications, the simulation of a molten lead fast hybrid reactor (such as the CERN Fast Energy Amplifier) might indicate difficulties with the radial power distribution in the core, the lifetime of the window, and the risk of activated air leaks. Finally, we propose a compact thermoelectric hybrid reactor, PRAHE - small atomic board hybrid reactor - whose principle allows a neutron coupling between the accelerator and the reactor. (author) 270 refs., 91 figs., 31 tabs.

  9. Neural chips, neural computers and application in high and superhigh energy physics experiments

    International Nuclear Information System (INIS)

    Nikityuk, N.M.; )

    2001-01-01

    The architectural peculiarities and characteristics of a series of neural chips and neural computers used in scientific instruments are considered. Trends in their development and use in high-energy and super-high-energy physics experiments are described. Comparative data are given that characterize the efficient use of neural chips for the selection of useful events, the classification of elementary particles, the reconstruction of charged-particle tracks, and the search for hypothetical Higgs particles. The characteristics of native neural chips and accelerated neural boards are also considered [ru]

  10. The control computer for the Chalk River electron test accelerator

    International Nuclear Information System (INIS)

    McMichael, G.E.; Fraser, J.S.; McKeown, J.

    1978-02-01

    A versatile control and data acquisition system has been developed for a modest-sized linear accelerator using mainly process I/O hardware and software. This report describes the evolution of the present system since 1972, the modifications needed to satisfy the changing requirements of the various accelerator physics experiments and the limitations of such a system in process control. (author)

  11. Paul Scherrer Institute Scientific and Technical Report 2000. Volume VI: Large Research Facilities

    International Nuclear Information System (INIS)

    Foroughi, Fereydoun; Bercher, Renate; Buechli, Carmen; Zumkeller, Lotty

    2001-01-01

    The PSI Department Large Research Facilities (GFA) joins the efforts to provide an excellent research environment to Swiss and foreign research groups on the experimental facilities driven by our high intensity proton accelerator complex. Its divisions care for the running, maintenance and enhancement of the accelerator complex, the primary proton beamlines, the targets and the secondary beams as well as the neutron spallation source SINQ. The division for technical support and coordination provides for technical support to the research facility complementary to the basic logistic available from the department for logistics and marketing. Besides running the facilities, the staff of the department is also involved in theoretical and experimental research projects. Some of them address basic scientific questions mainly concerning the properties of micro- or nanostructured materials: experiments as well as large scale computer simulations of molecular dynamics were performed to investigate nonclassical materials properties. Others are related to improvements or extensions of the capabilities of our facilities. We also report on intriguing results from applications of the neutron capture radiography, the prompt gamma activation method and the isotope production facility at SINQ

  12. Paul Scherrer Institute Scientific and Technical Report 2000. Volume VI: Large Research Facilities

    Energy Technology Data Exchange (ETDEWEB)

    Foroughi, Fereydoun; Bercher, Renate; Buechli, Carmen; Zumkeller, Lotty [eds.

    2001-07-01

    The PSI Department Large Research Facilities (GFA) joins the efforts to provide an excellent research environment to Swiss and foreign research groups on the experimental facilities driven by our high intensity proton accelerator complex. Its divisions care for the running, maintenance and enhancement of the accelerator complex, the primary proton beamlines, the targets and the secondary beams as well as the neutron spallation source SINQ. The division for technical support and coordination provides for technical support to the research facility complementary to the basic logistic available from the department for logistics and marketing. Besides running the facilities, the staff of the department is also involved in theoretical and experimental research projects. Some of them address basic scientific questions mainly concerning the properties of micro- or nanostructured materials: experiments as well as large scale computer simulations of molecular dynamics were performed to investigate nonclassical materials properties. Others are related to improvements or extensions of the capabilities of our facilities. We also report on intriguing results from applications of the neutron capture radiography, the prompt gamma activation method and the isotope production facility at SINQ.

  13. Laser-driven acceleration with Bessel beam

    International Nuclear Information System (INIS)

    Imasaki, Kazuo; Li, Dazhi

    2005-01-01

    A new approach to laser-driven acceleration with a Bessel beam is described. A Bessel beam, in contrast to a Gaussian beam, exhibits "diffraction-free" characteristics in its propagation, which implies potential for laser-driven acceleration. However, an ordinary laser beam, even a Bessel beam, cannot accelerate charged particles efficiently, because the velocity difference between the particle and the photons produces alternating acceleration and deceleration phases. We propose a Bessel beam truncated by a set of annular slits that creates several special regions along its propagation path where the laser field becomes very weak, so that the accelerated particles can avoid deceleration as they traverse the decelerating phase. Thus, multistage acceleration with a high gradient is realizable. In a numerical computation based on a three-stage model, we have shown the potential of multistage acceleration. (author)
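
    The "diffraction-free" property refers to the transverse profile of an ideal Bessel beam, I(r) ∝ J0(k_r r)², which does not spread with propagation distance. The sketch below evaluates this profile with a simple numerical quadrature for J0; the radial wave number is an arbitrary example value, and the annular-slit truncation proposed in the paper is not modelled.

```cpp
#include <cmath>
#include <iostream>

// J0(x) = (1/pi) * integral_0^pi cos(x*sin(theta)) d(theta), midpoint rule.
double besselJ0(double x) {
    const int n = 2000;
    const double pi = 3.14159265358979323846;
    double sum = 0.0;
    for (int i = 0; i < n; ++i) {
        const double theta = (i + 0.5) * pi / n;
        sum += std::cos(x * std::sin(theta));
    }
    return sum / n;
}

int main() {
    const double kr = 2.0e4;                 // radial wave number [1/m], example value
    for (int i = 0; i <= 10; ++i) {
        const double r = i * 2.0e-5;         // radius [m]
        const double I = besselJ0(kr * r) * besselJ0(kr * r);   // I(r)/I(0)
        std::cout << "r = " << r << " m   I/I0 = " << I << "\n";
    }
    return 0;
}
```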

  14. Scientific Assessment Group for Experiments in Non-Accelerator Physics (SAGENAP)

    Science.gov (United States)

    1997-03-01

    substantial non-baryonic component to dark matter. Dark matter candidates include Weakly Interacting Massive Particles (WIMPs), axions, light... Currently, axions are also of interest as one of the prime candidates for non-baryonic dark matter. Axions with a mass in the region of 10⁻⁶-10⁻³ eV... issues in the non-accelerator program, such as the study of solar and atmospheric neutrino oscillations, the pursuit of dark matter, and the study of

  15. Neutron induced activation in the EVEDA accelerator materials: Implications for the accelerator maintenance

    International Nuclear Information System (INIS)

    Sanz, J.; Garcia, M.; Sauvan, P.; Lopez, D.; Moreno, C.; Ibarra, A.; Sedano, L.

    2009-01-01

    The Engineering Validation and Engineering Design Activities (EVEDA) phase of the International Fusion Materials Irradiation Facility project should result in an accelerator prototype for which the analysis of the dose rates evolution during the beam-off phase is a necessary task for radioprotection and maintenance feasibility purposes. Important aspects of the computational methodology to address this problem are discussed, and dose rates for workers inside the accelerator vault are assessed and found to be not negligible.

  16. Neutron induced activation in the EVEDA accelerator materials: Implications for the accelerator maintenance

    Energy Technology Data Exchange (ETDEWEB)

    Sanz, J. [Department of Power Engineering, Universidad Nacional de Educacion a Distancia (UNED), C/Juan del Rosal 12, 28040 Madrid (Spain); Institute of Nuclear Fusion, UPM, 28006 Madrid (Spain)], E-mail: jsanz@ind.uned.es; Garcia, M.; Sauvan, P.; Lopez, D. [Department of Power Engineering, Universidad Nacional de Educacion a Distancia (UNED), C/Juan del Rosal 12, 28040 Madrid (Spain); Institute of Nuclear Fusion, UPM, 28006 Madrid (Spain); Moreno, C.; Ibarra, A.; Sedano, L. [CIEMAT, 28040 Madrid (Spain)

    2009-04-30

    The Engineering Validation and Engineering Design Activities (EVEDA) phase of the International Fusion Materials Irradiation Facility project should result in an accelerator prototype for which the analysis of the dose rates evolution during the beam-off phase is a necessary task for radioprotection and maintenance feasibility purposes. Important aspects of the computational methodology to address this problem are discussed, and dose rates for workers inside the accelerator vault are assessed and found to be not negligible.

  17. The NSC 16 MV tandem accelerator control system

    International Nuclear Information System (INIS)

    Ajith Kumar, B.P.; Kannaiyan, J.; Sugathan, P.; Bhowmik, R.K.

    1994-01-01

    The computerized control system for the 16 MV Pelletron accelerator at the Nuclear Science Centre runs on a PC-AT 386 computer. Devices in the accelerator are interfaced to the computer by using a CAMAC Serial Highway. The software, written in C, is database-oriented and supports many features useful for accelerator operation. The control console consists of an EGA monitor, keyboard, assignable control knobs and meters, a diagrammatic display showing the overall status of the machine and a similar panel for showing the status of radiation safety interlocks. The system has been operational for the past three years and is discussed below. (orig.)

  18. Accelerating VASP electronic structure calculations using graphic processing units

    KAUST Repository

    Hacene, Mohamed

    2012-08-20

    We present a way to improve the performance of the electronic structure Vienna Ab initio Simulation Package (VASP) program. We show that high-performance computers equipped with graphics processing units (GPUs) as accelerators may drastically reduce the computation time when the time-consuming sections of the code are offloaded to the graphics chips. The procedure consists of (i) profiling the performance of the code to isolate the time-consuming parts, (ii) rewriting these so that the algorithms become better suited to the chosen graphics accelerator, and (iii) optimizing memory traffic between the host computer and the GPU accelerator. We chose to accelerate VASP with NVIDIA GPUs using CUDA. We compare the GPU and original versions of VASP by evaluating the Davidson and RMM-DIIS algorithms on chemical systems of up to 1100 atoms. In these tests, the total time is reduced by a factor between 3 and 8 when running on n (CPU core + GPU) compared to n CPU cores only, without any accuracy loss. © 2012 Wiley Periodicals, Inc.
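
    Step (iii) of the procedure, minimizing host-GPU memory traffic, usually means keeping the working arrays resident on the device across iterations instead of copying them back and forth every step. The host-side sketch below illustrates that pattern with the CUDA runtime API; the iterate_on_device routine is a named placeholder for the offloaded section and is not part of VASP.

```cpp
#include <cuda_runtime.h>
#include <vector>

// Placeholder for the offloaded, time-consuming section; in a real port this
// would launch GPU kernels or call GPU library routines on d_work.
void iterate_on_device(double * /*d_work*/, int /*n*/) {}

int main() {
    const int n = 1 << 20;
    std::vector<double> host(n, 1.0);

    double *d_work = nullptr;
    cudaMalloc(reinterpret_cast<void**>(&d_work), n * sizeof(double));

    // Upload once ...
    cudaMemcpy(d_work, host.data(), n * sizeof(double), cudaMemcpyHostToDevice);

    // ... iterate on the device without touching host memory ...
    for (int step = 0; step < 100; ++step)
        iterate_on_device(d_work, n);

    // ... and download the result once at the end.
    cudaMemcpy(host.data(), d_work, n * sizeof(double), cudaMemcpyDeviceToHost);
    cudaFree(d_work);
    return 0;
}
```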

  19. Accelerating VASP electronic structure calculations using graphic processing units

    KAUST Repository

    Hacene, Mohamed; Anciaux-Sedrakian, Ani; Rozanska, Xavier; Klahr, Diego; Guignon, Thomas; Fleurat-Lessard, Paul

    2012-01-01

    We present a way to improve the performance of the electronic structure Vienna Ab initio Simulation Package (VASP) program. We show that high-performance computers equipped with graphics processing units (GPUs) as accelerators may drastically reduce the computation time when the time-consuming sections of the code are offloaded to the graphics chips. The procedure consists of (i) profiling the performance of the code to isolate the time-consuming parts, (ii) rewriting these so that the algorithms become better suited to the chosen graphics accelerator, and (iii) optimizing memory traffic between the host computer and the GPU accelerator. We chose to accelerate VASP with NVIDIA GPUs using CUDA. We compare the GPU and original versions of VASP by evaluating the Davidson and RMM-DIIS algorithms on chemical systems of up to 1100 atoms. In these tests, the total time is reduced by a factor between 3 and 8 when running on n (CPU core + GPU) compared to n CPU cores only, without any accuracy loss. © 2012 Wiley Periodicals, Inc.

  20. Vacuum system for Advanced Test Accelerator

    International Nuclear Information System (INIS)

    Denhoy, B.S.

    1981-01-01

    The Advanced Test Accelerator (ATA) is a pulsed linear electron beam accelerator designed to study charged particle beam propagation. ATA is designed to produce a 10,000 A, 50 MeV, 70 ns electron beam. The electron beam acceleration is accomplished in ferrite loaded cells. Each cell is capable of maintaining a 70 ns, 250 kV voltage pulse across a 1 inch gap. The electron beam is contained in a 5 inch diameter, 300 foot long tube. Cryopumps, turbomolecular pumps, and mechanical pumps are used to maintain a base pressure of 2 × 10⁻⁶ torr in the beam tube. The accelerator will be installed in an underground tunnel. Due to the radiation environment in the tunnel, the controlling and monitoring of the vacuum equipment, pressures and temperatures will be done from the control room through a computer interface. This paper describes the vacuum system design, the type of vacuum pumps specified, the reasons behind the selection of the pumps and the techniques used for computer interfacing