WorldWideScience

Sample records for computational accelerator physics

  1. Advanced Computing Tools and Models for Accelerator Physics

    Energy Technology Data Exchange (ETDEWEB)

    Ryne, Robert D.

    2008-06-11

    This paper is based on a transcript of my EPAC'08 presentation on advanced computing tools for accelerator physics. Following an introduction I present several examples, provide a history of the development of beam dynamics capabilities, and conclude with thoughts on the future of large scale computing in accelerator physics.

  2. Lua(Jit) for computing accelerator beam physics

    CERN Document Server

    CERN. Geneva

    2016-01-01

    As mentioned in the 2nd developers meeting, I would like to open the debate with a special presentation on another language - Lua, and a tremendous technology - LuaJit. Lua is much less known at CERN, but it is very simple, much smaller than Python, and its JIT is extremely performant. The language is a dynamic scripting language that is easy to learn and easy to embed in applications. I will show how we use it in HPC for accelerator beam physics as a replacement for C, C++, Fortran and Python, with some benchmarks versus Python, PyPy4 and C/C++.

  3. VLHC accelerator physics

    Energy Technology Data Exchange (ETDEWEB)

    Michael Blaskiewicz et al.

    2001-11-01

    A six-month design study for a future high energy hadron collider was initiated by the Fermilab director in October 2000. The request was to study a staged approach where a large circumference tunnel is built that initially would house a low field (~2 T) collider with center-of-mass energy greater than 30 TeV and a peak (initial) luminosity of 10^34 cm^-2 s^-1. The tunnel was to be scoped, however, to support a future upgrade to a center-of-mass energy greater than 150 TeV with a peak luminosity of 2 x 10^34 cm^-2 s^-1 using high field (~10 T) superconducting magnet technology. In a collaboration with Brookhaven National Laboratory and Lawrence Berkeley National Laboratory, a report of the Design Study was produced by Fermilab in June 2001. The Design Study focused on a Stage 1, 20 x 20 TeV collider using a 2-in-1 transmission line magnet and leads to a Stage 2, 87.5 x 87.5 TeV collider using 10 T Nb3Sn magnet technology. The article that follows is a compilation of accelerator physics designs and computational results which contributed to the Design Study. Many of the parameters found in this report evolved during the study, and thus slight differences between this text and the Design Study report can be found. The present text, however, presents the major accelerator physics issues of the Very Large Hadron Collider as examined by the Design Study collaboration and provides a basis for discussion and further studies of VLHC accelerator parameters and design philosophies.

  4. Accelerator and radiation physics

    CERN Document Server

    Basu, Samita; Nandy, Maitreyee

    2013-01-01

    "Accelerator and radiation physics" encompasses radiation shielding design and strategies for hadron therapy accelerators, neutron facilities and laser based accelerators. A fascinating article describes detailed transport theory and its application to radiation transport. Detailed information on planning and design of a very high energy proton accelerator can be obtained from the article on radiological safety of J-PARC. Besides safety for proton accelerators, the book provides information on radiological safety issues for electron synchrotron and prevention and preparedness for radiological emergencies. Different methods for neutron dosimetry including LET based monitoring, time of flight spectrometry, track detectors are documented alongwith newly measured experimental data on radiation interaction with dyes, polymers, bones and other materials. Design of deuteron accelerator, shielding in beam line hutches in synchrotron and 14 MeV neutron generator, various radiation detection methods, their characteriza...

  5. Computational physics

    CERN Document Server

    Newman, Mark

    2013-01-01

    A complete introduction to the field of computational physics, with examples and exercises in the Python programming language. Computers play a central role in virtually every major physics discovery today, from astrophysics and particle physics to biophysics and condensed matter. This book explains the fundamentals of computational physics and describes in simple terms the techniques that every physicist should know, such as finite difference methods, numerical quadrature, and the fast Fourier transform. The book offers a complete introduction to the topic at the undergraduate level, and is also suitable for the advanced student or researcher who wants to learn the foundational elements of this important field.
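
    The techniques named above (finite differences, numerical quadrature, the FFT) are standard; as a small illustration, not taken from the book, the following Python sketch applies one of them, the trapezoidal rule, to an integral with a known answer.

        # Illustrative sketch (not from the book): trapezoidal-rule quadrature in Python.
        # Integrate sin(x) over [0, pi]; the exact answer is 2.
        import numpy as np

        def trapezoid(f, a, b, n):
            """Approximate the integral of f over [a, b] with n trapezoids."""
            x = np.linspace(a, b, n + 1)   # n+1 equally spaced sample points
            y = f(x)
            h = (b - a) / n
            return h * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])

        approx = trapezoid(np.sin, 0.0, np.pi, 1000)
        print(approx, abs(approx - 2.0))   # error shrinks roughly as 1/n**2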

  6. Particle accelerator physics

    CERN Document Server

    Wiedemann, Helmut

    2007-01-01

    Particle Accelerator Physics is an in-depth and comprehensive introduction to the field of high-energy particle acceleration and beam dynamics. Part I gathers the basic tools, recalling the essentials of electrostatics and electrodynamics as well as of particle dynamics in electromagnetic fields. Part II is an extensive primer in beam dynamics, followed in Part III by the introduction and description of the main beam parameters. Part IV is devoted to the treatment of perturbations in beam dynamics. Part V discusses the details of charged particle acceleration. Part VI and Part VII introduce the more advanced topics of coupled beam dynamics and the description of very intense beams. Part VIII is an exhaustive treatment of radiation from accelerated charges and introduces important sources of coherent radiation such as synchrotrons and free-electron lasers. Part IX collects the appendices gathering useful mathematical and physical formulae, parameters and units. Solutions to many end-of-chapter problems are give...

  7. Computational Physics.

    Science.gov (United States)

    Borcherds, P. H.

    1986-01-01

    Describes an optional course in "computational physics" offered at the University of Birmingham. Includes an introduction to numerical methods and presents exercises involving fast-Fourier transforms, non-linear least-squares, Monte Carlo methods, and the three-body problem. Recommends adding laboratory work into the course in the…

  8. French nuclear physics accelerator opens

    Science.gov (United States)

    Dumé, Belle

    2016-12-01

    A new €140m particle accelerator for nuclear physics located at the French Large Heavy Ion National Accelerator (GANIL) in Caen was inaugurated last month in a ceremony attended by French president François Hollande.

  9. Particle accelerator physics

    CERN Document Server

    Wiedemann, Helmut

    2015-01-01

    This book by Helmut Wiedemann is a well-established, classic text, providing an in-depth and comprehensive introduction to the field of high-energy particle acceleration and beam dynamics. The present 4th edition has been significantly revised, updated and expanded. The newly conceived Part I is an elementary introduction to the subject matter for undergraduate students. Part II gathers the basic tools in preparation of a more advanced treatment, summarizing the essentials of electrostatics and electrodynamics as well as of particle dynamics in electromagnetic fields. Part III is an extensive primer in beam dynamics, followed, in Part IV, by an introduction and description of the main beam parameters, including a new chapter on beam emittance and lattice design. Part V is devoted to the treatment of perturbations in beam dynamics. Part VI then discusses the details of charged particle acceleration. Parts VII and VIII introduce the more advanced topics of coupled beam dynamics and describe very intense bea...

  10. Physics Needs for Future Accelerators

    CERN Document Server

    Lykken, J D

    2000-01-01

    Contents: 1. Prolegomena to any meta future physics 1.1 Physics needs for building future accelerators 1.2 Physics needs for funding future accelerators 2. Physics questions for future accelerators 2.1 Crimes and misapprehensions 2.1.1 Organized religion 2.1.2 Feudalism 2.1.3 Trotsky was right 2.2 The Standard Model as an effective field theory 2.3 What is the scale of new physics? 2.4 What could be out there? 2.5 Model-independent conclusions 3. Future accelerators 3.1 What is the physics driving the LHC? 3.2 What is the physics driving the LC? 3.2.1 Higgs physics is golden 3.2.2 LHC won't be sufficient to unravel the new physics at the TeV scale 3.2.3 LC precision measurements can pin down new physics scales 3.3 Why a Neutrino Factory? 3.4 Pushing the energy frontier

  11. Computational Physics

    Science.gov (United States)

    Thijssen, Jos

    2013-10-01

    1. Introduction; 2. Quantum scattering with a spherically symmetric potential; 3. The variational method for the Schrödinger equation; 4. The Hartree-Fock method; 5. Density functional theory; 6. Solving the Schrödinger equation in periodic solids; 7. Classical equilibrium statistical mechanics; 8. Molecular dynamics simulations; 9. Quantum molecular dynamics; 10. The Monte Carlo method; 11. Transfer matrix and diagonalisation of spin chains; 12. Quantum Monte Carlo methods; 13. The finite element method for partial differential equations; 14. The lattice Boltzmann method for fluid dynamics; 15. Computational methods for lattice field theories; 16. High performance computing and parallelism; Appendix A. Numerical methods; Appendix B. Random number generators; References; Index.

  12. The plasma physics of shock acceleration

    Science.gov (United States)

    Jones, Frank C.; Ellison, Donald C.

    1991-01-01

    The history and theory of shock acceleration is reviewed, paying particular attention to theories of parallel shocks which include the backreaction of accelerated particles on the shock structure. The role that computer simulations, both plasma and Monte Carlo, are playing in revealing how thermal ions interact with shocks, and how particle acceleration appears to be an inevitable and necessary part of the basic plasma physics that governs collisionless shocks, is discussed. Some of the outstanding problems that still confront theorists and observers in this field are described.
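
    For reference, and not part of the abstract itself, the standard test-particle result of diffusive shock acceleration, to which the backreaction studies mentioned above are a correction, predicts a power-law spectrum fixed only by the shock compression ratio r:

        % Standard test-particle result of diffusive (first-order Fermi) shock acceleration;
        % f(p) is the phase-space distribution of accelerated particles, r the shock compression ratio.
        f(p) \propto p^{-q}, \qquad q = \frac{3r}{r-1}
        % A strong adiabatic shock (r = 4) gives q = 4, i.e. N(E) \propto E^{-2} for relativistic particles.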

  13. Accelerators, Beams And Physical Review Special Topics - Accelerators And Beams

    Energy Technology Data Exchange (ETDEWEB)

    Siemann, R.H.; /SLAC

    2011-10-24

    Accelerator science and technology have evolved as accelerators became larger and important to a broad range of science. Physical Review Special Topics - Accelerators and Beams was established to serve the accelerator community as a timely, widely circulated, international journal covering the full breadth of accelerators and beams. The history of the journal and the innovations associated with it are reviewed.

  14. Accelerator Physics Code Web Repository

    Energy Technology Data Exchange (ETDEWEB)

    Zimmermann, F.; Basset, R.; Bellodi, G.; Benedetto, E.; Dorda, U.; Giovannozzi, M.; Papaphilippou, Y.; Pieloni, T.; Ruggiero, F.; Rumolo, G.; Schmidt, F.; Todesco, E.; Zotter, B.W.; /CERN; Payet, J.; /Saclay; Bartolini, R.; /RAL, Diamond; Farvacque, L.; /ESRF, Grenoble; Sen, T.; /Fermilab; Chin, Y.H.; Ohmi, K.; Oide, K.; /KEK, Tsukuba; Furman, M.; /LBL, Berkeley /Oak Ridge /Pohang Accelerator Lab. /SLAC /TRIUMF /Tech-X, Boulder /UC, San Diego /Darmstadt, GSI /Rutherford /Brookhaven

    2006-10-24

    In the framework of the CARE HHH European Network, we have developed a web-based dynamic accelerator-physics code repository. We describe the design, structure and contents of this repository, illustrate its usage, and discuss our future plans, with emphasis on code benchmarking.

  15. ACCELERATOR PHYSICS CODE WEB REPOSITORY.

    Energy Technology Data Exchange (ETDEWEB)

    WEI, J.

    2006-06-26

    In the framework of the CARE HHH European Network, we have developed a web-based dynamic accelerator-physics code repository. We describe the design, structure and contents of this repository, illustrate its usage, and discuss our future plans, with emphasis on code benchmarking.

  16. Terascale Computing in Accelerator Science and Technology

    Energy Technology Data Exchange (ETDEWEB)

    Ko, Kwok

    2002-08-21

    We have entered the age of "terascale" scientific computing. Processors and system architecture both continue to evolve; hundred-teraFLOP computers are expected in the next few years, and petaFLOP computers toward the end of this decade are conceivable. This ever-increasing power to solve previously intractable numerical problems benefits almost every field of science and engineering and is revolutionizing some of them, notably including accelerator physics and technology. At existing accelerators, it will help us optimize performance, expand operational parameter envelopes, and increase reliability. Design decisions for next-generation machines will be informed by unprecedentedly comprehensive and accurate modeling, as well as computer-aided engineering; all this will increase the likelihood that even their most advanced subsystems can be commissioned on time, within budget, and up to specifications. Advanced computing is also vital to developing new means of acceleration and exploring the behavior of beams under extreme conditions. With continued progress it will someday become reasonable to speak of a complete numerical model of all phenomena important to a particular accelerator.

  17. Compensation Techniques in Accelerator Physics

    Energy Technology Data Exchange (ETDEWEB)

    Sayed, Hisham Kamal [Old Dominion Univ., Norfolk, VA (United States)]

    2011-05-01

    Accelerator physics is one of the most diverse multidisciplinary fields of physics, wherein the dynamics of particle beams is studied. It takes more than an understanding of basic electromagnetic interactions to be able to predict the beam dynamics, and to develop new techniques to produce, maintain, and deliver high quality beams for different applications. In this work, some basic theory regarding particle beam dynamics in accelerators will be presented. This basic theory, along with state-of-the-art techniques in beam dynamics, will be used in this dissertation to study and solve accelerator physics problems. Two problems involving compensation are studied in the context of the MEIC (Medium Energy Electron Ion Collider) project at Jefferson Laboratory. Several chromaticity (the energy dependence of the particle tune) compensation methods are evaluated numerically and deployed in a figure-eight ring designed for the electrons in the collider. Furthermore, transverse coupling optics have been developed to compensate the coupling introduced by the spin rotators in the MEIC electron ring design.

  18. Snowmass 2013 Computing Frontier: Accelerator Science

    CERN Document Server

    Spentzouris, P; Joshi, C; Amundson, J; An, W; Bruhwiler, D L; Cary, J R; Cowan, B; Decyk, V K; Esarey, E; Fonseca, R A; Friedman, A; Geddes, C G R; Grote, D P; Kourbanis, I; Leemans, W P; Lu, W; Mori, W B; Ng, C; Qiang, Ji; Roberts, T; Ryne, R D; Schroeder, C B; Silva, L O; Tsung, F S; Vay, J -L; Vieira, J

    2013-01-01

    This is the working summary of the Accelerator Science working group of the Computing Frontier of the Snowmass meeting 2013. It summarizes the computing requirements to support accelerator technology in both Energy and Intensity Frontiers.

  19. Analytical tools in accelerator physics

    Energy Technology Data Exchange (ETDEWEB)

    Litvinenko, V.N.

    2010-09-01

    This paper is a subset of my lectures presented in the Accelerator Physics course (USPAS, Santa Rosa, California, January 14-25, 2008). It is based on notes I wrote during the period from 1976 to 1979 in Novosibirsk; only a few copies (in Russian) were distributed to my colleagues at the Novosibirsk Institute of Nuclear Physics. The goal of these notes is a complete description starting from an arbitrary reference orbit, with explicit expressions for the 4-potential and the accelerator Hamiltonian, and finishing with parameterization in action and angle variables. To a large degree they follow the logic developed in Theory of Cyclic Particle Accelerators by A.A. Kolomensky and A.N. Lebedev [Kolomensky], but go beyond the book in a number of directions. One unusual feature of these notes is the use of matrix functions and the Sylvester formula for calculating the matrices of arbitrary elements. Teaching the USPAS course motivated me to translate a significant part of my notes into English. I also included some introductory material following Classical Theory of Fields by L.D. Landau and E.M. Lifshitz [Landau]. A large number of short notes covering various techniques are placed in the Appendices.

  20. Guide to accelerator physics program SYNCH: VAX version 1987. 2

    Energy Technology Data Exchange (ETDEWEB)

    Parsa, Z.; Courant, E.

    1987-01-01

    This guide is written to accommodate users of the Accelerator Physics Data Base BNLDAG::DUAO:(PARSA1). It describes the contents of the on-line Accelerator Physics data base DUAO:(PARSA1.SYNCH). SYNCH is a computer program used for the design and analysis of synchrotrons, storage rings and beamlines.

  1. CERN Accelerator School: Registration open for Advanced Accelerator Physics course

    CERN Multimedia

    2015-01-01

    Registration is now open for the CERN Accelerator School’s Advanced Accelerator Physics course to be held in Warsaw, Poland from 27 September to 9 October 2015.   The course will be of interest to physicists and engineers who wish to extend their knowledge of accelerator physics. The programme offers core lectures on accelerator physics in the mornings and a practical course with hands-on tuition in the afternoons.  Further information can be found at: http://cas.web.cern.ch/cas/Poland2015/Warsaw-advert.html http://indico.cern.ch/event/361988/

  2. CERN Accelerator School: Registration open for Advanced Accelerator Physics course

    CERN Multimedia

    2015-01-01

    Registration is now open for the CERN Accelerator School’s Advanced Accelerator Physics course to be held in Warsaw, Poland from 27 September to 9 October 2015.   The course will be of interest to physicists and engineers who wish to extend their knowledge of accelerator physics. The programme offers core lectures on accelerator physics in the mornings and a practical course with hands-on tuition in the afternoons.  Further information can be found at: http://cas.web.cern.ch/cas/Poland2015/Warsaw-advert.html http://indico.cern.ch/event/361988/

  3. Accelerating Innovation: How Nuclear Physics Benefits Us All

    Science.gov (United States)

    2011-01-01

    Innovation has been accelerated by nuclear physics in the areas of improving our health, making the world safer, electricity, the environment, archaeology, better computers, contributions to industry, and training the next generation of innovators.

  4. Computational physics an introduction

    CERN Document Server

    Vesely, Franz J

    1994-01-01

    Author Franz J. Vesely offers students an introductory text on computational physics, providing them with the important basic numerical/computational techniques. His unique text sets itself apart from others by focusing on specific problems of computational physics. The author also provides a selection of modern fields of research. Students will benefit from the appendixes which offer a short description of some properties of computing and machines and outline the technique of 'Fast Fourier Transformation.'

  5. Unifying physics of accelerators, lasers and plasma

    CERN Document Server

    Seryi, Andrei

    2015-01-01

    Unifying Physics of Accelerators, Lasers and Plasma introduces the physics of accelerators, lasers and plasma in tandem with the industrial methodology of inventiveness, a technique that teaches that similar problems and solutions appear again and again in seemingly dissimilar disciplines. This unique approach builds bridges and enhances connections between the three aforementioned areas of physics that are essential for developing the next generation of accelerators.

  6. Acceleration parameters for fluid physics with accelerating bodies

    CSIR Research Space (South Africa)

    Gledhill, Irvy MA

    2016-06-01

    Full Text Available to an acceleration parameter that appears to be new in fluid physics, but is known in cosmology. A selection of cases for rectilinear acceleration has been chosen to illustrate the point that this parameter alone does not govern regimes of flow about significantly...

  7. Health physics practices at research accelerators

    Energy Technology Data Exchange (ETDEWEB)

    Thomas, R.H.

    1976-02-01

    A review is given of the uses of particle accelerators in health physics, the text being a short course given at the Health Physics Society Ninth Midyear Topical Symposium in February, 1976. Topics discussed include: (1) the radiation environment of high energy accelerators; (2) dosimetry at research accelerators; (3) shielding; (4) induced activity; (5) environmental impact of high energy accelerators; (6) population dose equivalent calculation; and (7) the application of the "as low as practicable" concept at accelerators. (PMA)

  8. Community Petascale Project for Accelerator Science and Simulation: Advancing Computational Science for Future Accelerators and Accelerator Technologies

    Energy Technology Data Exchange (ETDEWEB)

    Spentzouris, P.; /Fermilab; Cary, J.; /Tech-X, Boulder; McInnes, L.C.; /Argonne; Mori, W.; /UCLA; Ng, C.; /SLAC; Ng, E.; Ryne, R.; /LBL, Berkeley

    2011-11-14

    The design and performance optimization of particle accelerators are essential for the success of the DOE scientific program in the next decade. Particle accelerators are very complex systems whose accurate description involves a large number of degrees of freedom and requires the inclusion of many physics processes. Building on the success of the SciDAC-1 Accelerator Science and Technology project, the SciDAC-2 Community Petascale Project for Accelerator Science and Simulation (ComPASS) is developing a comprehensive set of interoperable components for beam dynamics, electromagnetics, electron cooling, and laser/plasma acceleration modelling. ComPASS is providing accelerator scientists the tools required to enable the necessary accelerator simulation paradigm shift from high-fidelity single physics process modeling (covered under SciDAC1) to high-fidelity multiphysics modeling. Our computational frameworks have been used to model the behavior of a large number of accelerators and accelerator R&D experiments, assisting both their design and performance optimization. As parallel computational applications, the ComPASS codes have been shown to make effective use of thousands of processors. ComPASS is in the first year of executing its plan to develop the next-generation HPC accelerator modeling tools. ComPASS aims to develop an integrated simulation environment that will utilize existing and new accelerator physics modules with petascale capabilities, by employing modern computing and solver technologies. The ComPASS vision is to deliver to accelerator scientists a virtual accelerator and virtual prototyping modeling environment, with the necessary multiphysics, multiscale capabilities. The plan for this development includes delivering accelerator modeling applications appropriate for each stage of the ComPASS software evolution. Such applications are already being used to address challenging problems in accelerator design and optimization. The ComPASS organization

  9. Community Petascale Project for Accelerator Science and Simulation: Advancing Computational Science for Future Accelerators and Accelerator Technologies

    Energy Technology Data Exchange (ETDEWEB)

    Spentzouris, Panagiotis; /Fermilab; Cary, John; /Tech-X, Boulder; Mcinnes, Lois Curfman; /Argonne; Mori, Warren; /UCLA; Ng, Cho; /SLAC; Ng, Esmond; Ryne, Robert; /LBL, Berkeley

    2008-07-01

    The design and performance optimization of particle accelerators is essential for the success of the DOE scientific program in the next decade. Particle accelerators are very complex systems whose accurate description involves a large number of degrees of freedom and requires the inclusion of many physics processes. Building on the success of the SciDAC1 Accelerator Science and Technology project, the SciDAC2 Community Petascale Project for Accelerator Science and Simulation (ComPASS) is developing a comprehensive set of interoperable components for beam dynamics, electromagnetics, electron cooling, and laser/plasma acceleration modeling. ComPASS is providing accelerator scientists the tools required to enable the necessary accelerator simulation paradigm shift from high-fidelity single physics process modeling (covered under SciDAC1) to high-fidelity multi-physics modeling. Our computational frameworks have been used to model the behavior of a large number of accelerators and accelerator R&D experiments, assisting both their design and performance optimization. As parallel computational applications, the ComPASS codes have been shown to make effective use of thousands of processors.

  10. Computations in Plasma Physics.

    Science.gov (United States)

    Cohen, Bruce I.; Killeen, John

    1983-01-01

    Discusses contributions of computers to research in magnetic and inertial-confinement fusion, charged-particle-beam propagation, and space sciences. Considers use in design/control of laboratory and spacecraft experiments and in data acquisition; and reviews major plasma computational methods and some of the important physics problems they…

  11. Community Petascale Project for Accelerator Science and Simulation: Advancing Computational Science for Future Accelerators and Accelerator Technologies

    Energy Technology Data Exchange (ETDEWEB)

    Spentzouris, Panagiotis; /Fermilab; Cary, John; /Tech-X, Boulder; Mcinnes, Lois Curfman; /Argonne; Mori, Warren; /UCLA; Ng, Cho; /SLAC; Ng, Esmond; Ryne, Robert; /LBL, Berkeley

    2011-10-21

    The design and performance optimization of particle accelerators are essential for the success of the DOE scientific program in the next decade. Particle accelerators are very complex systems whose accurate description involves a large number of degrees of freedom and requires the inclusion of many physics processes. Building on the success of the SciDAC-1 Accelerator Science and Technology project, the SciDAC-2 Community Petascale Project for Accelerator Science and Simulation (ComPASS) is developing a comprehensive set of interoperable components for beam dynamics, electromagnetics, electron cooling, and laser/plasma acceleration modelling. ComPASS is providing accelerator scientists the tools required to enable the necessary accelerator simulation paradigm shift from high-fidelity single physics process modeling (covered under SciDAC1) to high-fidelity multiphysics modeling. Our computational frameworks have been used to model the behavior of a large number of accelerators and accelerator R&D experiments, assisting both their design and performance optimization. As parallel computational applications, the ComPASS codes have been shown to make effective use of thousands of processors.

  12. Space charge physics for particle accelerators

    CERN Document Server

    Hofmann, Ingo

    2017-01-01

    Understanding and controlling the physics of space charge effects in linear and circular proton and ion accelerators are essential to their operation, and to future high-intensity facilities. This book presents the status quo of this field from a theoretical perspective, compares analytical approaches with multi-particle computer simulations and – where available – with experiments. It discusses fundamental concepts of phase space motion, matched beams and modes of perturbation, along with mathematical models of analysis – from envelope to Vlasov-Poisson equations. The main emphasis is on providing a systematic description of incoherent and coherent resonance phenomena; parametric instabilities and sum modes; mismatch and halo; error driven resonances; and emittance exchange due to anisotropy, as well as the role of Landau damping. Their distinctive features are elaborated in the context of numerous sample simulations, and their potential impacts on beam quality degradation and beam loss are discussed....
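
    As a point of reference for the models named above, and not reproduced from the book, the electrostatic Vlasov-Poisson system for a beam distribution f(x, p, t) with space-charge potential φ can be written as:

        % Electrostatic Vlasov-Poisson system for a beam distribution f(x, p, t);
        % q and m are the particle charge and mass, rho the charge density, epsilon_0 the vacuum permittivity.
        \frac{\partial f}{\partial t}
          + \frac{\mathbf{p}}{m}\cdot\nabla_{\mathbf{x}} f
          - q\,\nabla_{\mathbf{x}}\phi\cdot\nabla_{\mathbf{p}} f = 0,
        \qquad
        \nabla^{2}\phi = -\frac{\rho}{\epsilon_{0}},
        \qquad
        \rho(\mathbf{x},t) = q \int f \, d^{3}p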

  13. Advances of Accelerator Physics and Technologies

    CERN Document Server

    1993-01-01

    This volume, consisting of articles written by experts with international repute and long experience, reviews the state of the art of accelerator physics and technologies and the use of accelerators in research, industry and medicine. It covers a wide range of topics, from basic problems concerning the performance of circular and linear accelerators to technical issues and related fields. Also discussed are recent achievements that are of particular interest (such as RF quadrupole acceleration, ion sources and storage rings) and new technologies (such as superconductivity for magnets and RF ca

  14. New accelerators in high-energy physics

    Energy Technology Data Exchange (ETDEWEB)

    Blewett, J.P.

    1982-01-01

    First, I should like to mention a few new ideas that have appeared during the last few years in the accelerator field. A couple are of importance in the design of injectors, usually linear accelerators, for high-energy machines. Then I shall review some of the somewhat sensational accelerator projects, now in operation, under construction or just being proposed. Finally, I propose to mention a few applications of high-energy accelerators in fields other than high-energy physics. I realize that this is a digression from my title but I hope that you will find it interesting.

  15. Accelerating Climate Simulations Through Hybrid Computing

    Science.gov (United States)

    Zhou, Shujia; Sinno, Scott; Cruz, Carlos; Purcell, Mark

    2009-01-01

    Unconventional multi-core processors (e.g., IBM Cell B/E and NVIDIA GPU) have emerged as accelerators in climate simulation. However, climate models typically run on parallel computers with conventional processors (e.g., Intel and AMD) using MPI. Connecting accelerators to this architecture efficiently and easily becomes a critical issue. When using MPI for connection, we identified two challenges: (1) an identical MPI implementation is required in both systems; and (2) existing MPI code must be modified to accommodate the accelerators. In response, we have extended and deployed IBM Dynamic Application Virtualization (DAV) in a hybrid computing prototype system (one blade with two Intel quad-core processors, two IBM QS22 Cell blades, connected with Infiniband), allowing for seamlessly offloading compute-intensive functions to remote, heterogeneous accelerators in a scalable, load-balanced manner. Currently, a climate solar radiation model running with multiple MPI processes has been offloaded to multiple Cell blades with approx. 10% network overhead.

  16. FPGA-accelerated simulation of computer systems

    CERN Document Server

    Angepat, Hari; Chung, Eric S; Hoe, James C; Chung, Eric S

    2014-01-01

    To date, the most common simulators of computer systems are software-based and run on standard computers. One promising approach to improve simulation performance is to apply hardware, specifically reconfigurable hardware in the form of field programmable gate arrays (FPGAs). This manuscript describes various approaches of using FPGAs to accelerate software-implemented simulation of computer systems and selected simulators that incorporate those techniques. More precisely, we describe a simulation architecture taxonomy that incorporates a simulation architecture specifically designed f

  17. Non-accelerator particle physics

    Energy Technology Data Exchange (ETDEWEB)

    Steinberg, R.I.; Lane, C.E.

    1991-09-01

    The goals of this research are the experimental testing of fundamental theories of physics such as grand unification and the exploration of cosmic phenomena through the techniques of particle physics. We are working on the MACRO experiment, which employs a large area underground detector to search for grand unification magnetic monopoles and dark matter candidates and to study cosmic ray muons as well as low and high energy neutrinos; the νIMB project, which seeks to refurbish and upgrade the IMB water Cerenkov detector to perform an improved proton decay search together with a long baseline reactor neutrino oscillation experiment using a kiloton liquid scintillator (the Perry experiment); and development of technology for improved liquid scintillators and for very low background materials in support of the MACRO and Perry experiments and for new solar neutrino experiments. 21 refs., 19 figs., 6 tabs.

  18. Non-accelerator particle physics

    Energy Technology Data Exchange (ETDEWEB)

    Steinberg, R.I.

    1990-01-01

    The goals of this research are the experimental testing of fundamental theories of physics such as grand unification and the exploration of cosmic phenomena through the techniques of particle physics. We are currently engaged in construction of the MACRO detector, an Italian-American collaborative research instrument with a total particle acceptance of 10,000 m^2 sr, which will perform a sensitive search for magnetic monopoles using excitation-ionization methods. Other major objectives of the MACRO experiment are to search for astrophysical high energy neutrinos expected to be emitted by such objects as Vela X-1, LMC X-4 and SN-1987A and to search for low energy neutrino bursts from gravitational stellar collapse. We are also working on BOREX, a liquid scintillation solar neutrino experiment, and GRANDE, a proposed very large area surface detector for astrophysical neutrinos, and on the development of new techniques for liquid scintillation detection.

  19. Non-accelerator particle physics

    Science.gov (United States)

    Steinberg, R. I.; Lane, C. E.

    1991-09-01

    The goals of this research are the experimental testing of fundamental theories of physics such as grand unification and the exploration of cosmic phenomena through the techniques of particle physics. We are working on the MACRO experiment, which employs a large area underground detector to search for grand unification magnetic monopoles and dark matter candidates and to study cosmic ray muons as well as low and high energy neutrinos. The NuIMB project seeks to: refurbish and upgrade the IMB water Cerenkov detector to perform an improved proton decay search together with a long baseline reactor neutrino oscillation experiment using a kiloton liquid scintillator (the Perry experiment); and to develop technology for improved liquid scintillators, very low background materials in support of the MACRO and Perry experiments, and for new solar neutrino experiments.

  20. Non-accelerator particle physics

    Energy Technology Data Exchange (ETDEWEB)

    Steinberg, R.I.; Lane, C.E.

    1991-08-01

    The goals of this research were the experimental testing of fundamental theories of physics such as grand unification and the exploration of cosmic phenomena through the techniques of particle physics. We have worked on the MACRO experiment, which is employing a large area underground detector to search for grand unification magnetic monopoles and dark matter candidates and to study cosmic ray muons as well as low and high energy neutrinos; the νIMB project, which seeks to refurbish and upgrade the IMB water Cerenkov detector to perform an improved proton decay search together with a long baseline reactor neutrino oscillation experiment using a one kiloton liquid scintillator (the Perry experiment); and development of technology for improved liquid scintillators and for very low background materials in support of the MACRO and Perry experiments and for new solar neutrino experiments.

  1. Computational Physics at Haverford College

    Science.gov (United States)

    Love, Peter

    2009-03-01

    We will describe two new physics courses at Haverford College: Physics/CS 304, Computational Physics, an upper level elective for Physics, CS and Math majors, and Physics 412, Research in Theoretical and Computational Physics. These courses are designed to extend students' experience of physics using computation. They are also part of an interdisciplinary Concentration in Computational Science mounted jointly by the departments of Computer Science, Economics, Biology, Chemistry and Mathematics. These courses make extensive use of Python, SciPy, NumPy and Visual Python, and include extensive independent projects. We will describe some results obtained and lessons learned.

  2. Ultimate-gradient accelerators physics and prospects

    CERN Document Server

    Skrinsky, Aleksander Nikolayevich

    1995-01-01

    As an introduction, the needs and ways for ultimate acceleration gradients are discussed briefly. Plasma Wake Field Acceleration is analyzed in its most important details. The structure of specific plasma oscillations and the "high energy driver beam - plasma" interaction is presented, including computer simulation of the process. Some practical ways to introduce the necessary mm-scale bunching in the driver beam and to arrange sequential energy multiplication are discussed. The influence of binary collisions between accelerated beam particles and the plasma is also considered. As applications of PWFA, the use of proton super-collider beams (LHC and Future SC) to drive the "multi particle types" accelerator, and the arrangements for an electron-positron TeV-range collider are discussed.

  3. CAS - CERN Accelerator School: Advanced Accelerator Physics Course

    CERN Document Server

    Herr, W

    2014-01-01

    This report presents the proceedings of the Course on Advanced Accelerator Physics organized by the CERN Accelerator School. The course was held in Trondheim, Norway from 18 to 29 August 2013, in collaboration with the Norwegian University of Science and Technology. Its syllabus was based on previous courses and in particular on the course held in Berlin in 2003, whose proceedings were published as CERN Yellow Report CERN-2006-002. The field has seen significant advances in recent years and some topics were presented in a new way and other topics were added. The lectures were supplemented with tutorials on key topics and 14 hours of hands-on courses on Optics Design and Corrections, RF Measurement Techniques and Beam Instrumentation and Diagnostics. These courses are a key element of the Advanced Level Course.

  4. Atomic physics using large electrostatic accelerators

    Energy Technology Data Exchange (ETDEWEB)

    Datz, S.

    1989-01-01

    This article surveys some areas of atomic physics using large electrostatic accelerators. Brief overviews of ion-atom collisions and ion-solid collisions are followed by a classified listing of recent papers. A single line, correlated electron-ion recombination, is chosen to show the recent development of techniques to study various aspects of this phenomenon. 21 refs., 11 figs., 1 tab.

  5. Physics motivations for future CERN accelerators

    CERN Document Server

    de Roeck, A; Gianotti, F; de Roeck, Albert; Ellis, John; Gianotti, Fabiola

    2001-01-01

    We summarize the physics motivations for future accelerators at CERN. We argue that (a) a luminosity upgrade for the LHC could provide good physics return for a relatively modest capital investment, (b) CLIC would provide excellent long-term perspectives within many speculative scenarios for physics beyond the Standard Model, (c) a Very Large Hadron Collider could provide the first opportunity to explore the energy range up to about 30 TeV, (d) a neutrino factory based on a muon storage ring would provide an exciting and complementary scientific programme and a muon collider could be an interesting later option.

  6. Neural computation and particle accelerators research, technology and applications

    CERN Document Server

    D'Arras, Horace

    2010-01-01

    This book discusses neural computation (networks or circuits of biological neurons) and, relatedly, particle accelerators, scientific instruments that accelerate charged particles such as protons, electrons and deuterons. Accelerators have a very broad range of applications in many industrial fields, from high energy physics to medical isotope production. Nuclear technology is one of the fields discussed in this book. The development that has been reached by particle accelerators in energy and particle intensity has opened the possibility of a wide number of new applications in nuclear technology. This book reviews the applications in the nuclear energy field and the design features of high power neutron sources are explained. Surface treatments of niobium flat samples and superconducting radio frequency cavities by a new technique called gas cluster ion beam are also studied in detail, as well as the process of electropolishing. Furthermore, magnetic devices such as solenoids, dipoles and undulators, which ...

  7. High Energy Density Physics and Exotic Acceleration Schemes

    Energy Technology Data Exchange (ETDEWEB)

    Cowan, T.; /General Atomics, San Diego; Colby, E.; /SLAC

    2005-09-27

    The High Energy Density and Exotic Acceleration working group took as our goal to reach beyond the community of plasma accelerator research with its applications to high energy physics, to promote exchange with other disciplines which are challenged by related and demanding beam physics issues. The scope of the group was to cover particle acceleration and beam transport that, unlike other groups at AAC, are not mediated by plasmas or by electromagnetic structures. At this Workshop, we saw an impressive advancement from years past in the area of Vacuum Acceleration, for example with the LEAP experiment at Stanford. And we saw an influx of exciting new beam physics topics involving particle propagation inside of solid-density plasmas or at extremely high charge density, particularly in the areas of laser acceleration of ions, and extreme beams for fusion energy research, including Heavy-ion Inertial Fusion beam physics. One example of the importance and extreme nature of beam physics in HED research is the requirement in the Fast Ignitor scheme of inertial fusion to heat a compressed DT fusion pellet to keV temperatures by injection of laser-driven electron or ion beams of giga-Amp current. Even in modest experiments presently being performed on the laser-acceleration of ions from solids, mega-amp currents of MeV electrons must be transported through solid foils, requiring almost complete return current neutralization, and giving rise to a wide variety of beam-plasma instabilities. As keynote talks our group promoted Ion Acceleration (plenary talk by A. MacKinnon), which historically has grown out of inertial fusion research, and HIF Accelerator Research (invited talk by A. Friedman), which will require impressive advancements in space-charge-limited ion beam physics and in understanding the generation and transport of neutralized ion beams. A unifying aspect of High Energy Density applications was the physics of particle beams inside of solids, which is proving to

  8. Theoretical and Experimental Studies in Accelerator Physics

    Energy Technology Data Exchange (ETDEWEB)

    Rosenzweig, James [Univ. of California, Los Angeles, CA (United States). Dept. of Physics and Astronomy

    2017-03-08

    This report describes research supported by the US Dept. of Energy Office of High Energy Physics (OHEP), performed by the UCLA Particle Beam Physics Laboratory (PBPL). The UCLA PBPL has, over the last two decades-plus, played a critical role in the development of advanced accelerators, fundamental beam physics, and new applications enabled by these thrusts, such as new types of accelerator-based light sources. As the PBPL mission is broad, it is natural that it has grown within the context of the accelerator science and technology stewardship of the OHEP. Indeed, steady OHEP support for the program has always been central to the success of the PBPL; it has provided stability, and above all has set the over-arching themes for our research directions, which have produced over 500 publications (more than 120 in high-level journals). While other agency support has grown notably in recent years, permitting more vigorous pursuit of the program, it is transient by comparison. Beyond permitting program growth in a time of flat OHEP budgets, the influence of other agency missions is found in the push to adapt advanced accelerator methods to applications, in light of the success the field has had in proof-of-principle experiments supported first by the DoE OHEP. This three-pronged PBPL program (advanced accelerators, fundamental beam physics and technology, and revolutionary applications) has produced a generation of students that has had a profound effect on the US accelerator physics community. PBPL graduates, numbering 28 in total, form a significant population group in the accelerator community, playing key roles as university faculty, scientific leaders in national labs (two have been named Panofsky Fellows at SLAC), and vigorous proponents of industrial application of accelerators. Indeed, the development of advanced RF, optical and magnet technology at the PBPL has led directly to the spin-off company, RadiaBeam Technologies, now a leading industrial accelerator firm.

  9. CAS Accelerator Physics held in Erice, Italy

    CERN Multimedia

    CERN Accelerator School

    2013-01-01

    The CERN Accelerator School (CAS) recently organised a specialised course on Superconductivity for Accelerators, held at the Ettore Majorana Foundation and Centre for Scientific Culture in Erice, Italy from 24 April to 4 May 2013.   Photo courtesy of Alessandro Noto, Ettore Majorana Foundation and Centre for Scientific Culture. Following a handful of summary lectures on accelerator physics and the fundamental processes of superconductivity, the course covered a wide range of topics related to superconductivity and highlighted the latest developments in the field. Realistic case studies and topical seminars completed the programme. The school was very successful, with 94 participants representing 23 nationalities, coming from countries as far away as Belorussia, Canada, China, India, Japan and the United States (for the first time a young Ethiopian lady, studying in Germany, attended this course). The programme comprised 35 lectures, 3 seminars and 7 hours of case study. The case studies were p...

  10. Developing a framework for predicting upper extremity muscle activities, postures, velocities, and accelerations during computer use: the effect of keyboard use, mouse use, and individual factors on physical exposures.

    Science.gov (United States)

    Bruno Garza, Jennifer L; Catalano, Paul J; Katz, Jeffrey N; Huysmans, Maaike A; Dennerlein, Jack T

    2012-01-01

    Prediction models were developed based on keyboard and mouse use in combination with individual factors that could be used to predict median upper extremity muscle activities, postures, velocities, and accelerations experienced during computer use. In the laboratory, 25 participants performed five simulated computer trials with different amounts of keyboard and mouse use ranging from a highly keyboard-intensive trial to a highly mouse-intensive trial. During each trial, muscle activity and postures of the shoulder and wrist and velocities and accelerations of the wrists, along with percentage keyboard and mouse use, were measured. Four individual factors (hand length, shoulder width, age, and gender) were also measured on the day of data collection. Percentage keyboard and mouse use explained a large amount of the variability in wrist velocities and accelerations. Although hand length, shoulder width, and age were each significant predictors of at least one median muscle activity, posture, velocity, or acceleration exposure, these individual factors explained very little variability in addition to percentage keyboard and mouse use in any of the physical exposures investigated. The amounts of variability explained by models predicting median wrist velocities and accelerations ranged from 75 to 84% but were much lower for median muscle activities and postures (0-50%). RMS errors ranged from 8 to 13% of the range observed. While the predictions for wrist velocities and accelerations may be able to be used to improve exposure assessment for future epidemiologic studies, more research is needed to identify other factors that may improve the predictions for muscle activities and postures.
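
    As a rough illustration of the kind of prediction model described above (not the study's actual model: the data, variable names and coefficients below are invented for the sketch), a linear regression predicting a median wrist velocity from percentage keyboard use and hand length might look like this in Python:

        # Hypothetical sketch of a linear prediction model of the kind described in the abstract.
        # The predictors, data, and coefficients are invented for illustration only.
        import numpy as np
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(0)
        n = 25                                   # one row per (hypothetical) participant-trial
        pct_keyboard = rng.uniform(0, 100, n)    # percentage of time spent on the keyboard
        hand_length = rng.normal(18.5, 1.0, n)   # cm, an individual factor
        # Invented "measured" median wrist velocity (deg/s) with noise:
        wrist_velocity = 40 - 0.25 * pct_keyboard + 0.5 * hand_length + rng.normal(0, 2, n)

        X = np.column_stack([pct_keyboard, hand_length])
        model = LinearRegression().fit(X, wrist_velocity)
        print(model.coef_, model.intercept_)
        print("R^2:", model.score(X, wrist_velocity))  # fraction of variability explained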

  11. CAS Accelerator Physics (Ion Sources) in Slovakia

    CERN Multimedia

    CAS School

    2012-01-01

    The CERN Accelerator School (CAS) and the Slovak University of Technology jointly organised a specialised course on ion sources, held at the Hotel Senec, Senec, Slovakia, from 29 May to 8 June, 2012.   Following some background lectures on accelerator physics and the fundamental processes of atomic and plasma physics, the course covered a wide range of topics related to ion sources and highlighted the latest developments in the field. Realistic case studies and topical seminars completed the programme. The school was very successful, with 69 participants representing 25 nationalities. Feedback from the participants was extremely positive, reflecting the high standard of the lectures. The case studies were performed with great enthusiasm and produced some excellent results. In addition to the academic programme, the participants were able to take part in a one-day excursion consisting of a guided tour of Bratislava and free time. A welcome event was held at the Hotel Senec, with s...

  12. Physics Division computer facilities

    Energy Technology Data Exchange (ETDEWEB)

    Cyborski, D.R.; Teh, K.M.

    1995-08-01

    The Physics Division maintains several computer systems for data analysis, general-purpose computing, and word processing. While the VMS VAX clusters are still used, this past year saw a greater shift to the Unix Cluster with the addition of more RISC-based Unix workstations. The main Divisional VAX cluster, which consists of two VAX 3300s configured as a dual-host system, serves as boot nodes and disk servers to seven other satellite nodes consisting of two VAXstation 3200s, three VAXstation 3100 machines, a VAX-11/750, and a MicroVAX II. There are three 6250/1600 bpi 9-track tape drives, six 8-mm tapes and about 9.1 GB of disk storage served to the cluster by the various satellites. Also, two of the satellites (the MicroVAX and VAX-11/750) have DAPHNE front-end interfaces for data acquisition. Since the tape drives are accessible cluster-wide via a software package, they are, in addition to replay, used for tape-to-tape copies. There is, however, a satellite node outfitted with two 8 mm drives available for this purpose. Although not part of the main cluster, a DEC 3000 Alpha machine obtained for data acquisition is also available for data replay. In one case, users reported a performance increase by a factor of 10 when using this machine.

  13. First accelerator-based physics of 2014

    CERN Multimedia

    Katarina Anthony

    2014-01-01

    Experiments in the East Area received their first beams from the PS this week. Theirs is CERN's first accelerator-based physics since LS1 began last year.   For the East Area, the PS performs a so-called slow extraction, where beam is extracted during many revolution periods (the time it takes for particles to go around the PS, ~2.1 μs). The yellow line shows the circulating beam current in the PS, decreasing slowly during the slow extraction, which lasts 350 ms. The green line is the measured proton intensity in the transfer line toward the East Area target. Although LHC physics is still far away, we can now confirm that the injectors are producing physics! In the East Area - the experimental area behind the PS - the T9 and T10 beam lines are providing beams for physics. These beam lines serve experiments such as AIDA - which looks at new detector solutions for future accelerators - and the ALICE Inner Tracking System - which tests components for the ALICE experiment.

  14. Detonation Type Ram Accelerator: A Computational Investigation

    Directory of Open Access Journals (Sweden)

    Sunil Bhat

    2000-01-01

    Full Text Available An analytical model explaining the functional characteristics of a detonation type ram accelerator is presented. Major flow processes, namely (i) supersonic flow over the cone of the projectile, (ii) initiation of a conical shock wave and its reflection from the tube wall, (iii) supersonic combustion, and (iv) the expansion wave and its reflection, are modelled. The Taylor-Maccoll approach is adopted for modelling the flow over the cone of the projectile. Shock reflection is treated in accordance with wave angle theory for flows over a wedge. Prandtl-Meyer analysis is used to model the expansion wave and its reflection. Steady one-dimensional flow with heat transfer, along with the Rayleigh line equation for perfect gases, is used to model supersonic combustion. A computer code is developed to compute the thrust produced by combustion of the gases. Ballistic parameters like thrust-pressure ratio and ballistic efficiency of the accelerator are evaluated; their maximum values are 0.032 and 0.068, respectively. The code indicates the possibility of achieving a high velocity of 7 km/s on utilising a gaseous mixture of 2H2+O2 in the operation. The velocity range suitable for operation of the accelerator lies between 3.8 and 7.0 km/s. The maximum thrust value is 33,721 N, which corresponds to a projectile velocity of 5 km/s.
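
    As a minimal sketch of the Rayleigh-line modelling mentioned above (standard perfect-gas relations, not the authors' code), the stagnation-temperature ratio along a Rayleigh line can be evaluated in Python as follows; the default gamma = 1.4 is a placeholder for air, not the 2H2+O2 mixture used in the study.

        # Minimal sketch (not the authors' code): Rayleigh flow of a calorically perfect gas.
        # T0/T0* indicates how much stagnation-temperature rise (heat addition) a flow at Mach M
        # can absorb before thermally choking at M = 1.
        def rayleigh_T0_ratio(M, gamma=1.4):
            """Stagnation-temperature ratio T0/T0* for Rayleigh flow at Mach number M."""
            return ((gamma + 1.0) * M**2 * (2.0 + (gamma - 1.0) * M**2)) / (1.0 + gamma * M**2) ** 2

        for M in (2.0, 3.0, 5.0):
            print(M, rayleigh_T0_ratio(M))   # ratio tends to (gamma**2 - 1)/gamma**2 ~ 0.49 as M grows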

  15. GPU-accelerated computation of electron transfer.

    Science.gov (United States)

    Höfinger, Siegfried; Acocella, Angela; Pop, Sergiu C; Narumi, Tetsu; Yasuoka, Kenji; Beu, Titus; Zerbetto, Francesco

    2012-11-05

    Electron transfer is a fundamental process that can be studied with the help of computer simulation. The underlying quantum mechanical description renders the problem a computationally intensive application. In this study, we probe the graphics processing unit (GPU) for suitability to this type of problem. Time-critical components are identified via profiling of an existing implementation and several different variants are tested involving the GPU at increasing levels of abstraction. A publicly available library supporting basic linear algebra operations on the GPU turns out to accelerate the computation approximately 50-fold with minor dependence on actual problem size. The performance gain does not compromise numerical accuracy and is of significant value for practical purposes. Copyright © 2012 Wiley Periodicals, Inc.
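
    The specific GPU library is not named in this summary; as an illustration of the idea (offloading dense linear algebra to the GPU), a present-day sketch using CuPy as an assumed stand-in might read:

        # Illustrative sketch only: offloading a dense matrix product to the GPU with CuPy,
        # standing in for the GPU linear-algebra library used in the paper (not named here).
        import numpy as np
        import cupy as cp   # assumes a CUDA-capable GPU with CuPy installed

        n = 4096
        a = np.random.rand(n, n).astype(np.float32)
        b = np.random.rand(n, n).astype(np.float32)

        a_gpu = cp.asarray(a)               # host -> device copy
        b_gpu = cp.asarray(b)
        c_gpu = a_gpu @ b_gpu               # executed as a GPU BLAS (cuBLAS) call
        cp.cuda.Stream.null.synchronize()   # wait for the kernel before timing or using the result

        c = cp.asnumpy(c_gpu)               # device -> host copy; compare against np.dot(a, b) if desired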

  16. CAS Introduction to Accelerator Physics in Spain

    CERN Multimedia

    CERN Bulletin

    2012-01-01

    The CERN Accelerator School (CAS) and the University of Granada jointly organised a course called "Introduction to Accelerator Physics" in Granada, Spain, from 28 October to 9 November, 2012.   The course attracted over 200 applicants, of whom 139 were selected to attend. The students were of 25 different nationalities, coming from countries as far away as Australia, China, Guatemala and India. The intensive programme comprised 38 lectures, 3 seminars, 4 tutorials where the students were split into three groups, a poster session and 7 hours of guided and private study. Feedback from the students was very positive, praising the expertise of the lecturers, as well as the high standard and quality of their lectures. CERN's Director-General, Rolf Heuer, gave a public lecture at the Parque de las Ciencias entitled "The Large Hadron Collider: Unveiling the Universe". In addition to the academic programme, the students had the opportunity to visit the well...

  17. Lecture Notes on Topics in Accelerator Physics

    Energy Technology Data Exchange (ETDEWEB)

    Chao, Alex W.

    2002-11-15

    These are lecture notes that cover a selection of topics, some of them under current research, in accelerator physics. I try to derive the results from first principles, although the students are assumed to have an introductory knowledge of the basics. The topics covered are: (1) Panofsky-Wenzel and Planar Wake Theorems; (2) Echo Effect; (3) Crystalline Beam; (4) Fast Ion Instability; (5) Lawson-Woodward Theorem and Laser Acceleration in Free Space; (6) Spin Dynamics and Siberian Snakes; (7) Symplectic Approximation of Maps; (8) Truncated Power Series Algebra; and (9) Lie Algebra Technique for nonlinear Dynamics. The purpose of these lectures is not to elaborate, but to prepare the students so that they can do their own research. Each topic can be read independently of the others.

  18. Lecture Notes on Topics in Accelerator Physics

    CERN Document Server

    Chao, A W

    2002-01-01

    These are lecture notes that cover a selection of topics, some of them under current research, in accelerator physics. I try to derive the results from first principles, although the students are assumed to have an introductory knowledge of the basics. The topics covered are: (1) Panofsky-Wenzel and Planar Wake Theorems; (2) Echo Effect; (3) Crystalline Beam; (4) Fast Ion Instability; (5) Lawson-Woodward Theorem and Laser Acceleration in Free Space; (6) Spin Dynamics and Siberian Snakes; (7) Symplectic Approximation of Maps; (8) Truncated Power Series Algebra; and (9) Lie Algebra Technique for nonlinear Dynamics. The purpose of these lectures is not to elaborate, but to prepare the students so that they can do their own research. Each topic can be read independently of the others.

  19. Better physical activity classification using smartphone acceleration sensor.

    Science.gov (United States)

    Arif, Muhammad; Bilal, Mohsin; Kattan, Ahmed; Ahamed, S Iqbal

    2014-09-01

    Obesity is becoming one of the serious problems for the health of the worldwide population. Social interaction on mobile phones and computers via internet-based social networks is one of the major causes of the lack of physical activity. For the health specialist, it is important to track the physical activities of obese or overweight patients to supervise weight-loss control. In this study, the acceleration sensor present in a smartphone is used to monitor the physical activity of the user. Physical activities including walking, jogging, sitting, standing, walking upstairs and walking downstairs are classified. Time-domain features are extracted from the acceleration data recorded by the smartphone during the different physical activities. The time and space complexity of the whole framework are reduced by optimal feature-subset selection and pruning of instances. Classification results for the six physical activities are reported in this paper. Using simple time-domain features, 99% classification accuracy is achieved. Furthermore, attribute-subset selection is used to remove redundant features and to minimize the time complexity of the algorithm. A subset of 30 features produced more than 98% classification accuracy for the six physical activities.
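    As a rough sketch of the kind of pipeline described here, the snippet below computes a few time-domain features per window of 3-axis accelerometer data and trains a standard classifier. The specific classifier and feature list are illustrative assumptions, not necessarily those used in the paper.

```python
# Sketch: time-domain features from windowed 3-axis accelerometer data, then a
# standard classifier. RandomForest and the feature set are illustrative choices.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(acc):
    """acc: (n_samples, 3) array of x, y, z accelerations for one window."""
    feats = []
    for axis in range(3):
        a = acc[:, axis]
        feats += [a.mean(), a.std(), a.min(), a.max(),
                  np.mean(np.abs(a - a.mean()))]        # mean absolute deviation
    feats.append(np.mean(np.linalg.norm(acc, axis=1)))  # mean resultant magnitude
    return np.array(feats)

def train(windows, labels):
    """windows: list of (n_samples, 3) arrays; labels: activity per window."""
    X = np.vstack([window_features(w) for w in windows])
    return RandomForestClassifier(n_estimators=100).fit(X, labels)
```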

  20. Physical computation and cognitive science

    CERN Document Server

    Fresco, Nir

    2014-01-01

    This book presents a study of digital computation in contemporary cognitive science. Digital computation is a highly ambiguous concept, as there is no common core definition for it in cognitive science. Since this concept plays a central role in cognitive theory, an adequate cognitive explanation requires an explicit account of digital computation. More specifically, it requires an account of how digital computation is implemented in physical systems. The main challenge is to deliver an account encompassing the multiple types of existing models of computation without ending up in pancomputationalism, that is, the view that every physical system is a digital computing system. This book shows that only two accounts, among the ones examined by the author, are adequate for explaining physical computation. One of them is the instructional information processing account, which is developed here for the first time.   “This book provides a thorough and timely analysis of differing accounts of computation while adv...

  1. Accelerator physics and technology research toward future multi-MW proton accelerators

    CERN Document Server

    Shiltsev, V; Romanenko, A; Valishev, A; Zwaska, R

    2015-01-01

    The recent P5 report identified accelerator-based neutrino and rare-decay physics research as a centrepiece of the US domestic HEP program. Operation, upgrade and development of the accelerators for the near-term and longer-term particle physics program at the Intensity Frontier face formidable challenges. Here we discuss accelerator physics and technology research toward future multi-MW proton accelerators.

  2. Computational physics: a perspective.

    Science.gov (United States)

    Stoneham, A M

    2002-06-15

    Computing comprises three distinct strands: hardware, software and the ways they are used in real or imagined worlds. Its use in research is more than writing or running code. Having something significant to compute and deploying judgement in what is attempted and achieved are especially challenging. In science or engineering, one must define a central problem in computable form, run such software as is appropriate and, last but by no means least, convince others that the results are both valid and useful. These several strands are highly interdependent. A major scientific development can transform disparate aspects of information and computer technologies. Computers affect the way we do science, as well as changing our personal worlds. Access to information is being transformed, with consequences beyond research or even science. Creativity in research is usually considered uniquely human, with inspiration a central factor. Scientific and technological needs are major forces in innovation, and these include hardware and software opportunities. One can try to define the scientific needs for established technologies (atomic energy, the early semiconductor industry), for rapidly developing technologies (advanced materials, microelectronics) and for emerging technologies (nanotechnology, novel information technologies). Did these needs define new computing, or was science diverted into applications of then-available codes? Regarding credibility, why is it that engineers accept computer realizations when designing engineered structures, whereas predictive modelling of materials has yet to achieve industrial confidence outside very special cases? The tensions between computing and traditional science are complex, unpredictable and potentially powerful.

  3. Pulsed power accelerator for material physics experiments

    Directory of Open Access Journals (Sweden)

    D. B. Reisman

    2015-09-01

    Full Text Available We have developed the design of Thor: a pulsed power accelerator that delivers a precisely shaped current pulse with a peak value as high as 7 MA to a strip-line load. The peak magnetic pressure achieved within a 1-cm-wide load is as high as 100 GPa. Thor is powered by as many as 288 decoupled and transit-time isolated bricks. Each brick consists of a single switch and two capacitors connected electrically in series. The bricks can be individually triggered to achieve a high degree of current pulse tailoring. Because the accelerator is impedance matched throughout, capacitor energy is delivered to the strip-line load with an efficiency as high as 50%. We used iterative finite element method (FEM), circuit, and magnetohydrodynamic simulations to develop an optimized accelerator design. When powered by 96 bricks, Thor delivers as much as 4.1 MA to a load, and achieves peak magnetic pressures as high as 65 GPa. When powered by 288 bricks, Thor delivers as much as 6.9 MA to a load, and achieves magnetic pressures as high as 170 GPa. We have developed an algebraic calculational procedure that uses the single brick basis function to determine the brick-triggering sequence necessary to generate a highly tailored current pulse time history for shockless loading of samples. Thor will drive a wide variety of magnetically driven shockless ramp compression, shockless flyer plate, shock-ramp, equation of state, material strength, phase transition, and other advanced material physics experiments.
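    The brick-triggering idea (superposing time-shifted copies of a single-brick current basis function to approximate a tailored pulse) can be pictured with the toy sketch below. The greedy firing rule and all waveforms are assumptions for illustration only; they are not the algebraic procedure of the paper.

```python
# Toy illustration of pulse tailoring by superposing time-shifted copies of a
# single-brick basis function b(t). The greedy rule (fire the next brick when
# the summed current drops below the target ramp) is an illustrative assumption.
import numpy as np

t = np.linspace(0.0, 3e-6, 3000)                     # 3 us window
b = np.where(t < 1e-6, np.sin(np.pi * t / 1e-6), 0)  # idealized single-brick pulse
target = np.linspace(0.0, 20.0, t.size)              # desired ramped current (a.u.)

total = np.zeros_like(t)
triggers = []
for i in range(t.size):
    if total[i] < target[i] and len(triggers) < 96:   # up to 96 bricks available
        triggers.append(t[i])
        total += np.interp(t - t[i], t, b, left=0.0, right=0.0)
print(f"{len(triggers)} bricks fired")
```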

  4. Cosmic Acceleration, Dark Energy and Fundamental Physics

    CERN Document Server

    Turner, Michael Stanley

    2007-01-01

    A web of interlocking observations has established that the expansion of the Universe is speeding up and not slowing, revealing the presence of some form of repulsive gravity. Within the context of general relativity the cause of cosmic acceleration is a highly elastic (p ~ -ρ), very smooth form of energy called "dark energy" accounting for about 75% of the Universe. The "simplest" explanation for dark energy is the zero-point energy density associated with the quantum vacuum; however, all estimates for its value are many orders-of-magnitude too large. Other ideas for dark energy include a very light scalar field or a tangled network of topological defects. An alternate explanation invokes gravitational physics beyond general relativity. Observations and experiments underway and more precise cosmological measurements and laboratory experiments planned for the next decade will test whether or not dark energy is the quantum energy of the vacuum or something more exotic, and whether or not general relati...

  5. Tevatron accelerator physics and operation highlights

    CERN Document Server

    Valishev, A

    2011-01-01

    The performance of the Tevatron collider demonstrated continuous growth over the course of Run II, with the peak luminosity reaching 4×10^32 cm^-2 s^-1, and the weekly integration rate exceeding 70 pb^-1. This report presents a review of the most important advances that contributed to this performance improvement, including beam dynamics modeling, precision optics measurements and stability control, implementation of collimation during low-beta squeeze. Algorithms employed for optimization of the luminosity integration are presented and the lessons learned from high-luminosity operation are discussed. Studies of novel accelerator physics concepts at the Tevatron are described, such as the collimation techniques using crystal collimator and hollow electron beam, and compensation of beam-beam effects.

  6. Ultimate physical limits to computation

    CERN Document Server

    Lloyd, S

    1999-01-01

    Computers are physical systems: what they can and cannot do is dictated by the laws of physics. In particular, the speed with which a physical device can process information is limited by its energy, and the amount of information that it can process is limited by the number of degrees of freedom it possesses. The way in which it processes information is determined by the forces of nature that the computer has at its disposal. This paper explores the fundamental physical limits of computation as determined by the speed of light c, the quantum scale as given by Planck's constant h, and the gravitational constant G. As an example, quantitative bounds are put to the computational power of an 'ultimate laptop' with a mass of one kilogram confined to a volume of one liter.
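    The headline speed limit in Lloyd's paper follows from bounding the operation rate by 2E/(πħ) (a Margolus-Levitin-type limit) with E the total energy of the 1 kg laptop; the snippet below simply evaluates that bound, giving roughly 5×10^50 operations per second.

```python
# Back-of-envelope evaluation of the "ultimate laptop" speed limit:
# at most 2E/(pi*hbar) elementary logical operations per second for energy E.
import math

c = 2.998e8        # speed of light, m/s
hbar = 1.055e-34   # reduced Planck constant, J*s
m = 1.0            # laptop mass, kg

E = m * c**2                                # rest-mass energy, ~9e16 J
ops_per_second = 2 * E / (math.pi * hbar)
print(f"{ops_per_second:.2e} operations per second")  # roughly 5e50
```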

  7. The Physics of Quantum Computation

    Science.gov (United States)

    Falci, Giuseppe; Paladino, Elisabette

    2015-10-01

    Quantum Computation has emerged in the past decades as a consequence of down-scaling of electronic devices to the mesoscopic regime and of advances in the ability of controlling and measuring microscopic quantum systems. QC has many interdisciplinary aspects, ranging from physics and chemistry to mathematics and computer science. In these lecture notes we focus on physical hardware, present day challenges and future directions for design of quantum architectures.

  8. Computational Methods in Plasma Physics

    CERN Document Server

    Jardin, Stephen

    2010-01-01

    Assuming no prior knowledge of plasma physics or numerical methods, Computational Methods in Plasma Physics covers the computational mathematics and techniques needed to simulate magnetically confined plasmas in modern magnetic fusion experiments and future magnetic fusion reactors. Largely self-contained, the text presents the basic concepts necessary for the numerical solution of partial differential equations. Along with discussing numerical stability and accuracy, the author explores many of the algorithms used today in enough depth so that readers can analyze their stability, efficiency,

  9. Quantum computing classical physics.

    Science.gov (United States)

    Meyer, David A

    2002-03-15

    In the past decade, quantum algorithms have been found which outperform the best classical solutions known for certain classical problems as well as the best classical methods known for simulation of certain quantum systems. This suggests that they may also speed up the simulation of some classical systems. I describe one class of discrete quantum algorithms which do so--quantum lattice-gas automata--and show how to implement them efficiently on standard quantum computers.

  10. LHC accelerator physics and technology challenges

    CERN Document Server

    Evans, Lyndon R

    1998-01-01

    The Large Hadron Collider (LHC) incorporates many technological innovations in order to achieve its design objectives at the lowest cost. The two-in-one magnet design, with the two magnetic channels integrated into a common yoke, has proved to be an economical alternative to two separate rings and allows enough free space in the existing (LEP) tunnel for a possible future re-installation of a lepton ring for e-p physics. In order to achieve the design energy of 7 TeV per beam, with a dipole field of 8.3 T, the superconducting magnet system must operate in superfluid helium at 1.9 K. The LHC will be the first hadron machine to produce appreciable synchrotron radiation which, together with the heat load due to image currents, has to be absorbed at cryogenic temperatures. A brief review of the machine design is given and some of the main technological and accelerator physics issues are discussed.

  11. CAS Accelerator Physics (RF for Accelerators) in Denmark

    CERN Document Server

    Barbara Strasser

    2010-01-01

    The CERN Accelerator School (CAS) and Aarhus University jointly organised a specialised course on RF for Accelerators, at the Ebeltoft Strand Hotel, Denmark from 8 to 17 June 2010.   The challenging programme focused on the introduction of the underlying theory, the study and the performance of the different components involved in RF systems, the RF gymnastics and RF measurements and diagnostics. This academic part was supplemented with three afternoons dedicated to practical hands-on exercises. The school was very successful, with 100 participants representing 25 nationalities. Feedback from the participants was extremely positive, praising the expertise and enthusiasm of the lecturers, as well as the high standard and excellent quality of their lectures. In addition to the academic programme, the participants were able to visit a small industrial exhibition organised by Aarhus University and take part in a one-day excursion consisting of a visit of the accelerators operated ...

  12. Computational Physics Across the Disciplines

    Science.gov (United States)

    Crespi, Vincent; Lammert, Paul; Engstrom, Tyler; Owen, Ben

    2011-03-01

    In this informal talk, I will present two case studies of the unexpected convergence of computational techniques across disciplines. First, the marriage of neutron star astrophysics and the materials theory of the mechanical and thermal response of crystalline solids. Although the lower reaches of a neutron star host exotic nuclear physics, the upper few meters of the crust exist in a regime that is surprisingly amenable to standard molecular dynamics simulation, albeit in a physical regime of density orders of magnitude different from those familiar to most condensed matter folk. Computational results on shear strength, thermal conductivity, and other properties here are very relevant to possible gravitational wave signals from these sources. The second example connects not two disciplines of computational physics, but experimental and computational physics, and not from the traditional direction of computation progressively approaching experiment. Instead, experiment is approaching computation: regular lattices of single-domain magnetic islands whose magnetic microstates can be exhaustively enumerated by magnetic force microscopy. The resulting images of island magnetization patterns look essentially like the results of Monte Carlo simulations of Ising systems... statistical physics with the microstate revealed.

  13. Basic concepts in computational physics

    CERN Document Server

    A Stickler, Benjamin

    2014-01-01

    With the development of ever more powerful computers, a new branch of physics and engineering evolved over the last few decades: Computer Simulation or Computational Physics. It serves two main purposes: - Solution of complex mathematical problems such as differential equations, minimization/optimization, or high-dimensional sums/integrals. - Direct simulation of physical processes, as, for instance, molecular dynamics or Monte-Carlo simulation of physical/chemical/technical processes. Consequently, the book is divided into two main parts: Deterministic methods and stochastic methods. Based on concrete problems, the first part discusses numerical differentiation and integration, and the treatment of ordinary differential equations. This is augmented by notes on the numerics of partial differential equations. The second part discusses the generation of random numbers, summarizes the basics of stochastics, which is then followed by the introduction of various Monte-Carlo (MC) methods. Specific emphasis is on MARK...

  14. LHC Accelerator Physics and Technology Challenges

    CERN Document Server

    Evans, Lyndon R

    1999-01-01

    The Large Hadron Collider (LHC) incorporates many technological innovations in order to achieve its design objectives at the lowest cost. The two-in-one magnet design, with the two magnetic channels integrated into a common yoke, has proved to be an economical alternative to two separate rings and allows enough free space in the existing (LEP) tunnel for a possible future re-installation of a lepton ring for e-p physics. In order to achieve the design energy of 7 TeV per beam, with a dipole field of 8.3 T, the superconducting magnet system must operate in superfluid helium at 1.9 K. This requires further development of cold compressors similar to those first used at CEBAF. The LHC will be the first hadron machine to produce appreciable synchrotron radiation which, together with the heat load due to image currents, has to be absorbed at cryogenic temperatures. Finally, the LHC is the first major CERN accelerator project built in collaboration with other laboratories. A brief review of the machine design is giv...

  15. Introduction to the overall physics design of CSNS accelerators

    Institute of Scientific and Technical Information of China (English)

    WANG Sheng; FANG Shou-Xian; FU Shi-Nian; LIU Wei-Bin; OUYANG Hua-Fu; QIN Qing; TANG Jing-Yu; WEI Jie

    2009-01-01

    The China Spallation Neutron Source (CSNS) is an accelerator-based facility. The accelerator of CSNS consists of a low energy linac, a Rapid Cycling Synchrotron (RCS) and two beam transport lines. The overall physics design of the CSNS accelerator is described, including the design principle, the choice of the main parameters and the design of each part of the accelerator. The key problems of the physics design, such as beam loss and control, are also discussed. The interfaces between the different parts of the accelerator, as well as between the accelerator and the target, are also introduced.

  16. Accelerated Matrix Element Method with Parallel Computing

    CERN Document Server

    Schouten, Doug; Stelzer, Bernd

    2014-01-01

    The matrix element method utilizes ab initio calculations of probability densities as powerful discriminants for processes of interest in experimental particle physics. The method has already been used successfully at previous and current collider experiments. However, the computational complexity of this method for final states with many particles and degrees of freedom sets it at a disadvantage compared to supervised classification methods such as decision trees, k nearest-neighbour, or neural networks. This note presents a concrete implementation of the matrix element technique using graphics processing units. Due to the intrinsic parallelizability of multidimensional integration, dramatic speedups can be readily achieved, which makes the matrix element technique viable for general usage at collider experiments.
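    The reason the method parallelizes so well is that its core operation is a high-dimensional Monte Carlo integral whose integrand is evaluated independently at millions of phase-space points. The sketch below shows that pattern with CuPy and a stand-in integrand; a real analysis would evaluate the squared matrix element times detector transfer functions instead.

```python
# Sketch of why the matrix element method maps well onto GPUs: the phase-space
# integral is an embarrassingly parallel Monte Carlo sum. The integrand below
# is a stand-in, not a physical matrix element.
import cupy as cp

def integrand(x):
    # placeholder for |M(x)|^2 * W(x_obs | x); x has shape (n_points, n_dims)
    return cp.exp(-cp.sum(x**2, axis=1))

n_points, n_dims = 2_000_000, 6
x = cp.random.uniform(-5.0, 5.0, size=(n_points, n_dims))  # sampling volume V
volume = 10.0 ** n_dims
estimate = volume * cp.mean(integrand(x))
print(float(estimate))  # MC estimate of the 6-D Gaussian integral, about pi^3 ~ 31
```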

  17. Computational/HPC Physics Education

    Science.gov (United States)

    Landau, Rubin H.

    1997-08-01

    The Physics group in NACSE (an NSF Metacenter Regional Alliance) has developed a variety of materials to be used in computational physics education and to assist working scientists and engineers. Our emphasis is to exploit Web technology to better teach about and improve the use of HPC resources in physics. We will demonstrate multimedia, interactive Web tutorials (http://nacphy.physics.orst.edu/) (Wiley, 1997). Also demonstrated will be tutorials to assist physicists with visualizations, HPC library use, PVM, and, in particular, Coping with Unix, an Interactive Survival Kit for Scientists. These latter tutorials use some special Web technology (Webterm) we developed which makes it possible to connect to a remote Unix machine and follow the lessons from any Web browser supporting Java --- even browsers on non-Unix computers such as PCs or Macs.

  18. Basic concepts in computational physics

    CERN Document Server

    Stickler, Benjamin A

    2016-01-01

    This new edition is a concise introduction to the basic methods of computational physics. Readers will discover the benefits of numerical methods for solving complex mathematical problems and for the direct simulation of physical processes. The book is divided into two main parts: Deterministic methods and stochastic methods in computational physics. Based on concrete problems, the first part discusses numerical differentiation and integration, as well as the treatment of ordinary differential equations. This is extended by a brief introduction to the numerics of partial differential equations. The second part deals with the generation of random numbers, summarizes the basics of stochastics, and subsequently introduces Monte-Carlo (MC) methods. Specific emphasis is on Markov chain MC algorithms. The final two chapters discuss data analysis and stochastic optimization. All this is again motivated and augmented by applications from physics. In addition, the book offers a number of appendices to provide the read...

  19. The Role of Computing in High-Energy Physics.

    Science.gov (United States)

    Metcalf, Michael

    1983-01-01

    Examines present and future applications of computers in high-energy physics. Areas considered include high-energy physics laboratories, accelerators, detectors, networking, off-line analysis, software guidelines, event sizes and volumes, graphics applications, event simulation, theoretical studies, and future trends. (JN)

  20. The physics of accelerator driven sub-critical reactors

    Indian Academy of Sciences (India)

    S B Degweker; Biplab Ghosh; Anil Bajpal; S D Pranjape

    2007-02-01

    In recent years, there has been an increasing worldwide interest in accelerator driven systems (ADS) due to their perceived superior safety characteristics and their potential for burning actinides and long-lived fission products. Indian interest in ADS has an additional dimension, which is related to our planned large-scale thorium utilization for future nuclear energy generation. The physics of ADS is quite different from that of critical reactors. As such, physics studies on ADS reactors are necessary for gaining an understanding of these systems. Development of theoretical tools and experimental facilities for studying the physics of ADS reactors constitutes an important aspect of the ADS development program at BARC. This includes computer codes for burnup studies based on transport theory and Monte Carlo methods, codes for studying the kinetics of ADS, sub-critical facilities driven by 14 MeV neutron generators for ADS experiments, and the development of sub-criticality measurement methods. The paper discusses the physics issues specific to ADS reactors and presents the status of the reactor physics program and some of the ADS concepts under study.

  1. The Influence of Accelerator Science on Physics Research

    Science.gov (United States)

    Haussecker, Enzo F.; Chao, Alexander W.

    2011-06-01

    We evaluate accelerator science in the context of its contributions to the physics community. We address the problem of quantifying these contributions and present a scheme for a numerical evaluation of them. We show, using a statistical sample of important developments in modern physics, that accelerator science has influenced 28% of post-1938 physicists and also 28% of post-1938 physics research. We also examine how the influence of accelerator science has evolved over time, and show that on average it has contributed to Nobel Prize-winning physics research every 2.9 years.

  2. Operational aspects of experimental accelerator physics

    Energy Technology Data Exchange (ETDEWEB)

    Decker, G.A.

    1995-07-01

    During the normal course of high energy storage ring operations, it is customary for blocks of time to be allotted to something called "machine studies," or more simply, just "studies." It is during these periods of time that observations and measurements of accelerator behavior are actually performed. Almost invariably these studies are performed in support of normal machine operations. The machine physicist is either attempting to improve machine performance, or more often trying to recover previously attained "good" operation, for example after an extended machine down period. For the latter activity, a good portion of machine studies time is usually devoted to "beam tuning" activities: those standard measurements and adjustments required to recover good operations. Before continuing, please note that this paper is not intended to be comprehensive. It is intended solely to reflect one accelerator physicist's impressions as to what goes on in an accelerator control room. Many topics are discussed, some in more detail than others, and it is not the intention that the techniques described herein be applied verbatim to any existing accelerator. It is hoped, however, that by reading through the various sections, scientists, including accelerator physicists, engineers, and accelerator beam users, will come to appreciate the types of operations that are required to make an accelerator work.

  3. Computer tools in particle physics

    CERN Document Server

    Vicente, Avelino

    2015-01-01

    The field of particle physics is living through very exciting times, with a plethora of experiments looking for new physics in complementary ways. This has made it increasingly necessary to obtain precise predictions in new physics models in order to be ready for a discovery that might be just around the corner. However, analyzing new models and studying their phenomenology can be really challenging. Computing mass matrices, interaction vertices and decay rates is already a tremendous task. In addition, obtaining predictions for the dark matter relic density and its detection prospects, computing flavor observables or estimating the LHC reach in certain collider signals constitutes quite a technical task due to the precision level that is currently required. For this reason, computer tools such as SARAH, MicrOmegas, MadGraph, SPheno or FlavorKit have become quite popular, and many physicists use them on a daily basis. In this course we will learn how to use these computer tools to explore new physics models and get robus...

  4. Computational Examination of Parameters Influencing Practicability of Ram Accelerator

    Directory of Open Access Journals (Sweden)

    Sunil Bhat

    2004-07-01

    Full Text Available The problems concerning the practicability of a ram accelerator, such as intense in-bore projectile ablation, the large accelerator tube length needed to achieve high projectile muzzle velocity, and the high entry velocity of the projectile required to start the accelerator, have been examined. Computational models are presented for processes such as projectile ablation and the flow in the aero-window used as the accelerator tube-end closure device in the case of high drive-gas filling pressure in the ram accelerator tube. A new projectile design to minimise the starting velocity of the ram accelerator is discussed. The possibility of deploying the ram accelerator in a defence-oriented role has been investigated to utilise its high velocity potential.

  5. Physical Intelligence and Thermodynamic Computing

    Directory of Open Access Journals (Sweden)

    Robert L. Fry

    2017-03-01

    Full Text Available This paper proposes that intelligent processes can be completely explained by thermodynamic principles. They can equally be described by information-theoretic principles that, from the standpoint of the required optimizations, are functionally equivalent. The underlying theory arises from two axioms regarding distinguishability and causality. Their consequence is a theory of computation that applies to the only two kinds of physical processes possible—those that reconstruct the past and those that control the future. Dissipative physical processes fall into the first class, whereas intelligent ones comprise the second. The first kind of process is exothermic and the latter is endothermic. Similarly, the first process dumps entropy and energy to its environment, whereas the second reduces entropy while requiring energy to operate. It is shown that high intelligence efficiency and high energy efficiency are synonymous. The theory suggests the usefulness of developing a new computing paradigm called Thermodynamic Computing to engineer intelligent processes. The described engineering formalism for the design of thermodynamic computers is a hybrid combination of information theory and thermodynamics. Elements of the engineering formalism are introduced in the reverse-engineering of a cortical neuron. The cortical neuron provides perhaps the simplest and most insightful example of a thermodynamic computer possible. It can be seen as a basic building block for constructing more intelligent thermodynamic circuits.

  6. Information technology and computational physics

    CERN Document Server

    Kóczy, László; Mesiar, Radko; Kacprzyk, Janusz

    2017-01-01

    A broad spectrum of modern Information Technology (IT) tools, techniques, main developments and still open challenges is presented. Emphasis is on new research directions in various fields of science and technology that are related to data analysis, data mining, knowledge discovery, information retrieval, clustering and classification, decision making and decision support, control, computational mathematics and physics, to name a few. Applications in many relevant fields are presented, notably in telecommunication, social networks, recommender systems, fault detection, robotics, image analysis and recognition, electronics, etc. The methods used by the authors range from high level formal mathematical tools and techniques, through algorithmic and computational tools, to modern metaheuristics.

  7. CAS Introduction to Accelerator Physics in Bulgaria

    CERN Multimedia

    CERN Bulletin

    2010-01-01

    The CERN Accelerator School (CAS) and the Institute for Nuclear Research & Nuclear Energy (INRNE – Bulgarian Academy of Sciences) jointly organised a course on Introduction to Accelerators, at the Grand Hotel Varna, Bulgaria, from 19 September to 1 October, 2010.   CERN Accelerator School group photo. The course was extremely well attended with 109 participants representing 34 different nationalities, coming from countries as far away as Australia, Canada and Vietnam. The intensive programme comprised 39 lectures, 3 seminars, 4 tutorials where the students were split into three groups, a poster session where students could present their own work, and 7 hours of guided and private study. Feedback from the participants was extremely positive, praising the expertise and enthusiasm of the lecturers, as well as the high standard and excellent quality of their lectures. For the first time at CAS, the CERN Director-General, Rolf Heuer, visited the school and presented a seminar entitled...

  8. Berkeley Lab Computing Sciences: Accelerating Scientific Discovery

    OpenAIRE

    Hules, John A.

    2009-01-01

    Scientists today rely on advances in computer science, mathematics, and computational science, as well as large-scale computing and networking facilities, to increase our understanding of ourselves, our planet, and our universe. Berkeley Lab's Computing Sciences organization researches, develops, and deploys new tools and technologies to meet these needs and to advance research in such areas as global climate change, combustion, fusion energy, nanotechnology, biology, and astrophysics.

  9. Electromagnetic Physics Models for Parallel Computing Architectures

    Science.gov (United States)

    Amadio, G.; Ananya, A.; Apostolakis, J.; Aurora, A.; Bandieramonte, M.; Bhattacharyya, A.; Bianchini, C.; Brun, R.; Canal, P.; Carminati, F.; Duhem, L.; Elvira, D.; Gheata, A.; Gheata, M.; Goulas, I.; Iope, R.; Jun, S. Y.; Lima, G.; Mohanty, A.; Nikitina, T.; Novak, M.; Pokorski, W.; Ribon, A.; Seghal, R.; Shadura, O.; Vallecorsa, S.; Wenzel, S.; Zhang, Y.

    2016-10-01

    The recent emergence of hardware architectures characterized by many-core or accelerated processors has opened new opportunities for concurrent programming models taking advantage of both SIMD and SIMT architectures. GeantV, a next generation detector simulation, has been designed to exploit both the vector capability of mainstream CPUs and multi-threading capabilities of coprocessors including NVidia GPUs and Intel Xeon Phi. The characteristics of these architectures are very different in terms of the vectorization depth and type of parallelization needed to achieve optimal performance. In this paper we describe implementation of electromagnetic physics models developed for parallel computing architectures as a part of the GeantV project. Results of preliminary performance evaluation and physics validation are presented as well.
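    A schematic of the track-batching idea (evaluate a physics model on a whole array of tracks at once so SIMD lanes stay full, rather than looping track by track) is sketched below with NumPy; it is not GeantV code, and the energy-loss model is a placeholder.

```python
# Not GeantV code: a schematic of batch (structure-of-arrays) track processing,
# where a physics model is applied to a whole basket of tracks at once.
import numpy as np

def step_energy_loss(energies_mev, dedx_mev_per_cm, step_cm):
    """Vectorized continuous energy loss for a basket of tracks (placeholder model)."""
    return np.maximum(energies_mev - dedx_mev_per_cm * step_cm, 0.0)

basket = np.random.uniform(1.0, 100.0, size=10_000)   # MeV, one entry per track
basket = step_energy_loss(basket, dedx_mev_per_cm=2.0, step_cm=0.5)
```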

  10. Neutrino Physics with Accelerator Driven Subcritical Reactors

    CERN Document Server

    Ciuffoli, Emilio; Zhao, Fengyi

    2015-01-01

    Accelerator driven system (ADS) subcritical nuclear reactors are under development around the world. They will be intense sources of 30-50 MeV neutrinos from antimuon decay at rest. These ADS reactor neutrinos can provide a robust test of the LSND anomaly and a precise measurement of the leptonic CP-violating phase delta, including sign(cos(delta)). The first phase of many ADS programs includes the construction of a low energy, high intensity proton or deuteron accelerator, which can yield competitive bounds on sterile neutrinos.

  11. Handbook of accelerator physics and engineering

    CERN Document Server

    Mess, Karl Hubert; Tigner, Maury; Zimmermann, Frank

    2013-01-01

    Edited by internationally recognized authorities in the field, this expanded and updated new edition of the bestselling Handbook, containing more than 100 new articles, is aimed at the design and operation of modern particle accelerators. It is intended as a vade mecum for professional engineers and physicists engaged in these subjects. With a collection of more than 2000 equations, 300 illustrations and 500 graphs and tables, here one will find, in addition to the common formulae of previous compilations, hard-to-find, specialized formulae, recipes and material data pooled from the lifetime experience of many of the world's most able practitioners of the art and science of accelerators.

  12. Accelerator physics analysis with interactive tools

    Energy Technology Data Exchange (ETDEWEB)

    Holt, J.A.; Michelotti, L.

    1993-05-01

    Work is in progress on interactive tools for linear and nonlinear accelerator design, analysis, and simulation using X-based graphics. The BEAMLINE and MXYZPTLK class libraries were used with an X Windows graphics library to build a program for interactively editing lattices and studying their properties.

  13. Applications of accelerator mass spectrometry to nuclear physics and astrophysics

    CERN Document Server

    Guo, Z Y

    2002-01-01

    As an ultra-sensitive analytical method, accelerator mass spectrometry (AMS) is playing an important role in studies of nuclear physics and astrophysics. Applications of AMS to the search for violation of the Pauli exclusion principle and to the study of supernovae are discussed as examples.

  14. Physical properties of maxillofacial elastomers under conditions of accelerated aging.

    Science.gov (United States)

    Yu, R; Koran, A; Craig, R G

    1980-06-01

    The stability of the physical properties of various commercially available maxillofacial prosthetic materials was evaluated with the use of an accelerated aging chamber. The tensile strength, maximum percent elongation, shear strength, tear energy, and Shore A hardness were determined before and after accelerated aging. Results indicate that silicone 44210, an RTV rubber, is a promising elastomer for maxillofacial application.

  15. High Energy Density Physics and Exotic Acceleration Schemes

    Science.gov (United States)

    Cowan, Thomas; Colby, Eric

    2002-12-01

    We summarize the reported results and the principal technical discussions that occurred in our Working Group on High Energy Density Physics and Exotic Acceleration Schemes at the 2002 workshop on Advanced Accelerator Concepts at the Mandalay Beach resort, June 22-28, 2002.

  16. Software Accelerates Computing Time for Complex Math

    Science.gov (United States)

    2014-01-01

    Ames Research Center awarded Newark, Delaware-based EM Photonics Inc. SBIR funding to utilize graphics processing unit (GPU) technology, traditionally used for computer video games, to develop high-performance computing software called CULA. The software gives users the ability to run complex algorithms on personal computers with greater speed. As a result of the NASA collaboration, the number of employees at the company has increased 10 percent.

  17. Scientific computing with multicore and accelerators

    CERN Document Server

    Kurzak, Jakub; Dongarra, Jack

    2010-01-01

    Dense Linear Algebra: Implementing Matrix Multiplication on the Cell B.E., Wesley Alvaro, Jakub Kurzak, and Jack Dongarra; Implementing Matrix Factorizations on the Cell BE, Jakub Kurzak and Jack Dongarra; Dense Linear Algebra for Hybrid GPU-Based Systems, Stanimire Tomov and Jack Dongarra; BLAS for GPUs, Rajib Nath, Stanimire Tomov, and Jack Dongarra. Sparse Linear Algebra: Sparse Matrix-Vector Multiplication on Multicore and Accelerators, Samuel Williams, Nathan B...

  18. Proceedings of B Factories, the state of the art in accelerators, detectors and physics

    Energy Technology Data Exchange (ETDEWEB)

    Hitlin, D. (ed.) (California Inst. of Tech., Pasadena, CA (United States))

    1992-11-01

    The conference B Factories, The State of the Art in Accelerators, Detectors and Physics was held at the Stanford Linear Accelerator Center on April 6-10, 1992. The guiding principle of the conference was to bring together accelerator physicists and high energy experimentalists and theorists at the same time, with the goal of encouraging communication in defining and solving problems in a way which cut across narrow areas of specialization. Thus the conference was, in large measure, two distinct conferences, one involving accelerator specialists, the other theorists and experimentalists. There were initial and closing plenary sessions, and three separate tracks of parallel sessions, called Accelerator, Detector/Physics and Joint Interest sessions. This report contains the papers of the conference; their general topics cover: vacuum systems, lattice design, beam-beam interactions, rf systems, feedback systems, measuring instrumentation, the interaction region, radiation background, particle detectors, particle tracking and identification, data acquisition and computing systems, and particle theory.

  19. Accelerating Iterative Big Data Computing Through MPI

    Institute of Scientific and Technical Information of China (English)

    梁帆; 鲁小亿

    2015-01-01

    Current popular systems, Hadoop and Spark, cannot achieve satisfactory performance because of the inefficient overlapping of computation and communication when running iterative big data applications. The pipeline of computing, data movement, and data management plays a key role for current distributed data computing systems. In this paper, we first analyze the overhead of the shuffle operation in Hadoop and Spark when running the PageRank workload, and then propose an event-driven pipeline and in-memory shuffle design with better overlapping of computation and communication as DataMPI-Iteration, an MPI-based library for iterative big data computing. Our performance evaluation shows DataMPI-Iteration can achieve 9X∼21X speedup over Apache Hadoop, and 2X∼3X speedup over Apache Spark for PageRank and K-means.
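    For orientation, one distributed PageRank power-iteration step with plain mpi4py is sketched below. It only illustrates the iterative compute-then-communicate pattern that DataMPI-Iteration targets; it is not that library's API, and the data layout is an assumption.

```python
# Plain mpi4py sketch of one distributed PageRank iteration (power method).
# Each rank owns a slice of the graph, computes local contributions, and a
# collective Allreduce combines them into the new rank vector.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD

def pagerank_step(local_rows, rank_vec, n, damping=0.85):
    """local_rows: list of (row_id, out-link column indices) owned by this rank."""
    contrib = np.zeros(n)
    for row, cols in local_rows:
        if len(cols):
            contrib[cols] += damping * rank_vec[row] / len(cols)
    new_rank = np.empty(n)
    comm.Allreduce(contrib, new_rank, op=MPI.SUM)   # combine all ranks' contributions
    return new_rank + (1.0 - damping) / n
```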

  20. When does a physical system compute?

    Science.gov (United States)

    Horsman, Clare; Stepney, Susan; Wagner, Rob C; Kendon, Viv

    2014-09-08

    Computing is a high-level process of a physical system. Recent interest in non-standard computing systems, including quantum and biological computers, has brought this physical basis of computing to the forefront. There has been, however, no consensus on how to tell if a given physical system is acting as a computer or not, leading to confusion over novel computational devices, and even claims that every physical event is a computation. In this paper, we introduce a formal framework that can be used to determine whether a physical system is performing a computation. We demonstrate how the abstract computational level interacts with the physical device level, in comparison with the use of mathematical models in experimental science. This powerful formulation allows a precise description of experiments, technology, computation and simulation, giving our central conclusion: physical computing is the use of a physical system to predict the outcome of an abstract evolution. We give conditions for computing, illustrated using a range of non-standard computing scenarios. The framework also covers broader computing contexts, where there is no obvious human computer user. We introduce the notion of a 'computational entity', and its critical role in defining when computing is taking place in physical systems.

  1. Physics with post accelerated beams: nuclear astrophysics

    Science.gov (United States)

    Murphy, A. St J.

    2017-05-01

    In this article, recent studies conducted so far with post-accelerated beams at the ISOLDE facility in the area of nuclear astrophysics are reviewed. Two experiments that each feature novelty and innovation are highlighted in particular. Three future experiments are also briefly presented. Collectively, these works advance our understanding of big bang nucleosynthesis, quiescent and explosive burning in novae and x-ray bursts, and core-collapse supernovae, both in terms of the underlying explosion mechanism and gamma-ray satellite observable radioisotopes.

  2. The HL-LHC accelerator physics challenges

    CERN Document Server

    Fartoukh, S

    2014-01-01

    We review the conceptual baseline of the HL-LHC project, putting into perspective the main beam physics challenges of this new collider in comparison with the existing LHC, and the series of solutions and possible mitigation measures presently envisaged.

  3. Plasma physics via computer simulation

    CERN Document Server

    Birdsall, CK

    2004-01-01

    PART 1: PRIMER. Why attempting to do plasma physics via computer simulation using particles makes good sense; Overall view of a one-dimensional electrostatic program; A one-dimensional electrostatic program ES1; Introduction to the numerical methods used; Projects for ES1; A 1d electromagnetic program EM1; Projects for EM1. PART 2: THEORY. Effects of the spatial grid; Effects of the finite time step; Energy-conserving simulation models; Multipole models; Kinetic theory for fluctuations and noise, collisions; Kinetic properties: theory, experience and heuristic estimates. PART 3: PRACTICE
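    The core ingredients of a 1-D electrostatic PIC code such as ES1 (cloud-in-cell charge deposition and a leapfrog particle push on a periodic grid) can be sketched in a few lines; the field solve is omitted and normalized units are assumed. This is a didactic skeleton, not the ES1 program itself.

```python
# Minimal 1-D electrostatic PIC ingredients in normalized units: cloud-in-cell
# charge deposition and a leapfrog particle push on a periodic grid.
import numpy as np

def deposit_charge(x, ng, dx, q=1.0):
    """Cloud-in-cell deposition of particle charges onto a periodic grid."""
    rho = np.zeros(ng)
    cell = np.floor(x / dx).astype(int) % ng
    frac = x / dx - np.floor(x / dx)
    np.add.at(rho, cell, q * (1.0 - frac) / dx)
    np.add.at(rho, (cell + 1) % ng, q * frac / dx)
    return rho

def push(x, v, E_grid, dx, dt, qm=-1.0, L=1.0):
    """Leapfrog push: gather E at particle positions, advance v then x."""
    ng = E_grid.size
    cell = np.floor(x / dx).astype(int) % ng
    frac = x / dx - np.floor(x / dx)
    E_p = (1.0 - frac) * E_grid[cell] + frac * E_grid[(cell + 1) % ng]
    v_new = v + qm * E_p * dt          # velocity at t + dt/2
    x_new = (x + v_new * dt) % L       # position at t + dt, periodic boundary
    return x_new, v_new
```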

  4. Particle acceleration, transport and turbulence in cosmic and heliospheric physics

    Science.gov (United States)

    Matthaeus, W.

    1992-01-01

    In this progress report, the long term goals, recent scientific progress, and organizational activities are described. The scientific focus of this annual report is in three areas: first, the physics of particle acceleration and transport, including heliospheric modulation and transport, shock acceleration and galactic propagation and reacceleration of cosmic rays; second, the development of theories of the interaction of turbulence and large scale plasma and magnetic field structures, as in winds and shocks; third, the elucidation of the nature of magnetohydrodynamic turbulence processes and the role such turbulence processes might play in heliospheric, galactic, cosmic ray physics, and other space physics applications.

  5. GPU-accelerated micromagnetic simulations using cloud computing

    Science.gov (United States)

    Jermain, C. L.; Rowlands, G. E.; Buhrman, R. A.; Ralph, D. C.

    2016-03-01

    Highly parallel graphics processing units (GPUs) can improve the speed of micromagnetic simulations significantly as compared to conventional computing using central processing units (CPUs). We present a strategy for performing GPU-accelerated micromagnetic simulations by utilizing cost-effective GPU access offered by cloud computing services with an open-source Python-based program for running the MuMax3 micromagnetics code remotely. We analyze the scaling and cost benefits of using cloud computing for micromagnetics.

  6. GPU-accelerated micromagnetic simulations using cloud computing

    CERN Document Server

    Jermain, C L; Buhrman, R A; Ralph, D C

    2015-01-01

    Highly-parallel graphics processing units (GPUs) can improve the speed of micromagnetic simulations significantly as compared to conventional computing using central processing units (CPUs). We present a strategy for performing GPU-accelerated micromagnetic simulations by utilizing cost-effective GPU access offered by cloud computing services with an open-source Python-based program for running the MuMax3 micromagnetics code remotely. We analyze the scaling and cost benefits of using cloud computing for micromagnetics.

  7. High School Physics and the Affordable Computer.

    Science.gov (United States)

    Harvey, Norman L.

    1978-01-01

    Explains how the computer was used in a high school physics course; Project Physics program and individualized study PSSC physics program. Evaluates the capabilities and limitations of a $600 microcomputer system. (GA)

  8. A survey of computational physics introductory computational science

    CERN Document Server

    Landau, Rubin H; Bordeianu, Cristian C

    2008-01-01

    Computational physics is a rapidly growing subfield of computational science, in large part because computers can solve previously intractable problems or simulate natural processes that do not have analytic solutions. The next step beyond Landau's First Course in Scientific Computing and a follow-up to Landau and Páez's Computational Physics, this text presents a broad survey of key topics in computational physics for advanced undergraduates and beginning graduate students, including new discussions of visualization tools, wavelet analysis, molecular dynamics, and computational fluid dynamics

  9. Computational Tools to Accelerate Commercial Development

    Energy Technology Data Exchange (ETDEWEB)

    Miller, David C

    2013-01-01

    The goals of the work reported are: to develop new computational tools and models to enable industry to more rapidly develop and deploy new advanced energy technologies; to demonstrate the capabilities of the CCSI Toolset on non-proprietary case studies; and to deploy the CCSI Toolset to industry. Challenges of simulating carbon capture (and other) processes include: dealing with multiple scales (particle, device, and whole process scales); integration across scales; verification, validation, and uncertainty; and decision support. The tools cover: risk analysis and decision making; validated, high-fidelity CFD; high-resolution filtered sub-models; process design and optimization tools; advanced process control and dynamics; process models; basic data sub-models; and cross-cutting integration tools.

  10. Accelerating scientific computations with mixed precision algorithms

    Science.gov (United States)

    Baboulin, Marc; Buttari, Alfredo; Dongarra, Jack; Kurzak, Jakub; Langou, Julie; Langou, Julien; Luszczek, Piotr; Tomov, Stanimire

    2009-12-01

    On modern architectures, the performance of 32-bit operations is often at least twice as fast as the performance of 64-bit operations. By using a combination of 32-bit and 64-bit floating point arithmetic, the performance of many dense and sparse linear algebra algorithms can be significantly enhanced while maintaining the 64-bit accuracy of the resulting solution. The approach presented here can apply not only to conventional processors but also to other technologies such as Field Programmable Gate Arrays (FPGA), Graphical Processing Units (GPU), and the STI Cell BE processor. Results on modern processor architectures and the STI Cell BE are presented. Program summary: Program title: ITER-REF; Catalogue identifier: AECO_v1_0; Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AECO_v1_0.html; Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland; Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html; No. of lines in distributed program, including test data, etc.: 7211; No. of bytes in distributed program, including test data, etc.: 41 862; Distribution format: tar.gz; Programming language: FORTRAN 77; Computer: desktop, server; Operating system: Unix/Linux; RAM: 512 Mbytes; Classification: 4.8; External routines: BLAS (optional). Nature of problem: On modern architectures, the performance of 32-bit operations is often at least twice as fast as the performance of 64-bit operations. By using a combination of 32-bit and 64-bit floating point arithmetic, the performance of many dense and sparse linear algebra algorithms can be significantly enhanced while maintaining the 64-bit accuracy of the resulting solution. Solution method: Mixed precision algorithms stem from the observation that, in many cases, a single precision solution of a problem can be refined to the point where double precision accuracy is achieved. A common approach to the solution of linear systems, either dense or sparse, is to perform the LU
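    The idea behind ITER-REF is classical LU-based iterative refinement: do the O(n^3) factorization in fast single precision, then recover double-precision accuracy with a few cheap residual corrections. A minimal NumPy/SciPy sketch of that idea (not the FORTRAN 77 program itself) follows.

```python
# Minimal sketch of mixed-precision iterative refinement: factor in single
# precision, refine residuals in double precision. Illustrates the idea behind
# ITER-REF; it is not that program's code.
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def mixed_precision_solve(A, b, iters=5):
    lu, piv = lu_factor(A.astype(np.float32))             # O(n^3) work in float32
    x = lu_solve((lu, piv), b.astype(np.float32)).astype(np.float64)
    for _ in range(iters):                                 # O(n^2) work per sweep
        r = b - A @ x                                      # residual in float64
        dx = lu_solve((lu, piv), r.astype(np.float32))
        x += dx.astype(np.float64)
    return x

A = np.random.rand(500, 500) + 500 * np.eye(500)           # well-conditioned test
b = np.random.rand(500)
print(np.linalg.norm(A @ mixed_precision_solve(A, b) - b))
```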

  11. The impact of the ISR on accelerator physics and technology

    CERN Document Server

    Bryant, P J

    2012-01-01

    The ISR (Intersecting Storage Rings) were two intersecting proton synchrotron rings each with a circumference of 942 m and eight-fold symmetry that were operational for 13 years from 1971 to 1984. The CERN PS injected 26 GeV/c proton beams into the two rings that could accelerate up to 31.4 GeV/c. The ISR worked for physics with beams of 30-40 A over 40-60 hours with luminosities in its superconducting low-β insertion of 10^31-10^32 cm^-2 s^-1. The ISR demonstrated the practicality of collider beam physics while catalysing a rapid advance in accelerator technologies and techniques.

  12. High Energy Physics Experiments In Grid Computing Networks

    Directory of Open Access Journals (Sweden)

    Andrzej Olszewski

    2008-01-01

    Full Text Available The demand for computing resources used for detector simulations and data analysis in High Energy Physics (HEP) experiments is constantly increasing due to the development of studies of rare physics processes in particle interactions. The latest generation of experiments at the newly built LHC accelerator at CERN in Geneva is planning to use computing networks for their data processing needs. A Worldwide LHC Computing Grid (WLCG) organization has been created to develop a Grid with properties matching the needs of these experiments. In this paper we present the use of Grid computing by HEP experiments and describe activities at the participating computing centers with the case of the Academic Computing Center, ACK Cyfronet AGH, Kraków, Poland.

  13. Handling and Transport of Oversized Accelerator Components and Physics Detectors

    CERN Document Server

    Prodon, S; Guinchard, M; Minginette, P

    2006-01-01

    For cost, planning and organisational reasons, it is often decided to install large pre-built accelerator components and physics detectors. As a result, exceptional surface transports are required from the construction sites to the installation sites. Such heavy transports have been numerous during the LHC installation phase. This paper will describe the different types of transport techniques used to fit the particularities of accelerator and detector components (weight, height, acceleration, planarity) as well as the measurement techniques for monitoring and the logistical aspects (organisation with the police, obstacles on the roads, etc.). As far as oversized equipment is concerned, the lowering into the pit is challenging, as is the transport in tunnel galleries in very scarce space and without handling means attached to the structure, such as overhead travelling cranes. From the PS accelerator to the LHC, handling systems have been developed at CERN to fit these particular working conditions. This pap...

  14. CAS Accelerator Physics (High-Power Hadron Machines) in Spain

    CERN Multimedia

    CAS

    2011-01-01

    The CERN Accelerator School (CAS) and ESS-Bilbao jointly organised a specialised course on High-Power Hadron Machines, held at the Hotel Barceló Nervión in Bilbao, Spain, from 24 May to 2 June, 2011.   CERN Accelerator School students. After recapitulation lectures on the essentials of accelerator physics and review lectures on the different types of accelerators, the programme focussed on the challenges of designing and operating high-power facilities. The particular problems for RF systems, beam instrumentation, vacuum, cryogenics, collimators and beam dumps were examined. Activation of equipment, radioprotection and remote handling issues were also addressed. The school was very successful, with 69 participants of 22 nationalities. Feedback from the participants was extremely positive, praising the expertise and enthusiasm of the lecturers, as well as the high standard and excellent quality of their lectures. In addition to the academic programme, the participants w...

  15. Physics with post-accelerated beams at ISOLDE: nuclear reactions

    Science.gov (United States)

    Di Pietro, A.; Riisager, K.; Van Duppen, P.

    2017-04-01

    Nuclear-reaction studies have until now constituted a minor part of the physics program with post-accelerated beams at ISOLDE, mainly due to the maximum energy of REX-ISOLDE of around 3 MeV/u that limits reaction work to the mass region below A = 100. We give an overview of the current experimental status and of the physics results obtained so far. Finally, the improved conditions given by the HIE-ISOLDE upgrade are described.

  16. Computational Tools for Accelerating Carbon Capture Process Development

    Energy Technology Data Exchange (ETDEWEB)

    Miller, David; Sahinidis, N V; Cozad, A; Lee, A; Kim, H; Morinelly, J; Eslick, J; Yuan, Z

    2013-06-04

    This presentation reports the development of advanced computational tools to accelerate next-generation technology development. These tools are used to develop an optimized process using rigorous models. They include: Process Models; Simulation-Based Optimization; Optimized Process; Uncertainty Quantification; Algebraic Surrogate Models; and Superstructure Optimization (Determine Configuration).

  17. The computer-based control system of the NAC accelerator

    Science.gov (United States)

    Burdzik, G. F.; Bouckaert, R. F. A.; Cloete, I.; Dutoit, J. S.; Kohler, I. H.; Truter, J. N. J.; Visser, K.; Wikner, V. C. S. J.

    The National Accelerator Center (NAC) of the CSIR is building a two-stage accelerator which will provide charged-particle beams for use in medical and research applications. The control system for this accelerator is based on three mini-computers and a CAMAC interfacing network. Closed-loop control is being relegated to the various subsystems of the accelerators, and the computers and CAMAC network will be used in the first instance for data transfer, monitoring and servicing of the control consoles. The processing power of the computers will be utilized for automating start-up and beam-change procedures, for providing flexible and convenient information at the control consoles, for fault diagnosis and for beam-optimizing procedures. Tasks of a localized or dedicated nature are being off-loaded onto microcomputers, which are being used either in front-end devices or as slaves to the mini-computers. On the control consoles only a few instruments for setting and monitoring variables are being provided, but these instruments are universally-linkable to any appropriate machine variable.

  18. Physics codes on parallel computers

    Energy Technology Data Exchange (ETDEWEB)

    Eltgroth, P.G.

    1985-12-04

    An effort is under way to develop physics codes which realize the potential of parallel machines. A new explicit algorithm for the computation of hydrodynamics has been developed which avoids global synchronization entirely. The approach, called the Independent Time Step Method (ITSM), allows each zone to advance at its own pace, determined by local information. The method, coded in FORTRAN, has demonstrated parallelism of greater than 20 on the Denelcor HEP machine. ITSM can also be used to replace current implicit treatments of problems involving diffusion and heat conduction. Four different approaches toward work distribution have been investigated and implemented for the one-dimensional code on the Denelcor HEP. They are "self-scheduled", an ASKFOR monitor, a "queue of queues" monitor, and a distributed ASKFOR monitor. The self-scheduled approach shows the lowest overhead but the poorest speedup. The distributed ASKFOR monitor shows the best speedup and the lowest execution times on the tested problems. 2 refs., 3 figs.
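
    The abstract above describes the core idea of ITSM well enough to sketch it: each zone carries its own clock and a locally determined time step, and whichever zone is furthest behind in time is advanced next, so no global time level or synchronization barrier is ever needed. The C sketch below is an illustrative serial rendering of that scheduling idea only (the report's code is in FORTRAN); the zone structure, names, and placeholder update are assumptions, not code from the report.

```c
/* Serial sketch of the Independent Time Step Method scheduling idea:
 * each zone advances on its own clock using only local information.
 * The physical update is a placeholder; dt would normally be
 * recomputed from local stability (e.g. CFL) criteria after each step. */
#include <stddef.h>

typedef struct {
    double t;      /* local time of this zone              */
    double dt;     /* locally determined time step (> 0)   */
    double state;  /* placeholder for the zone's variables */
} Zone;

/* Index of the zone that is furthest behind in time. */
static size_t most_behind(const Zone *z, size_t n)
{
    size_t k = 0;
    for (size_t i = 1; i < n; i++)
        if (z[i].t < z[k].t)
            k = i;
    return k;
}

/* Advance all zones until every local clock has reached t_end. */
void advance_to(Zone *z, size_t n, double t_end)
{
    for (;;) {
        size_t i = most_behind(z, n);
        if (z[i].t >= t_end)
            break;                                  /* all zones have caught up   */
        z[i].state -= 0.1 * z[i].dt * z[i].state;   /* placeholder local update   */
        z[i].t += z[i].dt;                          /* this zone advances alone   */
    }
}
```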

  19. Physical Computing and Its Scope--Towards a Constructionist Computer Science Curriculum with Physical Computing

    Science.gov (United States)

    Przybylla, Mareen; Romeike, Ralf

    2014-01-01

    Physical computing covers the design and realization of interactive objects and installations and allows students to develop concrete, tangible products of the real world, which arise from the learners' imagination. This can be used in computer science education to provide students with interesting and motivating access to the different topic…

  20. Accelerating Climate and Weather Simulations through Hybrid Computing

    Science.gov (United States)

    Zhou, Shujia; Cruz, Carlos; Duffy, Daniel; Tucker, Robert; Purcell, Mark

    2011-01-01

    Unconventional multi- and many-core processors (e.g. IBM (R) Cell B.E.(TM) and NVIDIA (R) GPU) have emerged as effective accelerators in trial climate and weather simulations. Yet these climate and weather models typically run on parallel computers with conventional processors (e.g. Intel, AMD, and IBM) using Message Passing Interface. To address challenges involved in efficiently and easily connecting accelerators to parallel computers, we investigated using IBM's Dynamic Application Virtualization (TM) (IBM DAV) software in a prototype hybrid computing system with representative climate and weather model components. The hybrid system comprises two Intel blades and two IBM QS22 Cell B.E. blades, connected with both InfiniBand(R) (IB) and 1-Gigabit Ethernet. The system significantly accelerates a solar radiation model component by offloading compute-intensive calculations to the Cell blades. Systematic tests show that IBM DAV can seamlessly offload compute-intensive calculations from Intel blades to Cell B.E. blades in a scalable, load-balanced manner. However, noticeable communication overhead was observed, mainly due to IP over the IB protocol. Full utilization of IB Sockets Direct Protocol and the lower latency production version of IBM DAV will reduce this overhead.

  1. Hempel's dilemma and the physics of computation

    CERN Document Server

    Beenakker, C W J

    2007-01-01

    Carl Gustav Hempel (1905-1997) formulated the dilemma that carries his name in an attempt to determine the boundaries of physics. Where does physics go over into metaphysics? The purpose of this contribution is to indicate how a recently developed field of research, the physics of computation, might offer a new answer to that old question: The boundary between physics and metaphysics is the boundary between what can and what cannot be computed in the age of the universe.

  2. Accelerating Neuroimage Registration through Parallel Computation of Similarity Metric.

    Directory of Open Access Journals (Sweden)

    Yun-Gang Luo

    Full Text Available Neuroimage registration is crucial for brain morphometric analysis and treatment efficacy evaluation. However, existing advanced registration algorithms such as FLIRT and ANTs are not efficient enough for clinical use. In this paper, a GPU implementation of FLIRT with the correlation ratio (CR) as the similarity metric and a GPU accelerated correlation coefficient (CC) calculation for the symmetric diffeomorphic registration of ANTs have been developed. The comparison with their corresponding original tools shows that our accelerated algorithms can greatly outperform the original algorithms in terms of computational efficiency. This paper demonstrates the great potential of applying these registration tools in clinical applications.
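
    For orientation, the correlation coefficient (CC) similarity metric named above is, for two images sampled on the same grid, simply Pearson's correlation over voxel intensities. The serial C sketch below is an illustrative baseline of that reduction, not the GPU kernel from the paper; the function name and the assumption of equal-sized, nonconstant images are ours.

```c
/* Serial reference for the correlation coefficient (CC) similarity metric:
 * Pearson correlation of voxel intensities between a fixed and a moving
 * image on the same grid (n > 0, both images with nonzero variance).
 * GPU implementations parallelize exactly this reduction. */
#include <math.h>
#include <stddef.h>

double correlation_coefficient(const float *fixed, const float *moving, size_t n)
{
    double sf = 0.0, sm = 0.0, sff = 0.0, smm = 0.0, sfm = 0.0;
    for (size_t i = 0; i < n; i++) {
        sf  += fixed[i];
        sm  += moving[i];
        sff += (double)fixed[i] * fixed[i];
        smm += (double)moving[i] * moving[i];
        sfm += (double)fixed[i] * moving[i];
    }
    double cov   = sfm - sf * sm / (double)n;   /* n times the covariance          */
    double var_f = sff - sf * sf / (double)n;   /* n times the variance of fixed   */
    double var_m = smm - sm * sm / (double)n;   /* n times the variance of moving  */
    return cov / sqrt(var_f * var_m);
}
```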

  3. Accelerating Ab Initio Nuclear Physics Calculations with GPUs

    CERN Document Server

    Potter, Hugh; Maris, Pieter; Sosonkina, Masha; Vary, James; Binder, Sven; Calci, Angelo; Langhammer, Joachim; Roth, Robert; Çatalyürek, Ümit; Saule, Erik

    2014-01-01

    This paper describes some applications of GPU acceleration in ab initio nuclear structure calculations. Specifically, we discuss GPU acceleration of the software package MFDn, a parallel nuclear structure eigensolver. We modify the matrix construction stage to run partly on the GPU. On the Titan supercomputer at the Oak Ridge Leadership Computing Facility, this produces a speedup of approximately 2.2x - 2.7x for the matrix construction stage and 1.2x - 1.4x for the entire run.

  4. Quantum computing accelerator I/O : LDRD 52750 final report.

    Energy Technology Data Exchange (ETDEWEB)

    Schroeppel, Richard Crabtree; Modine, Normand Arthur; Ganti, Anand; Pierson, Lyndon George; Tigges, Christopher P.

    2003-12-01

    In a superposition of quantum states, a bit can be in both the states '0' and '1' at the same time. This feature of the quantum bit or qubit has no parallel in classical systems. Currently, quantum computers consisting of 4 to 7 qubits in a 'quantum computing register' have been built. Innovative algorithms suited to quantum computing are now beginning to emerge, applicable to sorting, cryptanalysis, and other applications. A framework for overcoming slightly inaccurate quantum gate interactions and for causing quantum states to survive interactions with the surrounding environment is emerging, called quantum error correction. Thus there is the potential for rapid advances in this field. Although quantum information processing can be applied to secure communication links (quantum cryptography) and to crack conventional cryptosystems, the first few computing applications will likely involve a 'quantum computing accelerator' similar to a 'floating point arithmetic accelerator' interfaced to a conventional Von Neumann computer architecture. This research aims to develop a roadmap for applying Sandia's capabilities to the solution of some of the problems associated with maintaining quantum information, and with getting data into and out of such a 'quantum computing accelerator'. We propose to focus this work on 'quantum I/O technologies' by applying quantum optics on semiconductor nanostructures to leverage Sandia's expertise in semiconductor microelectronic/photonic fabrication techniques, as well as its expertise in information theory, processing, and algorithms. The work will be guided by an understanding of practical requirements of computing and communication architectures. This effort will incorporate ongoing collaboration between 9000, 6000 and 1000 and between junior and senior personnel. Follow-on work to fabricate and evaluate appropriate experimental nano/microstructures will be

  5. Multipactor Physics, Acceleration, and Breakdown in Dielectric-Loaded Accelerating Structures

    Energy Technology Data Exchange (ETDEWEB)

    Fischer, Richard P. [Naval Research Lab., Washington, DC (United States); Gold, Steven H. [Naval Research Lab., Washington, DC (United States)

    2016-07-01

    The objective of this 3-year program is to study the physics issues associated with rf acceleration in dielectric-loaded accelerating (DLA) structures, with a focus on the key issue of multipactor loading, which has been found to cause very significant rf power loss in DLA structures whenever the rf pulsewidth exceeds the multipactor risetime (~10 ns). The experiments are carried out in the X-band magnicon laboratory at the Naval Research Laboratory (NRL) in collaboration with Argonne National Laboratory (ANL) and Euclid Techlabs LLC, who develop the test structures with support from the DoE SBIR program. There are two main elements in the research program: (1) high-power tests of DLA structures using the magnicon output (20 MW @11.4 GHz), and (2) tests of electron acceleration in DLA structures using relativistic electrons from a compact X-band accelerator. The work during this period has focused on a study of the use of an axial magnetic field to suppress multipactor in DLA structures, with several new high power tests carried out at NRL, and on preparation of the accelerator for the electron acceleration experiments.

  6. High-Productivity Computing in Computational Physics Education

    Science.gov (United States)

    Tel-Zur, Guy

    2011-03-01

    We describe the development of a new course in Computational Physics at Ben-Gurion University. This elective course for 3rd-year undergraduates and MSc students is taught during one semester. Computational Physics is by now well accepted as the Third Pillar of Science. This paper's claim is that modern Computational Physics education should also deal with High-Productivity Computing. The traditional approach to teaching Computational Physics emphasizes "Correctness" and then "Accuracy", and we add "Performance". Along with topics in Mathematical Methods and case studies in Physics, the course devotes a significant amount of time to "Mini-Courses" on topics such as: High-Throughput Computing (Condor), Parallel Programming (MPI and OpenMP), How to Build a Beowulf, Visualization, and Grid and Cloud Computing. The course intends to teach neither new physics nor new mathematics; rather, it is focused on an integrated approach to solving problems, starting from the physics problem, the corresponding mathematical solution, the numerical scheme, writing an efficient computer code, and finally analysis and visualization.

  7. GPU in Physics Computation: Case Geant4 Navigation

    CERN Document Server

    Seiskari, Otto; Niemi, Tapio

    2012-01-01

    General purpose computing on graphic processing units (GPU) is a potential method of speeding up scientific computation with low cost and high energy efficiency. We experimented with the particle physics simulation toolkit Geant4 used at CERN to benchmark its geometry navigation functionality on a GPU. The goal was to find out whether Geant4 physics simulations could benefit from GPU acceleration and how difficult it is to modify Geant4 code to run in a GPU. We ported selected parts of Geant4 code to C99 & CUDA and implemented a simple gamma physics simulation utilizing this code to measure efficiency. The performance of the program was tested by running it on two different platforms: NVIDIA GeForce 470 GTX GPU and a 12-core AMD CPU system. Our conclusion was that GPUs can be a competitive alternative to multi-core computers but porting existing software in an efficient way is challenging.

  8. High performance computing for beam physics applications

    Science.gov (United States)

    Ryne, R. D.; Habib, S.

    Several countries are now involved in efforts aimed at utilizing accelerator-driven technologies to solve problems of national and international importance. These technologies have both economic and environmental implications. The technologies include waste transmutation, plutonium conversion, neutron production for materials science and biological science research, neutron production for fusion materials testing, fission energy production systems, and tritium production. All of these projects require a high-intensity linear accelerator that operates with extremely low beam loss. This presents a formidable computational challenge: One must design and optimize over a kilometer of complex accelerating structures while taking into account beam loss to an accuracy of 10 parts per billion per meter. Such modeling is essential if one is to have confidence that the accelerator will meet its beam loss requirement, which ultimately affects system reliability, safety and cost. At Los Alamos, the authors are developing a capability to model ultra-low loss accelerators using the CM-5 at the Advanced Computing Laboratory. They are developing PIC, Vlasov/Poisson, and Langevin/Fokker-Planck codes for this purpose. With slight modification, they have also applied their codes to modeling mesoscopic systems and astrophysical systems. In this paper, they will first describe HPC activities in the accelerator community. Then they will discuss the tools they have developed to model classical and quantum evolution equations. Lastly they will describe how these tools have been used to study beam halo in high current, mismatched charged particle beams.

  9. Statistical and thermal physics with computer applications

    CERN Document Server

    Gould, Harvey

    2010-01-01

    This textbook carefully develops the main ideas and techniques of statistical and thermal physics and is intended for upper-level undergraduate courses. The authors each have more than thirty years' experience in teaching, curriculum development, and research in statistical and computational physics. Statistical and Thermal Physics begins with a qualitative discussion of the relation between the macroscopic and microscopic worlds and incorporates computer simulations throughout the book to provide concrete examples of important conceptual ideas. Unlike many contemporary texts on the

  10. Accelerator physics in ERL based polarized electron ion collider

    Energy Technology Data Exchange (ETDEWEB)

    Hao, Yue [Brookhaven National Lab. (BNL), Upton, NY (United States). Collider-Accelerator Dept.

    2015-05-03

    This talk will present the current accelerator physics challenges and solutions in designing ERL-based polarized electron-hadron colliders, and illustrate them with examples from eRHIC and LHeC designs. These challenges include multi-pass ERL design, highly HOM-damped SRF linacs, cost effective FFAG arcs, suppression of kink instability due to beam-beam effect, and control of ion accumulation and fast ion instabilities.

  11. Search for New Physics in reactor and accelerator experiments

    Science.gov (United States)

    Di Iura, A.; Girardi, I.; Meloni, D.

    2016-01-01

    We consider two scenarios of New Physics: the Large Extra Dimensions (LED), where sterile neutrinos can propagate in a (4+d)-dimensional space-time, and the Non Standard Interactions (NSI), where the neutrino interactions with ordinary matter are parametrized at low energy in terms of effective flavour-dependent complex couplings ε_αβ. We study how these models have an impact on oscillation parameters in reactor and accelerator experiments.

  12. Extreme Physics and Informational/Computational Limits

    Energy Technology Data Exchange (ETDEWEB)

    Di Sia, Paolo, E-mail: paolo.disia@univr.it, E-mail: 10alla33@virgilio.it [Department of Computer Science, Faculty of Science, Verona University, Strada Le Grazie 15, I-37134 Verona (Italy) and Faculty of Computer Science, Free University of Bozen, Piazza Domenicani 3, I-39100 Bozen-Bolzano (Italy)

    2011-07-08

    A sector of current theoretical physics, often called 'extreme physics', deals with topics concerning superstring theories, the multiverse, quantum teleportation, negative energy, and more, that only a few years ago were considered scientific imagination or purely speculative physics. Present experimental lines of evidence and implications of cosmological observations seem, on the contrary, to support such theories. These new physical developments lead to informational limits, such as the quantity of information that a physical system can record, and computational limits, resulting from considerations regarding black holes and space-time fluctuations. In this paper I consider important limits for information and computation resulting in particular from string theories and their foundations.

  13. Computational modeling of high pressure combustion mechanism in scram accelerator

    Energy Technology Data Exchange (ETDEWEB)

    Choi, J.Y. [Pusan Nat. Univ. (Korea); Lee, B.J. [Pusan Nat. Univ. (Korea); Agency for Defense Development, Taejon (Korea); Jeung, I.S. [Pusan Nat. Univ. (Korea); Seoul National Univ. (Korea). Dept. of Aerospace Engineering

    2000-11-01

    A computational study was carried out to analyze high-pressure combustion in a scram accelerator. Fluid dynamic modeling was based on RANS equations for reactive flows, which were solved in a fully coupled manner using a fully implicit-upwind TVD scheme. For the accurate simulation of high-pressure combustion in the ram accelerator, a 9-species, 25-step fully detailed reaction mechanism was incorporated into the existing CFD code previously used for the ram accelerator studies. The mechanism is based on GRI-Mech 2.11, which includes pressure-dependent reaction rate formulations indispensable for the correct prediction of induction time in a high-pressure environment. A real gas equation of state was also included to account for molecular interactions and real gas effects of high-pressure gases. The present combustion modeling is compared with previous 8-step and 19-step mechanisms with the ideal gas assumption. The result shows that mixture ignition characteristics are very sensitive to the combustion mechanism, and different mechanisms result in different reactive flow-field characteristics that have significant relevance to the operation mode and the performance of the scram accelerator. (orig.)

  14. International Conference on Theoretical and Computational Physics

    CERN Document Server

    2016-01-01

    The International Conference on Theoretical and Computational Physics (TCP 2016) will be held from August 24 to 26, 2016 in Xi'an, China. The conference will cover issues in theoretical and computational physics and is dedicated to creating a forum for exchanging the latest research results and sharing advanced research methods. TCP 2016 will be an important platform for inspiring international and interdisciplinary exchange at the forefront of Theoretical and Computational Physics. The Conference will bring together researchers, engineers, technicians and academicians from all over the world, and we cordially invite you to take this opportunity to join us for academic exchange and visit the ancient city of Xi’an.

  15. Computational Physics as a Path for Physics Education

    Science.gov (United States)

    Landau, Rubin H.

    2008-04-01

    Evidence and arguments will be presented that modifications in the undergraduate physics curriculum are necessary to maintain the long-term relevance of physics. Suggested will be a balance of analytic, experimental, computational, and communication skills that in many cases will require an increased inclusion of computation and its associated skill set into the undergraduate physics curriculum. The general arguments will be followed by a detailed enumeration of suggested subjects and student learning outcomes, many of which have already been adopted or advocated by the computational science community, and which permit high performance computing and communication. Several alternative models for how these computational topics can be incorporated into the undergraduate curriculum will be discussed. This includes enhanced topics in the standard existing courses, as well as stand-alone courses. Applications and demonstrations will be presented throughout the talk, as well as prototype video-based materials and electronic books.

  16. Fast acceleration of 2D wave propagation simulations using modern computational accelerators.

    Science.gov (United States)

    Wang, Wei; Xu, Lifan; Cavazos, John; Huang, Howie H; Kay, Matthew

    2014-01-01

    Recent developments in modern computational accelerators like Graphics Processing Units (GPUs) and coprocessors provide great opportunities for making scientific applications run faster than ever before. However, efficient parallelization of scientific code using new programming tools like CUDA requires a high level of expertise that is not available to many scientists. This, plus the fact that parallelized code is usually not portable to different architectures, creates major challenges for exploiting the full capabilities of modern computational accelerators. In this work, we sought to overcome these challenges by studying how to achieve both automated parallelization using OpenACC and enhanced portability using OpenCL. We applied our parallelization schemes using GPUs as well as Intel Many Integrated Core (MIC) coprocessor to reduce the run time of wave propagation simulations. We used a well-established 2D cardiac action potential model as a specific case-study. To the best of our knowledge, we are the first to study auto-parallelization of 2D cardiac wave propagation simulations using OpenACC. Our results identify several approaches that provide substantial speedups. The OpenACC-generated GPU code achieved more than 150x speedup above the sequential implementation and required the addition of only a few OpenACC pragmas to the code. An OpenCL implementation provided speedups on GPUs of at least 200x faster than the sequential implementation and 30x faster than a parallelized OpenMP implementation. An implementation of OpenMP on Intel MIC coprocessor provided speedups of 120x with only a few code changes to the sequential implementation. We highlight that OpenACC provides an automatic, efficient, and portable approach to achieve parallelization of 2D cardiac wave simulations on GPUs. Our approach of using OpenACC, OpenCL, and OpenMP to parallelize this particular model on modern computational accelerators should be applicable to other computational models of
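
    The parallelization approach described above amounts to annotating existing loop nests with OpenACC pragmas. The C sketch below illustrates that style on a generic explicit 2D stencil update (a 5-point Laplacian standing in for the far more detailed cardiac action potential model); it is an assumed example for illustration, not code from the paper.

```c
/* Illustration of pragma-only OpenACC parallelization of an explicit
 * 2D stencil update. The compiler generates the GPU kernel; the data
 * clauses move the arrays to and from the device around the loop nest. */
#include <stddef.h>

#define NX 1024
#define NY 1024

void step(const float *restrict u, float *restrict u_new, float dt, float h)
{
    #pragma acc parallel loop collapse(2) copyin(u[0:NX*NY]) copy(u_new[0:NX*NY])
    for (int i = 1; i < NX - 1; i++) {
        for (int j = 1; j < NY - 1; j++) {
            /* 5-point Laplacian of u at grid point (i, j). */
            float lap = (u[(i + 1) * NY + j] + u[(i - 1) * NY + j]
                       + u[i * NY + j + 1] + u[i * NY + j - 1]
                       - 4.0f * u[i * NY + j]) / (h * h);
            u_new[i * NY + j] = u[i * NY + j] + dt * lap;
        }
    }
}
```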

  17. Fast acceleration of 2D wave propagation simulations using modern computational accelerators.

    Directory of Open Access Journals (Sweden)

    Wei Wang

    Full Text Available Recent developments in modern computational accelerators like Graphics Processing Units (GPUs) and coprocessors provide great opportunities for making scientific applications run faster than ever before. However, efficient parallelization of scientific code using new programming tools like CUDA requires a high level of expertise that is not available to many scientists. This, plus the fact that parallelized code is usually not portable to different architectures, creates major challenges for exploiting the full capabilities of modern computational accelerators. In this work, we sought to overcome these challenges by studying how to achieve both automated parallelization using OpenACC and enhanced portability using OpenCL. We applied our parallelization schemes using GPUs as well as Intel Many Integrated Core (MIC) coprocessor to reduce the run time of wave propagation simulations. We used a well-established 2D cardiac action potential model as a specific case-study. To the best of our knowledge, we are the first to study auto-parallelization of 2D cardiac wave propagation simulations using OpenACC. Our results identify several approaches that provide substantial speedups. The OpenACC-generated GPU code achieved more than 150x speedup above the sequential implementation and required the addition of only a few OpenACC pragmas to the code. An OpenCL implementation provided speedups on GPUs of at least 200x faster than the sequential implementation and 30x faster than a parallelized OpenMP implementation. An implementation of OpenMP on Intel MIC coprocessor provided speedups of 120x with only a few code changes to the sequential implementation. We highlight that OpenACC provides an automatic, efficient, and portable approach to achieve parallelization of 2D cardiac wave simulations on GPUs. Our approach of using OpenACC, OpenCL, and OpenMP to parallelize this particular model on modern computational accelerators should be applicable to other

  18. A Staged Muon Accelerator Facility For Neutrino and Collider Physics

    CERN Document Server

    Delahaye, Jean-Pierre; Brice, Stephen; Bross, Alan David; Denisov, Dmitri; Eichten, Estia; Holmes, Stephen; Lipton, Ronald; Neuffer, David; Palmer, Mark Alan; Bogacz, S Alex; Huber, Patrick; Kaplan, Daniel M; Snopok, Pavel; Kirk, Harold G; Palmer, Robert B; Ryne, Robert D

    2015-01-01

    Muon-based facilities offer unique potential to provide capabilities at both the Intensity Frontier with Neutrino Factories and the Energy Frontier with Muon Colliders. They rely on a novel technology with challenging parameters, for which the feasibility is currently being evaluated by the Muon Accelerator Program (MAP). A realistic scenario for a complementary series of staged facilities with increasing complexity and significant physics potential at each stage has been developed. It takes advantage of and leverages the capabilities already planned for Fermilab, especially the strategy for long-term improvement of the accelerator complex being initiated with the Proton Improvement Plan (PIP-II) and the Long Baseline Neutrino Facility (LBNF). Each stage is designed to provide an R&D platform to validate the technologies required for subsequent stages. The rationale and sequence of the staging process and the critical issues to be addressed at each stage, are presented.

  19. CAS course on Advanced Accelerator Physics in Warsaw

    CERN Multimedia

    CERN Accelerator School

    2015-01-01

    The CERN Accelerator School (CAS) and the National Centre for Nuclear Research (NCBJ) recently organised a course on Advanced Accelerator Physics. The course was held in Warsaw, Poland from 27 September to 9 October 2015.    The course followed an established format with lectures in the mornings and practical courses in the afternoons. The lecture programme consisted of 34 lectures, supplemented by private study, tutorials and seminars. The practical courses provided ‘hands-on’ experience of three topics: ‘Beam Instrumentation and Diagnostics’, ‘RF Measurement Techniques’ and ‘Optics Design and Corrections’. Participants selected one of the three courses and followed their chosen topic throughout the duration of the school. Sixty-six students representing 18 nationalities attended this course, with most participants coming from European countries, but also from South Korea, Taiwan and Russia. Feedback from th...

  20. CAS course on advanced accelerator physics in Trondheim, Norway

    CERN Multimedia

    CERN Accelerator School

    2013-01-01

    The CERN Accelerator School (CAS) and the Norwegian University of Science and Technology (NTNU) recently organised a course on advanced accelerator physics. The course was held in Trondheim, Norway, from 18 to 29 August 2013. Accommodation and lectures were at the Hotel Britannia and practical courses were held at the university.   The course's format included lectures in the mornings and practical courses in the afternoons. The lecture programme consisted of 32 lectures supplemented by discussion sessions, private study and tutorials. The practical courses provided "hands-on" experience in three topics: RF measurement techniques, beam instrumentation and diagnostics, and optics design and corrections. Participants selected one of the three courses and followed the chosen topic throughout the course. The programme concluded with seminars and a poster session.  70 students representing 21 nationalities were selected from over 90 applicants, with most participa...

  1. Accelerator-based techniques for the support of senior-level undergraduate physics laboratories

    Science.gov (United States)

    Williams, J. R.; Clark, J. C.; Isaacs-Smith, T.

    2001-07-01

    Approximately three years ago, Auburn University replaced its aging Dynamitron accelerator with a new 2MV tandem machine (Pelletron) manufactured by the National Electrostatics Corporation (NEC). This new machine is maintained and operated for the University by Physics Department personnel, and the accelerator supports a wide variety of materials modification/analysis studies. Computer software is available that allows the NEC Pelletron to be operated from a remote location, and an Internet link has been established between the Accelerator Laboratory and the Upper-Level Undergraduate Teaching Laboratory in the Physics Department. Additional software supplied by Canberra Industries has also been used to create a second Internet link that allows live-time data acquisition in the Teaching Laboratory. Our senior-level undergraduates and first-year graduate students perform a number of experiments related to radiation detection and measurement as well as several standard accelerator-based experiments that have been added recently. These laboratory exercises will be described, and the procedures used to establish the Internet links between our Teaching Laboratory and the Accelerator Laboratory will be discussed.

  2. Computing for Heavy Ion Physics

    Energy Technology Data Exchange (ETDEWEB)

    Martinez, G.; Schiff, D.; Hristov, P.; Menaud, J.M.; Hrivnacova, I.; Poizat, P.; Chabratova, G.; Albin-Amiot, H.; Carminati, F.; Peters, A.; Schutz, Y.; Safarik, K.; Ollitrault, J.Y.; Hrivnacova, I.; Morsch, A.; Gheata, A.; Morsch, A.; Vande Vyvre, P.; Lauret, J.; Nief, J.Y.; Pereira, H.; Kaczmarek, O.; Conesa Del Valle, Z.; Guernane, R.; Stocco, D.; Gruwe, M.; Betev, L.; Baldisseri, A.; Vilakazi, Z.; Rapp, B.; Masoni, A.; Stoicea, G.; Brun, R

    2005-07-01

    This workshop was devoted to the computational technologies needed for the heavy quarkonia and open flavor production study at LHC (large hadron collider) experiments. These requirements are huge: petabytes of data will be generated each year. Analysing these data will require the equivalent of a few thousand of today's fastest PC processors. The new developments in terms of dedicated software have been addressed. This document gathers the transparencies that were presented at the workshop.

  3. Accelerating MATLAB with GPU computing a primer with examples

    CERN Document Server

    Suh, Jung W

    2013-01-01

    Beyond simulation and algorithm development, many developers increasingly use MATLAB even for product deployment in computationally heavy fields. This often demands that MATLAB codes run faster by leveraging the distributed parallelism of Graphics Processing Units (GPUs). While MATLAB successfully provides high-level functions as a simulation tool for rapid prototyping, the underlying details and knowledge needed for utilizing GPUs make MATLAB users hesitate to step into it. Accelerating MATLAB with GPUs offers a primer on bridging this gap. Starting with the basics, setting up MATLAB for

  4. The Computational Physics Program of the national MFE Computer Center

    Energy Technology Data Exchange (ETDEWEB)

    Mirin, A.A.

    1989-01-01

    Since June 1974, the MFE Computer Center has been engaged in a significant computational physics effort. The principal objective of the Computational Physics Group is to develop advanced numerical models for the investigation of plasma phenomena and the simulation of present and future magnetic confinement devices. Another major objective of the group is to develop efficient algorithms and programming techniques for current and future generations of supercomputers. The Computational Physics Group has been involved in several areas of fusion research. One main area is the application of Fokker-Planck/quasilinear codes to tokamaks. Another major area is the investigation of resistive magnetohydrodynamics in three dimensions, with applications to tokamaks and compact toroids. A third area is the investigation of kinetic instabilities using a 3-D particle code; this work is often coupled with the task of numerically generating equilibria which model experimental devices. Ways to apply statistical closure approximations to study tokamak-edge plasma turbulence have been under examination, with the hope of being able to explain anomalous transport. Also, we are collaborating in an international effort to evaluate fully three-dimensional linear stability of toroidal devices. In addition to these computational physics studies, the group has developed a number of linear systems solvers for general classes of physics problems and has been making a major effort at ascertaining how to efficiently utilize multiprocessor computers. A summary of these programs is included in this paper. 6 tabs.

  5. Proceedings of the workshop on B physics at hadron accelerators

    Energy Technology Data Exchange (ETDEWEB)

    McBride, P. [Superconducting Super Collider Lab., Dallas, TX (United States)]; Mishra, C.S. [Fermi National Accelerator Lab., Batavia, IL (United States)] [eds.]

    1993-12-31

    This report contains papers on the following topics: Measurement of Angle α; Measurement of Angle β; Measurement of Angle γ; Other B Physics; Theory of Heavy Flavors; Charged Particle Tracking and Vertexing; e and γ Detection; Muon Detection; Hadron ID; Electronics, DAQ, and Computing; and Machine Detector Interface. Selected papers have been indexed separately for inclusion in the Energy Science and Technology Database.

  6. Operational radiation protection in high-energy physics accelerators.

    Science.gov (United States)

    Rokni, S H; Fassò, A; Liu, J C

    2009-11-01

    An overview of operational radiation protection (RP) policies and practices at high-energy electron and proton accelerators used for physics research is presented. The different radiation fields and hazards typical of these facilities are described, as well as access control and radiation control systems. The implementation of an operational RP programme is illustrated, covering area and personnel classification and monitoring, radiation surveys, radiological environmental protection, management of induced radioactivity, radiological work planning and control, management of radioactive materials and wastes, facility dismantling and decommissioning, instrumentation and training.

  7. On-Chip Reconfigurable Hardware Accelerators for Popcount Computations

    Directory of Open Access Journals (Sweden)

    Valery Sklyarov

    2016-01-01

    Full Text Available Popcount computations are widely used in such areas as combinatorial search, data processing, statistical analysis, and bio- and chemical informatics. In many practical problems the size of initial data is very large and increase in throughput is important. The paper suggests two types of hardware accelerators that are (1) designed in FPGAs and (2) implemented in Zynq-7000 all programmable systems-on-chip with partitioning of algorithms that use popcounts between software of the ARM Cortex-A9 processing system and advanced programmable logic. A three-level system architecture that includes a general-purpose computer, the problem-specific ARM, and reconfigurable hardware is then proposed. The results of experiments and comparisons with existing benchmarks demonstrate that although throughput of popcount computations is increased in FPGA-based designs interacting with general-purpose computers, communication overheads (in experiments with PCI Express) are significant and actual advantages can be gained if not only popcount but also other types of relevant computations are implemented in hardware. The comparison of software/hardware designs for Zynq-7000 all programmable systems-on-chip with pure software implementations in the same Zynq-7000 devices demonstrates increase in performance by a factor ranging from 5 to 19 (taking into account all the involved communication overheads between the programmable logic and the processing systems).
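
    For reference, the popcount operation these accelerators target is just the number of set bits in a (possibly very large) bit vector. The C sketch below is a plain software baseline of the kind such hardware is benchmarked against, not the FPGA design from the paper; it relies on the GCC/Clang builtin and notes a portable bit-twiddling alternative in a comment.

```c
/* Software baseline for popcount over a large bit vector: count the set
 * bits in an array of 64-bit words. Hardware accelerators implement the
 * same reduction in FPGA logic. */
#include <stdint.h>
#include <stddef.h>

uint64_t popcount_array(const uint64_t *words, size_t n)
{
    uint64_t total = 0;
    for (size_t i = 0; i < n; i++) {
        total += (uint64_t)__builtin_popcountll(words[i]);
        /* Portable alternative without the builtin (SWAR technique):
         *   uint64_t x = words[i];
         *   x = x - ((x >> 1) & 0x5555555555555555ULL);
         *   x = (x & 0x3333333333333333ULL) + ((x >> 2) & 0x3333333333333333ULL);
         *   x = (x + (x >> 4)) & 0x0F0F0F0F0F0F0F0FULL;
         *   total += (x * 0x0101010101010101ULL) >> 56;
         */
    }
    return total;
}
```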

  8. Computational physics problem solving with Python

    CERN Document Server

    Landau, Rubin H; Bordeianu, Cristian C

    2015-01-01

    The use of computation and simulation has become an essential part of the scientific process. Being able to transform a theory into an algorithm requires significant theoretical insight, detailed physical and mathematical understanding, and a working level of competency in programming. This upper-division text provides an unusually broad survey of the topics of modern computational physics from a multidisciplinary, computational science point of view. Its philosophy is rooted in learning by doing (assisted by many model programs), with new scientific materials as well as with the Python progr

  9. Distance Computation Between Non-Holonomic Motions with Constant Accelerations

    Directory of Open Access Journals (Sweden)

    Enrique J. Bernabeu

    2013-09-01

    Full Text Available A method for computing the distance between two moving robots or between a mobile robot and a dynamic obstacle with linear or arc-like motions and with constant accelerations is presented in this paper. This distance is obtained without stepping or discretizing the motions of the robots or obstacles. The robots and obstacles are modelled by convex hulls. This technique obtains the future instant in time when two moving objects will be at their minimum translational distance, i.e., at their minimum separation or maximum penetration (if they will collide). This distance and the future instant in time are computed in parallel. This method is intended to be run each time new information from the world is received and, consequently, it can be used for generating collision-free trajectories for non-holonomic mobile robots.
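
    To see why no stepping of the motions is needed, consider the simplified case of two point objects with constant accelerations (the paper itself works with convex hulls). The instant of minimum separation then satisfies a closed-form cubic equation, sketched below as background; this derivation is ours, not reproduced from the paper.

```latex
% Relative motion of two point objects with constant accelerations:
%   \Delta p = p_2(0) - p_1(0), \quad \Delta v = v_2(0) - v_1(0), \quad \Delta a = a_2 - a_1.
\[
  d(t) = \Delta p + \Delta v\,t + \tfrac{1}{2}\,\Delta a\,t^{2},
  \qquad D(t) = \lVert d(t)\rVert^{2}.
\]
% Setting D'(t) = 2\, d(t)\cdot(\Delta v + \Delta a\,t) = 0 gives a cubic in t:
\[
  \tfrac{1}{2}\lVert\Delta a\rVert^{2}\,t^{3}
  + \tfrac{3}{2}\,(\Delta v\cdot\Delta a)\,t^{2}
  + \bigl(\lVert\Delta v\rVert^{2} + \Delta p\cdot\Delta a\bigr)\,t
  + \Delta p\cdot\Delta v = 0,
\]
% whose real roots within the planning horizon are the candidate instants of
% minimum separation, so no discretization of the trajectories is required.
```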

  10. European Strategy for Accelerator-Based Neutrino Physics

    CERN Document Server

    Bertolucci, Sergio; Cervera, Anselmo; Donini, Andrea; Dracos, Marcos; Duchesneau, Dominique; Dufour, Fanny; Edgecock, Rob; Efthymiopoulos, Ilias; Gschwendtner, Edda; Kudenko, Yury; Long, Ken; Maalampi, Jukka; Mezzetto, Mauro; Pascoli, Silvia; Palladino, Vittorio; Rondio, Ewa; Rubbia, Andre; Rubbia, Carlo; Stahl, Achim; Stanco, Luca; Thomas, Jenny; Wark, David; Wildner, Elena; Zito, Marco

    2012-01-01

    Massive neutrinos reveal physics beyond the Standard Model, which could have deep consequences for our understanding of the Universe. Their study should therefore receive the highest level of priority in the European Strategy. The discovery and study of leptonic CP violation and precision studies of the transitions between neutrino flavours require high intensity, high precision, long baseline accelerator neutrino experiments. The community of European neutrino physicists involved in oscillation experiments is strong enough to support a major neutrino long baseline project in Europe, and has an ambitious, competitive and coherent vision to propose. Following the 2006 European Strategy for Particle Physics (ESPP) recommendations, two complementary design studies have been carried out: LAGUNA/LBNO, focused on deep underground detector sites, and EUROnu, focused on high intensity neutrino facilities. LAGUNA/LBNO recommends, as a first step, a conventional neutrino beam CN2PY from a CERN SPS North Area Neutrino Fac...

  11. The DOE Accelerated Strategic Computing Initiative: Challenges and opportunities for predictive materials simulation capabilities

    Science.gov (United States)

    Mailhiot, Christian

    1998-05-01

    In response to the unprecedented national security challenges emerging from the end of nuclear testing, the Defense Programs of the Department of Energy has developed a long-term strategic plan based on a vigorous Science-Based Stockpile Stewardship (SBSS) program. The main objective of the SBSS program is to ensure confidence in the performance, safety, and reliability of the stockpile on the basis of a fundamental science-based approach. A central element of this approach is the development of predictive, ‘full-physics’, full-scale computer simulation tools. As a critical component of the SBSS program, the Accelerated Strategic Computing Initiative (ASCI) was established to provide the required advances in computer platforms and to enable predictive, physics-based simulation capabilities. In order to achieve the ASCI goals, fundamental problems in the fields of computer and physical sciences of great significance to the entire scientific community must be successfully solved. Foremost among the key elements needed to develop predictive simulation capabilities, the development of improved physics-based materials models is a cornerstone. We indicate some of the materials theory, modeling, and simulation challenges and illustrate how the ASCI program will enable both the hardware and the software tools necessary to advance the state-of-the-art in the field of computational condensed matter and materials physics.

  12. Delivering Insight The History of the Accelerated Strategic Computing Initiative

    Energy Technology Data Exchange (ETDEWEB)

    Larzelere II, A R

    2007-01-03

    The history of the Accelerated Strategic Computing Initiative (ASCI) tells of the development of computational simulation into a third fundamental piece of the scientific method, on a par with theory and experiment. ASCI did not invent the idea, nor was it alone in bringing it to fruition. But ASCI provided the wherewithal - hardware, software, environment, funding, and, most of all, the urgency - that made it happen. On October 1, 2005, the Initiative completed its tenth year of funding. The advances made by ASCI over its first decade are truly incredible. Lawrence Livermore, Los Alamos, and Sandia National Laboratories, along with leadership provided by the Department of Energy's Defense Programs Headquarters, fundamentally changed computational simulation and how it is used to enable scientific insight. To do this, astounding advances were made in simulation applications, computing platforms, and user environments. ASCI dramatically changed existing - and forged new - relationships, both among the Laboratories and with outside partners. By its tenth anniversary, despite daunting challenges, ASCI had accomplished all of the major goals set at its beginning. The history of ASCI is about the vision, leadership, endurance, and partnerships that made these advances possible.

  13. 18th International Workshop on Advanced Computing and Analysis Techniques in Physics Research

    CERN Document Server

    2017-01-01

    The 18th edition of ACAT will bring together experts to explore and confront the boundaries of computing, automated data analysis, and theoretical calculation technologies, in particle and nuclear physics, astronomy and astrophysics, cosmology, accelerator science and beyond. ACAT provides a unique forum where these disciplines overlap with computer science, allowing for the exchange of ideas and the discussion of cutting-edge computing, data analysis and theoretical calculation technologies in fundamental physics research.

  14. Developing the Physics Design for NDCX-II, a Unique Pulse-Compressing Ion Accelerator

    Energy Technology Data Exchange (ETDEWEB)

    Friedman, A; Barnard, J J; Cohen, R H; Grote, D P; Lund, S M; Sharp, W M; Faltens, A; Henestroza, E; Jung, J; Kwan, J W; Lee, E P; Leitner, M A; Logan, B G; Vay, J -; Waldron, W L; Davidson, R C; Dorf, M; Gilson, E P; Kaganovich, I

    2009-09-24

    The Heavy Ion Fusion Science Virtual National Laboratory (a collaboration of LBNL, LLNL, and PPPL) is using intense ion beams to heat thin foils to the 'warm dense matter' regime at ≲1 eV, and is developing capabilities for studying target physics relevant to ion-driven inertial fusion energy. The need for rapid target heating led to the development of plasma-neutralized pulse compression, with current amplification factors exceeding 50 now routine on the Neutralized Drift Compression Experiment (NDCX). Construction of an improved platform, NDCX-II, has begun at LBNL with planned completion in 2012. Using refurbished induction cells from the Advanced Test Accelerator at LLNL, NDCX-II will compress a ~500 ns pulse of Li+ ions to ~1 ns while accelerating it to 3-4 MeV over ~15 m. Strong space charge forces are incorporated into the machine design at a fundamental level. We are using analysis, an interactive 1D PIC code (ASP) with optimizing capabilities and centroid tracking, and multi-dimensional Warp PIC simulations, to develop the NDCX-II accelerator. This paper describes the computational models employed, and the resulting physics design for the accelerator.

  15. Nanophotonic information physics nanointelligence and nanophotonic computing

    CERN Document Server

    2014-01-01

    This book provides a new direction in the field of nano-optics and nanophotonics from information and computing-related sciences and technology. Entitled "Information Physics and Computing in Nanoscale Photonics and Materials" (IPCN for short), the book aims to bring together recent progress in the intersection of nano-scale photonics, information, and enabling technologies. The topics include (1) an overview of information physics in nanophotonics, (2) DNA self-assembled nanophotonic systems, (3) functional molecular sensing, (4) smart fold computing, an architecture for nanophotonics, (5) semiconductor nanowires and their photonic applications, (6) single photoelectron manipulation in imaging sensors, (7) hierarchical nanophotonic systems, (8) photonic neuromorphic computing, and (9) SAT solvers and decision making based on nanophotonics.

  16. CAS Introduction to Accelerator Physics in the Czech Republic

    CERN Multimedia

    CERN Accelerator School

    2014-01-01

    The CERN Accelerator School (CAS) and the Czech Technical University in Prague jointly organised the Introduction to Accelerator Physics course in Prague, Czech Republic from 31 August to 12 September 2014.   The course was held in the Hotel Don Giovanni on the outskirts of the city, and was attended by 111 participants of 29 nationalities, from countries as far away as Armenia, Argentina, Canada, Iceland, Thailand and Russia. The intensive programme comprised 41 lectures, 3 seminars, 4 tutorials and 6 hours of guided and private study. A poster session and a 1-minute/1-slide session were also included in the programme, where the students were able to present their work. Feedback from the students was very positive, praising the expertise of the lecturers, as well as the high standard and quality of their lectures. During the second week, the afternoon lectures were held in the Czech Technical University in Prague. In addition to the academic programme, the students had the opportunity to vis...

  17. Computational algorithms for multiphase magnetohydrodynamics and applications to accelerator targets

    Directory of Open Access Journals (Sweden)

    R.V. Samulyak

    2010-01-01

    Full Text Available An interface-tracking numerical algorithm for the simulation of magnetohydrodynamic multiphase/free surface flows in the low-magnetic-Reynolds-number approximation (Samulyak R., Du J., Glimm J., Xu Z., J. Comp. Phys., 2007, 226, 1532) is described. The algorithm has been implemented in the multi-physics code FronTier and used for the simulation of MHD processes in liquids and weakly ionized plasmas. In this paper, numerical simulations of a liquid mercury jet entering a strong and nonuniform magnetic field and interacting with a powerful proton pulse have been performed and compared with experiments. Such a mercury jet is a prototype of the proposed Muon Collider/Neutrino Factory, a future particle accelerator. Simulations demonstrate the elliptic distortion of the mercury jet as it enters the magnetic solenoid at a small angle to the magnetic axis, jet-surface instabilities (filamentation) induced by the interaction with proton pulses, and the stabilizing effect of the magnetic field.

  18. Physics, Computer Science and Mathematics Division annual report, 1 January--31 December 1975. [LBL

    Energy Technology Data Exchange (ETDEWEB)

    Lepore, J.L. (ed.)

    1975-01-01

    This annual report describes the scientific research and other work carried out during the calendar year 1975. The report is nontechnical in nature, with almost no data. A 17-page bibliography lists the technical papers which detail the work. The contents of the report include the following: experimental physics (high-energy physics--SPEAR, PEP, SLAC, FNAL, BNL, Bevatron; particle data group; medium-energy physics; astrophysics, astronomy, and cosmic rays; instrumentation development), theoretical physics (particle theory and accelerator theory and design), computer science and applied mathematics (data management systems, socio-economic environment demographic information system, computer graphics, computer networks, management information systems, computational physics and data analysis, mathematical modeling, programing languages, applied mathematics research), real-time systems (ModComp and PDP networks), and computer center activities (systems programing, user services, hardware development, computer operations). A glossary of computer science and mathematics terms is also included. 32 figures. (RWR)

  19. A complexity view into the physics of precursory accelerating seismicity.

    Science.gov (United States)

    Vallianatos, Filippos; Chatzopoulos, George

    2017-04-01

    Strong observational indications support the hypothesis that many large earthquakes are preceded by accelerating seismic release rates which are described by a power-law time-to-failure relation. In the present work, a unified theoretical framework is discussed based on the ideas of non-extensive statistical physics along with fundamental principles of physics such as the conservation of energy in a faulted crustal volume undergoing stress loading. We derive the time-to-failure power-law of cumulative energy released in a fault system that obeys a hierarchical distribution law extracted from Tsallis entropy. Considering the analytic conditions near the time of failure, we derive from first principles the time-to-failure power-law and show that a common critical exponent m(q) exists, which is a function of the non-extensive entropic parameter q. We conclude that the cumulative precursory parameters are functions of the energy supplied to the system and the size of the precursory volume. In addition, the q-exponential distribution which describes the fault system is a crucial factor in the appearance of power-law acceleration in the seismicity. Our results, based on Tsallis entropy and energy conservation, give a new view of the empirical laws derived. References: Vallianatos F., Papadakis G., Michas G., 2016. Generalized statistical mechanics approaches to earthquakes and tectonics. Proc. R. Soc. A, 472, 20160497. Tzanis A. and Vallianatos F., 2003. Distributed power-law seismicity changes and crustal deformation in the EW Hellenic Arc. Natural Hazards and Earth Systems Sciences, 3, 179-195.
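
    For context, the time-to-failure power law referred to above is conventionally written in the following empirical form; this is the standard accelerating-seismic-release expression given as background, not an equation quoted from the abstract.

```latex
% Cumulative precursory release (e.g. Benioff strain or energy) before failure at time t_f:
\[
  \Omega(t) = A + B\,(t_f - t)^{m}, \qquad 0 < m < 1,\; B < 0,
\]
% so the release rate d\Omega/dt grows without bound as t \to t_f. The abstract's
% result is that the exponent m = m(q) is fixed by the non-extensive (Tsallis)
% entropic index q characterizing the fault system.
```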

  20. Computer simulation in physics and engineering

    CERN Document Server

    Steinhauser, Martin Oliver

    2013-01-01

    This work is a much-needed reference for widely used techniques and methods of computer simulation in physics and other disciplines, such as materials science. It conveys both the theoretical foundations of computer simulation and the applications and "tricks of the trade" that are often scattered across various papers. Thus it will meet a need and fill a gap for every scientist who needs computer simulations for the task at hand. In addition to being a reference, case studies and exercises for use as course reading are included.

  1. Accelerating Development of EV Batteries Through Computer-Aided Engineering (Presentation)

    Energy Technology Data Exchange (ETDEWEB)

    Pesaran, A.; Kim, G. H.; Smith, K.; Santhanagopalan, S.

    2012-12-01

    The Department of Energy's Vehicle Technology Program has launched the Computer-Aided Engineering for Automotive Batteries (CAEBAT) project to work with national labs, industry and software vendors to develop sophisticated software. As coordinator, NREL has teamed with a number of companies to help improve and accelerate battery design and production. This presentation provides an overview of CAEBAT, including its predictive computer simulation of Li-ion batteries known as the Multi-Scale Multi-Dimensional (MSMD) model framework. MSMD's modular, flexible architecture connects the physics of battery charge/discharge processes, thermal control, safety and reliability in a computationally efficient manner. This allows independent development of submodels at the cell and pack levels.

  2. Computer simulation of 2-D and 3-D ion beam extraction and acceleration

    Energy Technology Data Exchange (ETDEWEB)

    Ido, Shunji; Nakajima, Yuji [Saitama Univ., Urawa (Japan). Faculty of Engineering

    1997-03-01

    The two-dimensional code and the three-dimensional code have been developed to study the physical features of the ion beams in the extraction and acceleration stages. By using the two-dimensional code, the design of the first electrode (plasma grid) is examined with regard to the beam divergence. In the computational studies using the three-dimensional code, the off-axis model of the ion beam is investigated. It is found that the deflection angle of the ion beam is proportional to the gap displacement of the electrodes. (author)

  3. Quantum algorithms for computational nuclear physics

    Directory of Open Access Journals (Sweden)

    Višňák Jakub

    2015-01-01

    Full Text Available While quantum algorithms have been studied as an efficient tool for stationary state energy determination in the case of molecular quantum systems, no similar study for analogous problems in computational nuclear physics (computation of energy levels of nuclei from empirical nucleon-nucleon or quark-quark potentials) has been realized yet. Although the difference between the above-mentioned studies might seem negligible, it will be examined. First steps towards a particular simulation (on a classical computer) of the Iterative Phase Estimation Algorithm for deuterium and tritium nuclei energy level computation will be carried out with the aim to prove algorithm feasibility (and extensibility to heavier nuclei) for its possible practical realization on a real quantum computer.

  4. Statistical and computational challenges in physical mapping

    Energy Technology Data Exchange (ETDEWEB)

    Nelson, D.O.; Speed, T.P.

    1994-06-01

    One of the great success stories of modern molecular genetics has been the ability of biologists to isolate and characterize the genes responsible for serious inherited diseases like Huntington's disease, cystic fibrosis, and myotonic dystrophy. Instrumental in these efforts has been the construction of so-called "physical maps" of large regions of human chromosomes. Constructing a physical map of a chromosome presents a number of interesting challenges to the computational statistician. In addition to the general ill-posedness of the problem, complications include the size of the data sets, computational complexity, and the pervasiveness of experimental error. The nature of the problem and the presence of many levels of experimental uncertainty make statistical approaches to map construction appealing. Simultaneously, however, the size and combinatorial complexity of the problem make such approaches computationally demanding. In this paper we discuss what physical maps are and describe three different kinds of physical maps, outlining issues which arise in constructing them. In addition, we describe our experience with powerful, interactive statistical computing environments. We found that the ability to create high-level specifications of proposed algorithms which could then be directly executed provided a flexible rapid prototyping facility for developing new statistical models and methods. The ability to check the implementation of an algorithm by comparing its results to that of an executable specification enabled us to rapidly debug both specification and implementation in an environment of changing needs.

  5. Computer Algebra Recipes for Mathematical Physics

    CERN Document Server

    Enns, Richard H

    2005-01-01

    Over two hundred novel and innovative computer algebra worksheets or "recipes" will enable readers in engineering, physics, and mathematics to easily and rapidly solve and explore most problems they encounter in their mathematical physics studies. While the aim of this text is to illustrate applications, a brief synopsis of the fundamentals for each topic is presented, the topics being organized to correlate with those found in traditional mathematical physics texts. The recipes are presented in the form of stories and anecdotes, a pedagogical approach that makes a mathematically challenging subject easier and more fun to learn. Key features: * Uses the MAPLE computer algebra system to allow the reader to easily and quickly change the mathematical models and the parameters and then generate new answers * No prior knowledge of MAPLE is assumed; the relevant MAPLE commands are introduced on a need-to-know basis * All MAPLE commands are indexed for easy reference * A classroom-tested story/anecdote format is use...

  6. Topics in radiation at accelerators: Radiation physics for personnel and environmental protection

    Energy Technology Data Exchange (ETDEWEB)

    Cossairt, J.D.

    1996-10-01

    In the first chapter, terminology, physical and radiological quantities, and units of measurement used to describe the properties of accelerator radiation fields are reviewed. The general considerations of primary radiation fields pertinent to accelerators are discussed. The primary radiation fields produced by electron beams are described qualitatively and quantitatively. In the same manner the primary radiation fields produced by proton and ion beams are described. Subsequent chapters describe: shielding of electrons and photons at accelerators; shielding of proton and ion accelerators; low energy prompt radiation phenomena; induced radioactivity at accelerators; topics in radiation protection instrumentation at accelerators; and accelerator radiation protection program elements.

  7. Grid computing in high energy physics

    CERN Document Server

    Avery, P

    2004-01-01

    Over the next two decades, major high energy physics (HEP) experiments, particularly at the Large Hadron Collider, will face unprecedented challenges to achieving their scientific potential. These challenges arise primarily from the rapidly increasing size and complexity of HEP datasets that will be collected and the enormous computational, storage and networking resources that will be deployed by global collaborations in order to process, distribute and analyze them. Coupling such vast information technology resources to globally distributed collaborations of several thousand physicists requires extremely capable computing infrastructures supporting several key areas: (1) computing (providing sufficient computational and storage resources for all processing, simulation and analysis tasks undertaken by the collaborations); (2) networking (deploying high speed networks to transport data quickly between institutions around the world); (3) software (supporting simple and transparent access to data and software r...

  8. Limitation of computational resource as physical principle

    CERN Document Server

    Ozhigov, Y I

    2003-01-01

    Limitation of computational resources is considered as a universal principle that, for simulation, is as fundamental as physical laws. It claims that all experimentally verifiable implications of physical laws can be simulated by effective classical algorithms. This is demonstrated through a completely deterministic approach proposed for the simulation of biopolymer assembly. The state of a molecule during its assembly is described in terms of a reduced density matrix permitting only limited tunneling. An assembly is treated as a sequence of elementary scatterings of simple molecules from the environment on the point of assembly. Decoherence is treated as a forced measurement of the quantum state resulting from the shortage of computational resources. All measurement results are determined by a choice from a limited number of special options of a nonphysical nature, which stay unchanged until the completion of assembly; no random number generators are used. Observations of equal states during the ...

  9. MATLAB and ACS: Connecting two worlds of accelerator physics

    Energy Technology Data Exchange (ETDEWEB)

    Marsching, Sebastian; Fitterer, Miriam; Hillenbrand, Steffen; Hiller, Nicole; Hofmann, Andre; Klein, Marit; Sonnad, Kiran [Laboratorium fuer Applikationen der Synchrotronstrahlung, Universitaet Karlsruhe (Germany); Huttel, Erhard; Smale, Nigel [Institut fuer Synchrotronstrahlung, Forschungszentrum Karlsruhe (Germany); Mueller, Anke-Susanne [Laboratorium fuer Applikationen der Synchrotronstrahlung, Universitaet Karlsruhe (Germany); Institut fuer Synchrotronstrahlung, Forschungszentrum Karlsruhe (Germany)

    2009-07-01

    In the world of accelerator physics there is a vast amount of different software tools based on different platforms. At ANKA, the synchrotron radiation source at the Forschungszentrum Karlsruhe, a Java based software system is used to monitor and control the storage ring. While this system is based on ALMA Common Software, a component framework using CORBA and supporting Java, C++ and Python, many simulation tools are based on MATLAB and therefore no direct interoperation is possible. In order to integrate existing simulation tools with the control and monitoring system, a bridge that mediates between both worlds has been created. Thus simulation tools can use live data from the monitoring system and the control system can use simulation tools to improve automatic adjustment of operation parameters. This talk provides an insight into the concepts of this bridge approach and how it is used at ANKA to improve the beam quality for beam line users especially in the low-{alpha} mode providing coherent terahertz radiation.

  10. Computation of Normal Conducting and Superconducting Linear Accelerator (LINAC) Availabilities

    Energy Technology Data Exchange (ETDEWEB)

    Haire, M.J.

    2000-07-11

    A brief study was conducted to roughly estimate the availability of a superconducting (SC) linear accelerator (LINAC) as compared to a normal conducting (NC) one. Potentially, SC radio frequency cavities have substantial reserve capability, which allows them to compensate for failed cavities, thus increasing the availability of the overall LINAC. In the initial SC design, there is a klystron and associated equipment (e.g., power supply) for every cavity of an SC LINAC. On the other hand, a single klystron may service eight cavities in the NC LINAC. This study modeled that portion of the Spallation Neutron Source LINAC (between 200 and 1,000 MeV) that is initially proposed for conversion from NC to SC technology. Equipment common to both designs was not evaluated. Tabular fault-tree calculations and event-driven simulation (EDS) computer calculations were performed. The estimated gain in availability when using the SC option ranges from 3 to 13%, depending on equipment conditions and spatial separation requirements. The availability of an NC LINAC is estimated to be 83%. Tabular fault-tree calculations and EDS modeling gave the same 83% answer to within one-tenth of a percent for the NC case. Tabular fault-tree calculations of the availability of the SC LINAC (where a klystron and associated equipment drive a single cavity) give 97%, whereas EDS computer calculations give 96%, a disagreement of only 1%. This result may be somewhat fortuitous because of limitations of tabular fault-tree calculations. For example, tabular fault-tree calculations cannot handle spatial effects (separation distance between failures), equipment network configurations, and some failure combinations. EDS computer modeling of various equipment configurations was examined. When there is a klystron and associated equipment for every cavity and adjacent-cavity failure can be tolerated, the SC availability was estimated to be 96%. SC availability decreased as
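
    The kind of availability arithmetic involved can be illustrated with a small toy calculation. The sketch below is not the report's fault-tree or EDS model: the MTBF/MTTR figures, station count and simple k-out-of-n redundancy rule are assumptions chosen only to show how reserve capability raises overall availability.

```python
# Illustrative toy availability model (assumed numbers, not the report's data):
# a chain of RF stations, with and without the ability to tolerate a few failed stations.
from math import comb

def station_availability(mtbf_hours, mttr_hours):
    """Steady-state availability of a single repairable unit."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

def linac_availability(n_stations, a_station, failures_tolerated=0):
    """Availability of a LINAC that tolerates up to `failures_tolerated` failed
    stations (k-out-of-n logic), assuming independent failures."""
    ok = 0.0
    for failed in range(failures_tolerated + 1):
        ok += comb(n_stations, failed) * (1 - a_station) ** failed \
              * a_station ** (n_stations - failed)
    return ok

a_klystron = station_availability(mtbf_hours=2000.0, mttr_hours=8.0)   # assumed figures
# NC-like case: every station must work (no reserve capability)
print("no redundancy:", linac_availability(100, a_klystron, failures_tolerated=0))
# SC-like case: reserve gradient lets a few failed cavities be compensated
print("3 tolerated  :", linac_availability(100, a_klystron, failures_tolerated=3))
```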

  11. Pathways through applied and computational physics

    CERN Document Server

    Barbero, Nicolò; Palmisano, Carlo; Zosi, Gianfranco

    2014-01-01

    This book is intended for undergraduates and young researchers who wish to understand the role that different branches of physics and mathematics play in the execution of actual experiments. The unique feature of the book is that all the subjects addressed are strictly interconnected within the context of the execution of a single experiment with very high accuracy, namely the redetermination of the Avogadro constant NA, one of the fundamental physical constants. The authors illustrate how the basic laws of physics are applied to describe the behavior of the quantities involved in the measurement of NA and explain the mathematical reasoning and computational tools that have been exploited. It is emphasized that all these quantities, although pertaining to a specific experiment, are of wide and general interest. The book is organized into chapters covering the interaction of electromagnetic radiation with single crystals, linear elasticity and anisotropy, propagation of thermal energy, anti-vibration mounting ...

  12. Physics, Computer Science and Mathematics Division. Annual report, January 1-December 31, 1980

    Energy Technology Data Exchange (ETDEWEB)

    Birge, R.W.

    1981-12-01

    Research in the physics, computer science, and mathematics division is described for the year 1980. While the division's major effort remains in high energy particle physics, there is a continually growing program in computer science and applied mathematics. Experimental programs are reported in e+e- annihilation, muon and neutrino reactions at FNAL, search for effects of a right-handed gauge boson, limits on neutrino oscillations from muon-decay neutrinos, strong interaction experiments at FNAL, strong interaction experiments at BNL, particle data center, Barrelet moment analysis of πN scattering data, astrophysics and astronomy, earth sciences, and instrument development and engineering for high energy physics. In theoretical physics research, studies included particle physics and accelerator physics. Computer science and mathematics research included analytical and numerical methods, information analysis techniques, advanced computer concepts, and environmental and epidemiological studies. (GHT)

  13. Physics of new methods of charged particle acceleration collective effects in dense charged particle ensembles

    CERN Document Server

    Bonch-Osmolovsky, A G

    1994-01-01

    This volume discusses the theory of new methods of charged particle acceleration and its physical and mathematical descriptions. It examines some collective effects in dense charged particle ensembles, and traces the history of the development of the field of accelerator physics.

  14. Genetic algorithms and their applications in accelerator physics

    Energy Technology Data Exchange (ETDEWEB)

    Hofler, Alicia S. [JLAB

    2013-12-01

    Multi-objective optimization techniques are widely used in an extremely broad range of fields. Genetic optimization for multi-objective problems was introduced to the accelerator community relatively recently and quickly spread, becoming a fundamental tool for multi-dimensional optimization problems. This discussion introduces the basics of the technique and reviews applications to accelerator problems.
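
    For concreteness, a minimal multi-objective genetic optimizer of the kind reviewed here can be sketched as follows. The two toy objectives (standing in for competing accelerator figures of merit such as emittance and bunch length), the Pareto-selection rule and all GA parameters are illustrative assumptions, not taken from the paper.

```python
# Minimal multi-objective genetic-algorithm sketch (illustrative assumptions throughout).
import random

def objectives(x):
    """Two competing toy objectives over a 4-parameter 'lattice' x in [0, 1]^4."""
    f1 = sum((xi - 0.2) ** 2 for xi in x)          # stands in for an emittance-like figure
    f2 = sum((xi - 0.8) ** 2 for xi in x)          # stands in for a bunch-length-like figure
    return (f1, f2)

def dominates(a, b):
    return all(ai <= bi for ai, bi in zip(a, b)) and any(ai < bi for ai, bi in zip(a, b))

def evolve(pop_size=40, n_gen=60, n_dim=4, mut=0.1):
    pop = [[random.random() for _ in range(n_dim)] for _ in range(pop_size)]
    for _ in range(n_gen):
        scored = [(ind, objectives(ind)) for ind in pop]
        # keep the non-dominated individuals (crude Pareto selection)
        front = [ind for ind, f in scored
                 if not any(dominates(g, f) for _, g in scored)]
        children = []
        while len(front) + len(children) < pop_size:
            p1, p2 = random.sample(front, 2) if len(front) > 1 else (front[0], front[0])
            cut = random.randrange(n_dim)
            child = p1[:cut] + p2[cut:]                                  # one-point crossover
            child = [min(1.0, max(0.0, xi + random.gauss(0, mut))) for xi in child]
            children.append(child)
        pop = front + children
    return [(ind, objectives(ind)) for ind in pop]

pareto_population = evolve()
```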

  15. 179th International School of Physics "Enrico Fermi" : Laser-Plasma Acceleration

    CERN Document Server

    Gizzi, L A; Faccini, R

    2012-01-01

    Impressive progress has been made in the field of laser-plasma acceleration in the last decade, with outstanding achievements from both experimental and theoretical viewpoints. Closely exploiting the development of ultra-intense, ultrashort pulse lasers, laser-plasma acceleration has developed rapidly, achieving accelerating gradients of the order of tens of GeV/m, and making the prospect of miniature accelerators a more realistic possibility. This book presents the lectures delivered at the Enrico Fermi International School of Physics and summer school: "Laser-Plasma Acceleration" , held in Varenna, Italy, in June 2011. The school provided an opportunity for young scientists to experience the best from the worlds of laser-plasma and accelerator physics, with intensive training and hands-on opportunities related to key aspects of laser-plasma acceleration. Subjects covered include: the secrets of lasers; the power of numerical simulations; beam dynamics; and the elusive world of laboratory plasmas. The object...

  16. Computer modeling of test particle acceleration at oblique shocks

    Science.gov (United States)

    Decker, Robert B.

    1988-01-01

    The present evaluation of the basic techniques and illustrative results of charged-particle-modeling numerical codes suitable for particle acceleration at oblique, fast-mode collisionless shocks emphasizes the treatment of ions as test particles, calculating particle dynamics through numerical integration along exact phase-space orbits. Attention is given to the acceleration of particles at planar, infinitesimally thin shocks, as well as to plasma simulations in which low-energy ions are injected and accelerated at quasi-perpendicular shocks with internal structure.
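
    The test-particle approach amounts to integrating the Lorentz-force equation of motion through prescribed electric and magnetic fields. The sketch below is a generic illustration (not the paper's code), using the standard Boris algorithm with assumed uniform fields and a placeholder proton.

```python
# Generic test-particle integrator (illustrative only): push one proton through
# prescribed uniform E and B fields with the Boris scheme.
import numpy as np

def boris_push(x, v, qm, E, B, dt, n_steps):
    """Integrate dx/dt = v, dv/dt = (q/m)(E + v x B) with the Boris algorithm."""
    traj = [x.copy()]
    for _ in range(n_steps):
        v_minus = v + 0.5 * qm * E * dt             # first half of the electric kick
        t_vec = 0.5 * qm * B * dt                   # magnetic rotation vector
        s_vec = 2.0 * t_vec / (1.0 + np.dot(t_vec, t_vec))
        v_prime = v_minus + np.cross(v_minus, t_vec)
        v_plus = v_minus + np.cross(v_prime, s_vec) # full magnetic rotation
        v = v_plus + 0.5 * qm * E * dt              # second half of the electric kick
        x = x + v * dt
        traj.append(x.copy())
    return np.array(traj), v

qm = 9.58e7                                   # proton charge-to-mass ratio, C/kg
E = np.array([0.0, 2.0e-3, 0.0])              # V/m (assumed, solar-wind-like motional field)
B = np.array([0.0, 0.0, 5.0e-9])              # T   (assumed, interplanetary-like field)
x0 = np.zeros(3)
v0 = np.array([4.0e5, 0.0, 0.0])              # m/s, assumed initial ion velocity
trajectory, v_final = boris_push(x0, v0, qm, E, B, dt=0.1, n_steps=5000)
```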

  17. Computational and Physical Analysis of Catalytic Compounds

    Science.gov (United States)

    Wu, Richard; Sohn, Jung Jae; Kyung, Richard

    2015-03-01

    Nanoparticles exhibit unique physical and chemical properties depending on their geometrical properties. For this reason, synthesis of nanoparticles with controlled shape and size is important for exploiting their unique properties. Catalyst supports are usually made of high-surface-area porous oxides or carbon nanomaterials. These support materials stabilize metal catalysts against sintering at high reaction temperatures. Many studies have demonstrated large enhancements of catalytic behavior due to the role of the oxide-metal interface. In this paper, the catalytic ability of supported nano metal oxides, such as silicon oxide and titanium oxide compounds, has been analyzed using computational chemistry methods. Computational programs such as Gamess and Chemcraft have been used in an effort to compute the efficiencies of the catalytic compounds and the bonding energy changes during the optimization convergence. The results illustrate how the metal oxides stabilize and the steps involved. A graph of energy (kcal/mol) versus computation step (N) shows that the energy of the titania converges faster, at the 7th iteration, whereas the silica converges at the 9th iteration.

  18. An introduction to the Physics of High Energy Accelerators

    CERN Document Server

    Edwards, Donald A

    1993-01-01

    The first half deals with the motion of a single particle under the influence of electric and magnetic fields. The basic language of linear and circular accelerators is developed. The principle of phase stability is introduced along with phase oscillations in linear accelerators and synchrotrons. Presents a treatment of betatron oscillations followed by an excursion into nonlinear dynamics and its application to accelerators. The second half discusses intensity-dependent effects, particularly space charge and coherent instabilities. Includes tables of parameters for a selection of accelerato

  19. Computational applications of DNA physical scales

    DEFF Research Database (Denmark)

    Baldi, Pierre; Chauvin, Yves; Brunak, Søren

    1998-01-01

    The authors study from a computational standpoint several different physical scales associated with structural features of DNA sequences, including dinucleotide scales such as base stacking energy and propeller twist, and trinucleotide scales such as bendability and nucleosome positioning. We show that these scales provide an alternative or complementary compact representation of DNA sequences. As an example we construct a strand-invariant representation of DNA sequences. The scales can also be used to analyze and discover new DNA structural patterns, especially in combination with hidden Markov models...

  20. Physics and Novel Schemes of Laser Radiation Pressure Acceleration for Quasi-monoenergetic Proton Generation

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Chuan S. [Univ. of Maryland, College Park, MD (United States). Dept. of Physics; Shao, Xi [Univ. of Maryland, College Park, MD (United States)

    2016-06-14

    The main objective of our work is to provide a theoretical basis and modeling support for the design and experimental setup of a compact laser proton accelerator to produce high-quality proton beams tunable in energy from 50 to 250 MeV using short-pulse sub-petawatt lasers. We performed theoretical and computational studies of energy scaling and Rayleigh-Taylor instability development in laser radiation pressure acceleration (RPA) and developed novel RPA-based schemes to remedy/suppress instabilities for high-quality quasi-monoenergetic proton beam generation, as we proposed. During the project period, we published nine peer-reviewed journal papers and made twenty conference presentations, including six invited talks, on our work. The project supported one graduate student, who received his PhD degree in physics in 2013, and supported two post-doctoral associates. We also mentored three high school students and one undergraduate physics major by inspiring their interest and involving them in the project.

  1. Coulomb field of an accelerated charge physical and mathematical aspects

    CERN Document Server

    Alexander, F J; Alexander, Francis J.; Gerlach, Ulrich H.

    1991-01-01

    The Maxwell field equations relative to a uniformly accelerated frame, and the variational principle from which they are obtained, are formulated in terms of the technique of geometrical gauge invariant potentials. They refer to the transverse magnetic (TM) and the transverse electric (TE) modes. This gauge invariant "2+2" decomposition is used to see how the Coulomb field of a charge, static in an accelerated frame, has properties that suggest features of electromagnetism which are different from those in an inertial frame. In particular, (1) an illustrative calculation shows that the Larmor radiation reaction equals the electrostatic attraction between the accelerated charge and the charge induced on the surface whose history is the event horizon, and (2) a spectral decomposition of the Coulomb potential in the accelerated frame suggests the possibility that the distortive effects of this charge on the Rindler vacuum are akin to those of a charge on a crystal lattice.

  2. Medical physics--particle accelerators--the beginning.

    Science.gov (United States)

    Ganz, Jeremy C

    2014-01-01

    This chapter outlines the early development of particle accelerators with the redesign from linear accelerator to cyclotron by Ernest Lawrence with a view to reducing the size of the machines as the power increased. There are minibiographies of Ernest Lawrence and his brother John. The concept of artificial radiation is outlined and the early attempts at patient treatment are mentioned. The reasons for trying and abandoning neutron therapy are discussed, and the early use of protons is described.

  3. Modern hardware architectures accelerate porous media flow computations

    Science.gov (United States)

    Kulczewski, Michal; Kurowski, Krzysztof; Kierzynka, Michal; Dohnalik, Marek; Kaczmarczyk, Jan; Borujeni, Ali Takbiri

    2012-05-01

    Investigation of rock properties, particularly porosity and permeability, which determine the transport characteristics of the medium, is crucial to reservoir engineering. Nowadays, micro-tomography (micro-CT) methods allow a vast range of petrophysical properties to be obtained. The micro-CT method facilitates visualization of pore structures and acquisition of the total porosity factor, determined by sticking together 2D slices of scanned rock and applying a proper absorption cut-off point. Proper segmentation of the pore representation in 3D is important for solving the permeability of porous media. This factor is now commonly determined by means of Computational Fluid Dynamics (CFD), a popular method for analyzing fluid-flow problems that takes advantage of numerical methods and constantly growing computing power. The recent advent of novel multi-core, many-core and graphics processing unit (GPU) hardware architectures allows scientists to benefit even more from parallel processing and built-in new features. The high level of parallel scalability offers both a decrease in time-to-solution and greater accuracy - top factors in reservoir engineering. This paper aims to present research results related to fluid flow simulations, particularly solving the total porosity and permeability of porous media, taking advantage of modern hardware architectures. In our approach, total porosity is calculated by means of general-purpose computing on multiple GPUs. The application sticks together 2D slices of scanned rock and, by means of a marching tetrahedra algorithm, creates a 3D representation of the pores and calculates the total porosity. Experimental results are compared with data obtained via other popular methods, including Nuclear Magnetic Resonance (NMR), helium porosity and nitrogen permeability tests. Then CFD simulations are performed on a large-scale high performance hardware architecture to solve the flow and permeability of porous media. In our experiments we used Lattice Boltzmann
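
    The total-porosity step itself is conceptually simple: once the micro-CT volume is segmented into pore and matrix voxels, porosity is the pore-voxel fraction. The sketch below is illustrative only (not the authors' GPU code); the synthetic grey-level volume and the cut-off convention are assumptions.

```python
# Toy total-porosity calculation from a segmented micro-CT volume (illustrative only).
import numpy as np

def segment(raw_volume, cutoff):
    """Apply an absorption cut-off to a grey-level volume; low absorption -> pore (assumed)."""
    return (raw_volume < cutoff).astype(np.uint8)

def total_porosity(segmented_volume):
    """Fraction of pore voxels in a binary 3D array (1 = pore, 0 = rock matrix)."""
    return float(np.count_nonzero(segmented_volume)) / segmented_volume.size

rng = np.random.default_rng(0)
raw = rng.normal(loc=100.0, scale=25.0, size=(128, 128, 128))  # stand-in for stacked 2D slices
pores = segment(raw, cutoff=70.0)
print("total porosity:", total_porosity(pores))
```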

  4. Experiments Using Cell Phones in Physics Classroom Education: The Computer-Aided "g" Determination

    Science.gov (United States)

    Vogt, Patrik; Kuhn, Jochen; Muller, Sebastian

    2011-01-01

    This paper continues the collection of experiments that describe the use of cell phones as experimental tools in physics classroom education. We describe a computer-aided determination of the free-fall acceleration "g" using the acoustical Doppler effect. The Doppler shift is a function of the speed of the source. Since a free-falling objects…
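
    A hedged sketch of the kind of analysis the experiment implies (not the authors' procedure): if a phone emitting a constant tone f0 falls away from a fixed microphone, the received frequency is f(t) = f0·c/(c + g·t), so g follows from a fit of f(t). The tone frequency, sampling times and noise level below are assumptions used to generate synthetic data.

```python
# Estimate g from Doppler-shifted frequencies of a tone emitted by a falling phone
# (synthetic data; all numbers are illustrative assumptions).
import numpy as np

c = 343.0            # speed of sound in air, m/s
f0 = 4000.0          # emitted tone, Hz (assumed)
g_true = 9.81

t = np.linspace(0.05, 0.6, 12)                                 # measurement times, s
f_meas = f0 * c / (c + g_true * t)                             # receding source Doppler shift
f_meas += np.random.default_rng(1).normal(0.0, 0.5, t.size)    # ~0.5 Hz measurement noise

# Rearranged model:  c * (f0 / f - 1) = g * t, a straight line through the origin.
y = c * (f0 / f_meas - 1.0)
g_fit = np.sum(y * t) / np.sum(t * t)                          # zero-intercept least squares
print("g estimate:", g_fit)
```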

  5. Physics design of the DARHT 2nd axis accelerator cell

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Y J; Houck, T L; Reginato, L J; Shang, C C; Yu, S S

    1999-08-19

    The next generation of radiographic machines based on induction accelerators requires very high brightness electron beams to realize the desired x-ray spot size and intensity. This high brightness must be maintained throughout the beam transport, from source to x-ray converter target. The accelerator for the second axis of the Dual Axis Radiographic Hydrodynamic Test (DARHT) facility is being designed to accelerate a 4-kA, 2-µs pulse of electrons to 20 MeV. After acceleration, the 2-µs pulse will be chopped into a train of four 50-ns pulses with variable temporal spacing by rapidly deflecting the beam between a beam stop and the final transport section. The short beam pulses will be focused onto an x-ray converter target, generating four radiographic pulses within the 2-µs window. Beam instability due to interaction with the accelerator cells can very adversely affect the beam brightness and radiographic pulse quality. This paper describes the various issues considered in the design of the accelerator cell, with emphasis on transverse impedance and minimizing beam instabilities.

  6. Computing environmental life of electronic products based on failure physics

    Institute of Scientific and Technical Information of China (English)

    Yongqiang Zhang; Zongchang Xu; Chunyang Hu

    2016-01-01

    In some situations, the accelerated life test on environmental stress for electronic products is not easily implemented due to various restrictions, and thus engineers lack product life test data. Concerning this problem, the environmental life of the printed circuit board (PCB) is calculated by way of physics of failure. Influences of thermal cycling and vibration on the PCB and its components are studied. Based on the analysis of force and stress between components and the PCB board in thermal cycle events and vibration events, four life computing models for pins and soldered dots are established. The Miner damage ratio is used to calculate the accumulated damage of a pin or a soldered dot, and the environmental life of the PCB board is then determined by the first one to fail. Finally, an example is used to illustrate the models and their calculations.
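
    The cumulative-damage step uses the standard Palmgren-Miner rule (the abstract's "Miner damage ratio"): damage accumulates as the sum of applied cycles over allowable cycles at each stress level, and life ends when the sum reaches one. The sketch below is illustrative; the cycle counts and allowable-cycle figures are assumptions, not the paper's data.

```python
# Palmgren-Miner cumulative damage sketch for one solder joint (assumed numbers).
def miner_damage(cycle_blocks):
    """cycle_blocks: list of (applied_cycles, cycles_to_failure) per stress level."""
    return sum(n / N for n, N in cycle_blocks)

def environmental_life_years(blocks_per_year):
    """Years until accumulated damage reaches 1.0, assuming the same loading every year."""
    yearly_damage = miner_damage(blocks_per_year)
    return float("inf") if yearly_damage == 0 else 1.0 / yearly_damage

yearly = [
    (365,   8.0e3),    # daily thermal cycles vs. allowable thermal cycles
    (2.0e5, 1.0e8),    # low-level vibration cycles
    (5.0e3, 2.0e6),    # transport/shock vibration cycles
]
print("predicted life (years):", environmental_life_years(yearly))
```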

  7. GeauxDock: Accelerating Structure-Based Virtual Screening with Heterogeneous Computing.

    Directory of Open Access Journals (Sweden)

    Ye Fang

    Full Text Available Computational modeling of drug binding to proteins is an integral component of direct drug design. Particularly, structure-based virtual screening is often used to perform large-scale modeling of putative associations between small organic molecules and their pharmacologically relevant protein targets. Because of a large number of drug candidates to be evaluated, an accurate and fast docking engine is a critical element of virtual screening. Consequently, highly optimized docking codes are of paramount importance for the effectiveness of virtual screening methods. In this communication, we describe the implementation, tuning and performance characteristics of GeauxDock, a recently developed molecular docking program. GeauxDock is built upon the Monte Carlo algorithm and features a novel scoring function combining physics-based energy terms with statistical and knowledge-based potentials. Developed specifically for heterogeneous computing platforms, the current version of GeauxDock can be deployed on modern, multi-core Central Processing Units (CPUs) as well as massively parallel accelerators, Intel Xeon Phi and NVIDIA Graphics Processing Unit (GPU). First, we carried out a thorough performance tuning of the high-level framework and the docking kernel to produce a fast serial code, which was then ported to shared-memory multi-core CPUs yielding a near-ideal scaling. Further, using Xeon Phi gives 1.9× performance improvement over a dual 10-core Xeon CPU, whereas the best GPU accelerator, GeForce GTX 980, achieves a speedup as high as 3.5×. On that account, GeauxDock can take advantage of modern heterogeneous architectures to considerably accelerate structure-based virtual screening applications. GeauxDock is open-sourced and publicly available at www.brylinski.org/geauxdock and https://figshare.com/articles/geauxdock_tar_gz/3205249.

  8. GeauxDock: Accelerating Structure-Based Virtual Screening with Heterogeneous Computing

    Science.gov (United States)

    Fang, Ye; Ding, Yun; Feinstein, Wei P.; Koppelman, David M.; Moreno, Juana; Jarrell, Mark; Ramanujam, J.; Brylinski, Michal

    2016-01-01

    Computational modeling of drug binding to proteins is an integral component of direct drug design. Particularly, structure-based virtual screening is often used to perform large-scale modeling of putative associations between small organic molecules and their pharmacologically relevant protein targets. Because of a large number of drug candidates to be evaluated, an accurate and fast docking engine is a critical element of virtual screening. Consequently, highly optimized docking codes are of paramount importance for the effectiveness of virtual screening methods. In this communication, we describe the implementation, tuning and performance characteristics of GeauxDock, a recently developed molecular docking program. GeauxDock is built upon the Monte Carlo algorithm and features a novel scoring function combining physics-based energy terms with statistical and knowledge-based potentials. Developed specifically for heterogeneous computing platforms, the current version of GeauxDock can be deployed on modern, multi-core Central Processing Units (CPUs) as well as massively parallel accelerators, Intel Xeon Phi and NVIDIA Graphics Processing Unit (GPU). First, we carried out a thorough performance tuning of the high-level framework and the docking kernel to produce a fast serial code, which was then ported to shared-memory multi-core CPUs yielding a near-ideal scaling. Further, using Xeon Phi gives 1.9× performance improvement over a dual 10-core Xeon CPU, whereas the best GPU accelerator, GeForce GTX 980, achieves a speedup as high as 3.5×. On that account, GeauxDock can take advantage of modern heterogeneous architectures to considerably accelerate structure-based virtual screening applications. GeauxDock is open-sourced and publicly available at www.brylinski.org/geauxdock and https://figshare.com/articles/geauxdock_tar_gz/3205249. PMID:27420300

  9. GeauxDock: Accelerating Structure-Based Virtual Screening with Heterogeneous Computing.

    Science.gov (United States)

    Fang, Ye; Ding, Yun; Feinstein, Wei P; Koppelman, David M; Moreno, Juana; Jarrell, Mark; Ramanujam, J; Brylinski, Michal

    2016-01-01

    Computational modeling of drug binding to proteins is an integral component of direct drug design. Particularly, structure-based virtual screening is often used to perform large-scale modeling of putative associations between small organic molecules and their pharmacologically relevant protein targets. Because of a large number of drug candidates to be evaluated, an accurate and fast docking engine is a critical element of virtual screening. Consequently, highly optimized docking codes are of paramount importance for the effectiveness of virtual screening methods. In this communication, we describe the implementation, tuning and performance characteristics of GeauxDock, a recently developed molecular docking program. GeauxDock is built upon the Monte Carlo algorithm and features a novel scoring function combining physics-based energy terms with statistical and knowledge-based potentials. Developed specifically for heterogeneous computing platforms, the current version of GeauxDock can be deployed on modern, multi-core Central Processing Units (CPUs) as well as massively parallel accelerators, Intel Xeon Phi and NVIDIA Graphics Processing Unit (GPU). First, we carried out a thorough performance tuning of the high-level framework and the docking kernel to produce a fast serial code, which was then ported to shared-memory multi-core CPUs yielding a near-ideal scaling. Further, using Xeon Phi gives 1.9× performance improvement over a dual 10-core Xeon CPU, whereas the best GPU accelerator, GeForce GTX 980, achieves a speedup as high as 3.5×. On that account, GeauxDock can take advantage of modern heterogeneous architectures to considerably accelerate structure-based virtual screening applications. GeauxDock is open-sourced and publicly available at www.brylinski.org/geauxdock and https://figshare.com/articles/geauxdock_tar_gz/3205249.

  10. Formation and Acceleration Physics on Plasma Injector 1

    Science.gov (United States)

    Howard, Stephen

    2012-10-01

    Plasma Injector 1 (PI-1) is a two-stage coaxial Marshall gun with conical accelerator electrodes, similar in shape to the MARAUDER device, with power input of the same topology as the RACE device. The goal of PI-1 research is to produce a self-confined compact toroid with high flux (200 mWb), high density (3x10^16 cm-3) and moderate initial temperature (100 eV) to be used as the target plasma in an MTF reactor. PI-1 is 5 meters long and 1.9 m in diameter at the expansion region, where a high-aspect-ratio (4.4) spheromak is formed with a minimum lambda of 9 m-1. The acceleration stage is 4 m long and tapers to an outer diameter of 40 cm. The capacitor banks store 0.5 MJ for formation and 1.13 MJ for acceleration. Power is delivered via 62 independently controlled switch modules. Several geometries for the formation bias field, inner electrodes and target chamber have been tested, and trends in accelerator efficiency and target lifetime have been observed. Thomson scattering and ion Doppler spectroscopy show significant heating (>100 eV) as the CT is compressed in the conical accelerator. B-dot probes show magnetic field structure consistent with Grad-Shafranov models and MHD simulations, and the CT axial length depends strongly on the lambda profile.

  11. GPU-based acceleration of free energy calculations in solid state physics

    CERN Document Server

    Januszewski, Michał; Crivelli, Dawid; Gardas, Bartłomiej

    2014-01-01

    Obtaining a thermodynamically accurate phase diagram through numerical calculations is a computationally expensive problem that is crucially important to understanding the complex phenomena of solid state physics, such as superconductivity. In this work we show how this type of analysis can be significantly accelerated through the use of modern GPUs. We illustrate this with a concrete example of free energy calculation in multi-band iron-based superconductors, known to exhibit a superconducting state with oscillating order parameter. Our approach can also be used for classical BCS-type superconductors. With a customized algorithm and compiler tuning we are able to achieve a 19x speedup compared to the CPU (119x compared to a single CPU core), reducing calculation time from minutes to mere seconds, enabling the analysis of larger systems and the elimination of finite size effects.

  12. Accelerating Computation of the Unit Commitment Problem (Presentation)

    Energy Technology Data Exchange (ETDEWEB)

    Hummon, M.; Barrows, C.; Jones, W.

    2013-10-01

    Production cost models (PCMs) simulate power system operation at hourly (or higher) resolution. While computation times often extend into multiple days, the sequential nature of PCMs makes parallelism difficult. We exploit the persistence of unit commitment decisions to select partition boundaries for simulation horizon decomposition and parallel computation. Partitioned simulations are benchmarked against sequential solutions for optimality and computation time.

  13. New Physics at Low Accelerations (MOND): an Alternative to Dark Matter

    CERN Document Server

    Milgrom, Mordehai

    2009-01-01

    I describe the MOND paradigm, which posits a departure from standard physics below a certain acceleration scale. This acceleration, as deduced from the dynamics in galaxies, is found mysteriously to agree with the cosmic acceleration scales defined by the present-day expansion rate and by the density of 'dark energy'. I put special emphasis on phenomenology and on critical comparison with the competing paradigm based on classical dynamics plus cold dark matter. I also describe briefly nonrelativistic and relativistic MOND theories.
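
    The basic phenomenology can be illustrated numerically: with an interpolating function mu(x), the true acceleration a satisfies a·mu(a/a0) = a_N, reducing to Newtonian dynamics for a >> a0 and to a = sqrt(a_N·a0) deep below a0, which yields flat rotation curves. The sketch below is an illustration only (not from the paper); the "simple" interpolating function, the toy point-mass galaxy and all numbers are assumptions.

```python
# MOND rotation-curve toy calculation with the "simple" interpolating function
# mu(x) = x / (1 + x); in that case a*mu(a/a0) = a_N inverts in closed form.
import numpy as np

G = 6.674e-11          # m^3 kg^-1 s^-2
a0 = 1.2e-10           # MOND acceleration scale, m/s^2
M = 1.0e41             # baryonic mass of the toy galaxy, kg (~5e10 solar masses, assumed)

r = np.logspace(19, 21, 50)            # radii from ~0.3 kpc to ~30 kpc, in metres
a_newton = G * M / r**2

# Solving a^2 - a_N*a - a_N*a0 = 0 for the physical root:
a_mond = 0.5 * (a_newton + np.sqrt(a_newton**2 + 4.0 * a_newton * a0))

v_newton = np.sqrt(a_newton * r)       # falls off as 1/sqrt(r) at large radius
v_mond = np.sqrt(a_mond * r)           # flattens towards (G*M*a0)**0.25
print("asymptotic flat speed ~", (G * M * a0) ** 0.25, "m/s")
```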

  14. BaBar computing - From collisions to physics results

    CERN Document Server

    CERN. Geneva

    2004-01-01

    The BaBar experiment at SLAC studies B-physics at the Upsilon(4S) resonance using the high-luminosity e+e- collider PEP-II at the Stanford Linear Accelerator Center (SLAC). Taking, processing and analyzing the very large data samples is a significant computing challenge. This presentation will describe the entire BaBar computing chain and illustrate the solutions chosen as well as their evolution with the ever higher luminosity being delivered by PEP-II. This will include data acquisition and software triggering in a high-availability, low-deadtime online environment, a prompt, automated calibration pass through the data at SLAC, and then the full reconstruction of the data that takes place at INFN-Padova within 24 hours. Monte Carlo production takes place in a highly automated fashion at 25+ sites. The resulting real and simulated data is distributed and made available at SLAC and other computing centers. For analysis a much more sophisticated skimming pass has been introduced in the past year, ...

  15. Cloud Computing and Validated Learning for Accelerating Innovation in IoT

    Science.gov (United States)

    Suciu, George; Todoran, Gyorgy; Vulpe, Alexandru; Suciu, Victor; Bulca, Cristina; Cheveresan, Romulus

    2015-01-01

    Innovation in the Internet of Things (IoT) requires more than just the creation of technology and the use of cloud computing or big data platforms. It requires accelerated commercialization, or aptly named go-to-market processes. To successfully accelerate, companies need a new type of product development, the so-called validated learning process.…

  16. Accelerator System Model (ASM) user manual with physics and engineering model documentation. ASM version 1.0

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1993-07-01

    The Accelerator System Model (ASM) is a computer program developed to model proton radiofrequency accelerators and to carry out system level trade studies. The ASM FORTRAN subroutines are incorporated into an intuitive graphical user interface which provides for the "construction" of the accelerator in a window on the computer screen. The interface is based on the Shell for Particle Accelerator Related Codes (SPARC) software technology written for the Macintosh operating system in the C programming language. This User Manual describes the operation and use of the ASM application within the SPARC interface. The Appendix provides a detailed description of the physics and engineering models used in ASM. ASM Version 1.0 is joint project of G. H. Gillespie Associates, Inc. and the Accelerator Technology (AT) Division of the Los Alamos National Laboratory. Neither the ASM Version 1.0 software nor this ASM Documentation may be reproduced without the expressed written consent of both the Los Alamos National Laboratory and G. H. Gillespie Associates, Inc.

  17. GpuCV : a GPU-accelerated framework for image processing and computer vision

    OpenAIRE

    ALLUSSE, Yannick; Horain, Patrick; Agarwal, Ankit; Saipriyadarshan, Cindula

    2008-01-01

    International audience; This paper briefly describes the state of the art of accelerating image processing with graphics hardware (GPU) and discusses some of its caveats. Then it describes GpuCV, an open source multi-platform library for GPU-accelerated image processing and Computer Vision operators and applications. It is meant for computer vision scientists not familiar with GPU technologies. GpuCV is designed to be compatible with the popular OpenCV library by offering GPU-accelera...

  18. CPUG: Computational Physics UG Degree Program at Oregon State University

    Science.gov (United States)

    Landau, Rubin H.

    2004-03-01

    A four-year undergraduate degree program leading to a Bachelor's degree in Computational Physics is described. The courses and texts under development are research- and Web-rich, and culminate in an advanced computational laboratory derived from graduate theses and faculty research. The five computational courses and course materials developed for this program act as a bridge connecting the physics with the computation and the mathematics, and as a link to the computational science community.

  19. Computational model of sustained acceleration effects on human cognitive performance.

    Science.gov (United States)

    McKinlly, Richard A; Gallimore, Jennie J

    2013-08-01

    Extreme acceleration maneuvers encountered in modern agile fighter aircraft can wreak havoc on human physiology, thereby significantly influencing cognitive task performance. As oxygen content declines under acceleration stress, the activity of high order cortical tissue reduces to ensure sufficient metabolic resources are available for critical life-sustaining autonomic functions. Consequently, cognitive abilities reliant on these affected areas suffer significant performance degradations. The goal was to develop and validate a model capable of predicting human cognitive performance under acceleration stress. Development began with creation of a proportional control cardiovascular model that produced predictions of several hemodynamic parameters, including eye-level blood pressure and regional cerebral oxygen saturation (rSo2). An algorithm was derived to relate changes in rSo2 within specific brain structures to performance on cognitive tasks that require engagement of different brain areas. Data from the "precision timing" experiment were then used to validate the model predicting cognitive performance as a function of G(z) profile. The following are value ranges. Results showed high agreement between the measured and predicted values for the rSo2 (correlation coefficient: 0.7483-0.8687; linear best-fit slope: 0.5760-0.9484; mean percent error: 0.75-3.33) and cognitive performance models (motion inference task--correlation coefficient: 0.7103-0.9451; linear best-fit slope: 0.7416-0.9144; mean percent error: 6.35-38.21; precision timing task--correlation coefficient: 0.6856-0.9726; linear best-fit slope: 0.5795-1.027; mean percent error: 6.30-17.28). The evidence suggests that the model is capable of accurately predicting cognitive performance of simplistic tasks under high acceleration stress.

  20. Overview of Accelerator Physics Studies and High Level Software for the Diamond Light Source

    CERN Document Server

    Bartolini, Riccardo; Belgroune, Mahdia; Christou, Chris; Holder, David J; Jones, James; Kempson, Vince; Martin, Ian; Rowland, James H; Singh, Beni; Smith, Susan L; Varley, Jennifer Anne; Wyles, Naomi

    2005-01-01

    DIAMOND is a 3 GeV synchrotron light source under construction at the Rutherford Appleton Laboratory in Oxfordshire (UK). The accelerator complex consists of a 100 MeV LINAC, a full-energy booster and a 3 GeV storage ring with 22 straight sections available for IDs. Installation of all three accelerators has begun, and LINAC commissioning is due to start in Spring 2005. This paper will give an overview of the accelerator physics activity to produce final layouts and prepare for the commissioning of the accelerator complex. The DIAMOND facility is expected to be operational for users in 2007

  1. Cloud computing approaches to accelerate drug discovery value chain.

    Science.gov (United States)

    Garg, Vibhav; Arora, Suchir; Gupta, Chitra

    2011-12-01

    Continued advancements in technology have helped high throughput screening (HTS) evolve from a linear to a parallel approach by performing system-level screening. Advanced experimental methods used for HTS at various steps of drug discovery (i.e. target identification, target validation, lead identification and lead validation) can generate data of the order of terabytes. As a consequence, there is a pressing need to store, manage, mine and analyze this data to identify informational tags. This need in turn poses challenges to computer scientists to offer matching hardware and software infrastructure, while managing the varying degree of computational power required. Therefore, the potential of "On-Demand Hardware" and "Software as a Service (SAAS)" delivery mechanisms cannot be denied. This on-demand computing, largely referred to as Cloud Computing, is now transforming drug discovery research. Also, integration of Cloud computing with parallel computing is certainly expanding its footprint in the life sciences community. The speed, efficiency and cost effectiveness have made cloud computing a 'good to have' tool for researchers, providing them significant flexibility and allowing them to focus on the 'what' of science and not the 'how'. Once it reaches maturity, the Discovery-Cloud would fit best to manage drug discovery and clinical development data generated using advanced HTS techniques, hence supporting the vision of personalized medicine.

  2. Accelerating patch-based directional wavelets with multicore parallel computing in compressed sensing MRI.

    Science.gov (United States)

    Li, Qiyue; Qu, Xiaobo; Liu, Yunsong; Guo, Di; Lai, Zongying; Ye, Jing; Chen, Zhong

    2015-06-01

    Compressed sensing MRI (CS-MRI) is a promising technology to accelerate magnetic resonance imaging. Both improving the image quality and reducing the computation time are important for this technology. Recently, a patch-based directional wavelet (PBDW) has been applied in CS-MRI to improve edge reconstruction. However, this method is time consuming since it involves extensive computations, including geometric direction estimation and numerous iterations of wavelet transform. To accelerate the computations of PBDW, we propose a general parallelization of patch-based processing that takes advantage of multicore processors. Additionally, two pertinent optimizations, excluding smooth patches and pre-arranged insertion sort, which make use of sparsity in MR images, are also proposed. Simulation results demonstrate that the acceleration factor with the parallel architecture of PBDW approaches the number of central processing unit cores, and that the pertinent optimizations are also effective in providing further acceleration. The proposed approaches allow compressed sensing MRI reconstruction to be accomplished within several seconds.

  3. Acceleration of matrix element computations for precision measurements

    CERN Document Server

    Brandt, Oleg; Wang, Michael H L S; Ye, Zhenyu

    2014-01-01

    The matrix element technique provides a superior statistical sensitivity for precision measurements of important parameters at hadron colliders, such as the mass of the top quark or the cross section for the production of Higgs bosons. The main practical limitation of the technique is its high computational demand. Using the concrete example of the top quark mass, we present two approaches to reduce the computation time of the technique by two orders of magnitude. First, we utilize low-discrepancy sequences for numerical Monte Carlo integration in conjunction with a dedicated estimator of numerical uncertainty, a novelty in the context of the matrix element technique. Second, we utilize a new approach that factorizes the overall jet energy scale from the matrix element computation, a novelty in the context of top quark mass measurements. The utilization of low-discrepancy sequences is of particular general interest, as it is universally applicable to Monte Carlo integration, and independent of the computing e...
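
    The low-discrepancy idea can be illustrated independently of the matrix element technique itself. The sketch below is not the analysis code: it compares plain Monte Carlo with a scrambled Sobol sequence on a toy integrand with a known integral, standing in for a matrix-element phase-space integral; the integrand, dimension and sample size are assumptions.

```python
# Plain Monte Carlo vs. quasi-Monte Carlo (Sobol) integration on a toy integrand.
import numpy as np
from scipy.stats import qmc

def integrand(u):
    """Toy 5-dimensional integrand over the unit hypercube; exact integral is 1.0."""
    return np.prod(1.5 * np.sqrt(u), axis=1)        # each factor integrates to 1.0

dim, m = 5, 14                                       # 2**14 = 16384 sample points
exact = 1.0

rng = np.random.default_rng(0)
mc_estimate = integrand(rng.random((2 ** m, dim))).mean()

sobol = qmc.Sobol(d=dim, scramble=True, seed=0)
qmc_estimate = integrand(sobol.random_base2(m)).mean()   # low-discrepancy points

print("plain MC error :", abs(mc_estimate - exact))
print("Sobol QMC error:", abs(qmc_estimate - exact))
```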

  4. Can low energy electrons affect high energy physics accelerators?

    CERN Document Server

    Cimino, R; Furman, M A; Pivi, M; Ruggiero, F; Rumolo, Giovanni; Zimmermann, Frank

    2004-01-01

    The properties of the electrons participating in the build-up of an electron cloud (EC) inside the beam-pipe have become an increasingly important issue for present and future accelerators whose performance may be limited by this effect. The EC formation and evolution are determined by the wall-surface properties of the accelerator vacuum chamber. Thus, the accurate modeling of these surface properties is an indispensable input to simulation codes aimed at the correct prediction of build-up thresholds, electron-induced instability or EC heat load. In this letter, we present the results of surface measurements performed on a prototype of the beam screen adopted for the Large Hadron Collider (LHC), which presently is under construction at CERN. We have measured the total secondary electron yield (SEY) as well as the related energy distribution curves (EDC) of the secondary electrons as a function of incident electron energy. Attention has been paid, for the first time in this context, to the probability at whic...

  5. Computing support for High Energy Physics

    Energy Technology Data Exchange (ETDEWEB)

    Avery, P.; Yelton, J. [Univ. of Florida, Gainesville, FL (United States)

    1996-12-01

    This computing proposal (Task S) is submitted separately but in support of the High Energy Experiment (CLEO, Fermilab, CMS) and Theory tasks. The authors have built a very strong computing base at Florida over the past 8 years. In fact, computing has been one of the main contributions to their experimental collaborations, involving not just computing capacity for running Monte Carlos and data reduction, but participation in many computing initiatives, industrial partnerships, computing committees and collaborations. These facts justify the submission of a separate computing proposal.

  6. Accelerating Scientific Discovery Through Computation and Visualization III. Tight-Binding Wave Functions for Quantum Dots.

    Science.gov (United States)

    Sims, James S; George, William L; Griffin, Terence J; Hagedorn, John G; Hung, Howard K; Kelso, John T; Olano, Marc; Peskin, Adele P; Satterfield, Steven G; Terrill, Judith Devaney; Bryant, Garnett W; Diaz, Jose G

    2008-01-01

    This is the third in a series of articles that describe, through examples, how the Scientific Applications and Visualization Group (SAVG) at NIST has utilized high performance parallel computing, visualization, and machine learning to accelerate scientific discovery. In this article we focus on the use of high performance computing and visualization for simulations of nanotechnology.

  7. Accelerating Scientific Discovery Through Computation and Visualization III. Tight-Binding Wave Functions for Quantum Dots

    OpenAIRE

    2008-01-01

    This is the third in a series of articles that describe, through examples, how the Scientific Applications and Visualization Group (SAVG) at NIST has utilized high performance parallel computing, visualization, and machine learning to accelerate scientific discovery. In this article we focus on the use of high performance computing and visualization for simulations of nanotechnology.

  8. Accelerating Computation of DNA Sequence Alignment in Distributed Environment

    Science.gov (United States)

    Guo, Tao; Li, Guiyang; Deaton, Russel

    Sequence similarity and alignment are among the most important operations in computational biology. However, analyzing large sets of DNA sequences is impractical on a regular PC. Using multiple threads with the JavaParty mechanism, this project successfully extended the capabilities of regular Java to a distributed environment for the simulation of DNA computation. With the aid of JavaParty and the design of multiple threads, the results of this study demonstrated that the modified regular Java program could perform parallel computing without using RMI or socket communication. In this paper, an efficient method for modeling and comparing DNA sequences with dynamic programming and JavaParty is first proposed. Additionally, results of this method in a distributed environment are discussed.

  9. Using the mobile phone acceleration sensor in Physics experiments: free and damped harmonic oscillations

    CERN Document Server

    Castro-Palacio, Juan Carlos; Gimenez, Marcos H; Monsoriu, Juan A

    2012-01-01

    The mobile phone acceleration sensor has been used in Physics experiments on free and damped oscillations. Results for the period, frequency, spring constant and damping constant match very well with measurements obtained by other methods. The Accelerometer Monitor application for Android has been used to record the outputs of the sensor. Perspectives for the Physics laboratory are also discussed.
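
    A hedged sketch of the kind of analysis implied above (not the authors' code): fit the acceleration signal of a damped mass-spring oscillator, a(t) = A·exp(-γt)·cos(ωt + φ) + offset, to accelerometer samples and extract the frequency and damping constant. The synthetic samples below stand in for a real sensor export; the sampling rate, true parameters and noise level are assumptions.

```python
# Fit a damped-oscillation model to (synthetic) accelerometer data.
import numpy as np
from scipy.optimize import curve_fit

def damped(t, A, gamma, w, phi, offset):
    return A * np.exp(-gamma * t) * np.cos(w * t + phi) + offset

# Synthetic "sensor" samples: 100 Hz for 10 s, true f = 1.5 Hz, gamma = 0.12 1/s.
t = np.arange(0.0, 10.0, 0.01)
a_true = damped(t, 3.0, 0.12, 2 * np.pi * 1.5, 0.3, 9.81)
a_meas = a_true + np.random.default_rng(2).normal(0.0, 0.05, t.size)

p0 = [2.0, 0.1, 2 * np.pi * 1.4, 0.0, 9.8]          # rough initial guesses
popt, _ = curve_fit(damped, t, a_meas, p0=p0)
A, gamma, w, phi, offset = popt
print("frequency [Hz]:", w / (2 * np.pi), " damping constant [1/s]:", gamma)
```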

  10. Physics-Based Fragment Acceleration Modeling for Pressurized Tank Burst Risk Assessments

    Science.gov (United States)

    Manning, Ted A.; Lawrence, Scott L.

    2014-01-01

    As part of comprehensive efforts to develop physics-based risk assessment techniques for space systems at NASA, coupled computational fluid and rigid body dynamic simulations were carried out to investigate the flow mechanisms that accelerate tank fragments in bursting pressurized vessels. Simulations of several configurations were compared to analyses based on the industry-standard Baker explosion model, and were used to formulate an improved version of the model. The standard model, which neglects an external fluid, was found to agree best with simulation results only in configurations where the internal-to-external pressure ratio is very high and fragment curvature is small. The improved model introduces terms that accommodate an external fluid and better account for variations based on circumferential fragment count. Physics-based analysis was critical in increasing the model's range of applicability. The improved tank burst model can be used to produce more accurate risk assessments of space vehicle failure modes that involve high-speed debris, such as exploding propellant tanks and bursting rocket engines.

  11. Accelerating Innovation: How Nuclear Physics Benefits Us All

    Energy Technology Data Exchange (ETDEWEB)

    2011-01-01

    From fighting cancer to assuring food is safe to protecting our borders, nuclear physics impacts the lives of people around the globe every day. In learning about the nucleus of the atom and the forces that govern it, scientists develop a depth of knowledge, techniques and remarkable research tools that can be used to develop a variety of often unexpected, practical applications. These applications include devices and technologies for medical diagnostics and therapy, energy production and exploration, safety and national security, and for the analysis of materials and environmental contaminants. This brochure by the Office of Nuclear Physics of the USDOE Office of Science discusses nuclear physics and ways in which its applications fuel our economic vitality, and make the world and our lives safer and healthier.

  12. The physics design of accelerator-driven transmutation systems

    Energy Technology Data Exchange (ETDEWEB)

    Venneri, F.

    1995-02-01

    Nuclear systems under study in the Los Alamos Accelerator-Driven Transmutation Technology (ADTT) program will allow the destruction of spent nuclear fuel and weapons-return plutonium, as well as the production of nuclear energy from the thorium cycle, without a long-lived radioactive waste stream. The subcritical systems proposed represent a radical departure from traditional nuclear concepts (reactors), yet the actual implementation of ADTT systems is based on modest extrapolations of existing technology. These systems strive to keep the best of what nuclear technology has developed over the years within a sensible, conservative design envelope, and ultimately offer a safer, less expensive and more environmentally sound approach to nuclear power.

  13. XXV IUPAP Conference on Computational Physics (CCP2013): Preface

    Science.gov (United States)

    2014-05-01

    XXV IUPAP Conference on Computational Physics (CCP2013) was held from 20-24 August 2013 at the Russian Academy of Sciences in Moscow, Russia. The annual Conferences on Computational Physics (CCP) present an overview of the most recent developments and opportunities in computational physics across a broad range of topical areas. The CCP series aims to draw computational scientists from around the world and to stimulate interdisciplinary discussion and collaboration by putting together researchers interested in various fields of computational science. It is organized under the auspices of the International Union of Pure and Applied Physics and has been in existence since 1989. The CCP series alternates between Europe, America and Asia-Pacific. The conferences are traditionally supported by European Physical Society and American Physical Society. This year the Conference host was Landau Institute for Theoretical Physics. The Conference contained 142 presentations, and, in particular, 11 plenary talks with comprehensive reviews from airbursts to many-electron systems. We would like to take this opportunity to thank our sponsors: International Union of Pure and Applied Physics (IUPAP), European Physical Society (EPS), Division of Computational Physics of American Physical Society (DCOMP/APS), Russian Foundation for Basic Research, Department of Physical Sciences of Russian Academy of Sciences, RSC Group company. Further conference information and images from the conference are available in the pdf.

  14. A Case Study: Novel Group Interactions through Introductory Computational Physics

    CERN Document Server

    Obsniuk, Michael J; Caballero, Marcos D

    2015-01-01

    With the advent of high-level programming languages capable of quickly rendering three-dimensional simulations, the inclusion of computers as a learning tool in the classroom has become more prevalent. Although work has begun to study the patterns seen in implementing and assessing computation in introductory physics, more insight is needed to understand the observed effects of blending computation with physics in a group setting. In a newly adopted format of introductory calculus-based mechanics, called Projects and Practices in Physics, groups of students work on short modeling projects -- which make use of a novel inquiry-based approach -- to develop their understanding of both physics content and practice. Preliminary analyses of observational data of groups engaging with computation, coupled with synchronized computer screencasts, have revealed a unique group interaction afforded by the practices specific to computational physics -- problem debugging.

  15. Accelerating Missile Threat Engagement Simulations Using Personal Computer Graphics Cards

    Science.gov (United States)

    2005-03-01

    ... personal computer on the market today, have reached a level of power and programmability that enables them to be used as high performance stream ... expected to continue at this rate for another five years, perhaps achieving tera-FLOP performance by 2005 [Mac03]. While the main, market-driven ...

  16. Unified Compression-Based Acceleration of Edit-Distance Computation

    CERN Document Server

    Hermelin, Danny; Landau, Shir; Weimann, Oren

    2010-01-01

    The edit distance problem is a classical fundamental problem in computer science in general, and in combinatorial pattern matching in particular. The standard dynamic programming solution for this problem computes the edit-distance between a pair of strings of total length O(N) in O(N^2) time. To date, this quadratic upper bound has never been substantially improved for general strings. However, there are known techniques for breaking this bound if the strings are known to compress well under a particular compression scheme. The basic idea is to first compress the strings, and then to compute the edit distance between the compressed strings. As it turns out, practically all known o(N^2) edit-distance algorithms work, in some sense, under the same paradigm described above. It is therefore natural to ask whether there is a single edit-distance algorithm that works for strings which are compressed under any compression scheme. A rephrasing of this question is to ask whether a single algorithm can explo...
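
    For reference, the quadratic baseline that the compression-based methods above try to beat is the textbook dynamic program sketched below; this is a generic Python illustration of the O(N^2) algorithm, not the authors' compressed-string method.

        def edit_distance(a: str, b: str) -> int:
            """Standard O(len(a)*len(b)) dynamic program for edit (Levenshtein) distance."""
            m, n = len(a), len(b)
            prev = list(range(n + 1))          # row i-1 of the DP table
            for i in range(1, m + 1):
                cur = [i] + [0] * n            # row i of the DP table
                for j in range(1, n + 1):
                    cost = 0 if a[i - 1] == b[j - 1] else 1
                    cur[j] = min(prev[j] + 1,          # deletion
                                 cur[j - 1] + 1,       # insertion
                                 prev[j - 1] + cost)   # substitution or match
                prev = cur
            return prev[n]

        print(edit_distance("kitten", "sitting"))      # prints 3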

  17. The computer simulation of laser proton acceleration for hadron therapy

    Science.gov (United States)

    Lykov, Vladimir; Baydin, Grigory

    2008-11-01

    Ion acceleration by intense ultra-short laser pulses is of interest in view of its possible applications to proton radiography, production of medical isotopes and hadron therapy. The 3D relativistic PIC code LegoLPI has been developed at RFNC-VNIITF for modeling intense laser interaction with plasma. LegoLPI simulations were carried out to find the optimal conditions for generation of proton beams with parameters suitable for hadron therapy. The simulations show that the optimal target may be a two-layer foil of aluminum and polyethylene with thicknesses of 100 nm and 50 nm, respectively. The maximum efficiency of laser energy conversion into 200 MeV protons is achieved by irradiating these foils with a 30 fs laser pulse with an intensity of about 2x10^22 W/cm^2. The conclusion is made that lasers with a peak power of about 0.5-1 PW and an average power of 0.5-1 kW are needed for generation of proton beams with parameters suitable for proton therapy.

  18. Computer control of large accelerators, design concepts and methods

    Science.gov (United States)

    Beck, F.; Gormley, M.

    1985-03-01

    Unlike most of the specialities treated in this volume, control system design is still an art, not a science. This presentation is an attempt to produce a primer for prospective practitioners of this art. A large modern accelerator requires a comprehensive control system for commissioning, machine studies, and day-to-day operation. Faced with the requirement to design a control system for such a machine, the control system architect has a bewildering array of technical devices and techniques at his disposal, and it is our aim in the following chapters to lead him through the characteristics of the problems he will have to face and the practical alternatives available for solving them. We emphasize good system architecture using commercially available hardware and software components, but in addition we discuss the actual control strategies which are to be implemented, since it is at the point of deciding what facilities shall be available that the complexity of the control system and its cost are implicitly decided.

  19. Report of the Subpanel on Accelerator Research and Development of the High Energy Physics Advisory Panel

    Energy Technology Data Exchange (ETDEWEB)

    1980-06-01

    Accelerator R and D in the US High Energy Physics (HEP) program is reviewed. As a result of this study, some shift in priority, particularly as regards long-range accelerator R and D, is suggested to best serve the future needs of the US HEP program. Some specific new directions for the US R and D effort are set forth. 18 figures, 5 tables. (RWR)

  20. Accelerator Layout and Physics of X-Ray Free-Electron Lasers

    CERN Document Server

    Decking, W

    2005-01-01

    X-ray Free-Electron Laser facilities are planned or already under construction around the world. This talk covers the X-ray Free-Electron Lasers LCLS (SLAC), European XFEL (DESY) and SCSS (Spring8). All aim for self-amplified spontaneous emission (SASE) FEL radiation at wavelengths of approximately 0.1 nm. The required excellent electron beam quality poses challenges to accelerator physicists: space charge forces, coherent synchrotron radiation and wakefields can deteriorate the beam quality. The accelerator physics and technological challenges behind each of the projects will be reviewed, covering the critical components: the low-emittance electron gun, bunch compressors, accelerating structures and undulator systems.

  1. Golden Jubilee photos: Computers for physics

    CERN Multimedia

    2004-01-01

    CERN's first computer, a huge vacuum-tube Ferranti Mercury, was installed in building 2 in 1958. With its 60 microsecond clock cycle, it was a million times slower than today's big computers. The Mercury took 3 months to install and filled a huge room; even so, its computational ability didn't quite match that of a modern pocket calculator. "Mass" storage was provided by four magnetic drums each holding 32K x 20 bits - not enough to hold the data from a single proton-proton collision in the LHC. It was replaced in 1960 by the IBM 709 computer, seen here being unloaded at Cointrin airport. Although it was superseded so quickly by transistor-equipped machines, a small part of the Ferranti Mercury remains. The computer's engineers installed a warning bell to signal computing errors - it can still be found mounted on the wall in a corridor of building 2.

  2. High energy physics advisory panel's composite subpanel for the assessment of the status of accelerator physics and technology

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-05-01

    In November 1994, Dr. Martha Krebs, Director of the US Department of Energy (DOE) Office of Energy Research (OER), initiated a broad assessment of the current status and promise of the field of accelerator physics and technology with respect to five OER programs -- High Energy Physics, Nuclear Physics, Basic Energy Sciences, Fusion Energy, and Health and Environmental Research. Dr. Krebs asked the High Energy Physics Advisory Panel (HEPAP) to establish a composite subpanel with representation from the five OER advisory committees and with a balance of membership drawn broadly from both the accelerator community and from those scientific disciplines associated with the OER programs. The Subpanel was also charged to provide recommendations and guidance on appropriate future research and development needs, management issues, and funding requirements. The Subpanel finds that accelerator science and technology is a vital and intellectually exciting field. It has provided essential capabilities for the DOE/OER research programs with an enormous impact on the nation's scientific research, and it has significantly enhanced the nation's biomedical and industrial capabilities. Further progress in this field promises to open new possibilities for the scientific goals of the OER programs and to further benefit the nation. Sustained support of forefront accelerator research and development by the DOE's OER programs and the DOE's predecessor agencies has been responsible for much of this impact on research. This report documents these contributions to the DOE energy research mission and to the nation.

  3. Computational Tools for Accelerating Carbon Capture Process Development

    Energy Technology Data Exchange (ETDEWEB)

    Miller, David

    2013-01-01

    The goals of the work reported are: to develop new computational tools and models to enable industry to more rapidly develop and deploy new advanced energy technologies; to demonstrate the capabilities of the CCSI Toolset on non-proprietary case studies; and to deploy the CCSI Toolset to industry. Challenges of simulating carbon capture (and other) processes include: dealing with multiple scales (particle, device, and whole process scales); integration across scales; verification, validation, and uncertainty; and decision support. The tools cover: risk analysis and decision making; validated, high-fidelity CFD; high-resolution filtered sub-models; process design and optimization tools; advanced process control and dynamics; process models; basic data sub-models; and cross-cutting integration tools.

  4. Probing new physics scenarios in accelerator and reactor neutrino experiments

    Science.gov (United States)

    Di Iura, A.; Girardi, I.; Meloni, D.

    2015-06-01

    We perform a detailed combined fit to the ν̄_e → ν̄_e disappearance data of the Daya Bay experiment and the ν_μ → ν_e appearance and ν_μ → ν_μ disappearance data of the Tokai to Kamioka (T2K) experiment in the presence of two models of new physics affecting neutrino oscillations, namely a model in which sterile neutrinos can propagate in a large compactified extra dimension and a model in which non-standard interactions (NSI) affect neutrino production and detection. We find that the Daya Bay ⊕ T2K data combination constrains the largest radius of the compactified extra dimensions to R ≲ 0.17 μm at 2σ C.L. (for the inverted ordering of the neutrino mass spectrum) and the relevant NSI parameters to the range O(10^-3)-O(10^-2), for particular choices of the charge-parity violating phases.

  5. Abstraction/Representation Theory for heterotic physical computing.

    Science.gov (United States)

    Horsman, D C

    2015-07-28

    We give a rigorous framework for the interaction of physical computing devices with abstract computation. Device and program are mediated by the non-logical representation relation; we give the conditions under which representation and device theory give rise to commuting diagrams between logical and physical domains, and the conditions for computation to occur. We give the interface of this new framework with currently existing formal methods, showing in particular its close relationship to refinement theory, and the implications for questions of meaning and reference in theoretical computer science. The case of hybrid computing is considered in detail, addressing in particular the example of an Internet-mediated social machine, and the abstraction/representation framework used to provide a formal distinction between heterotic and hybrid computing. This forms the basis for future use of the framework in formal treatments of non-standard physical computers.

  6. Particle accelerators from Big Bang physics to hadron therapy

    CERN Document Server

    Amaldi, Ugo

    2015-01-01

    The theoretical physicist Victor “Viki” Weisskopf, Director-General of CERN from 1961 to 1965, once said: “There are three kinds of physicists, namely the machine builders, the experimental physicists, and the theoretical physicists. […] The machine builders are the most important ones, because if they were not there, we would not get into this small-scale region of space. If we compare this with the discovery of America, the machine builders correspond to captains and ship builders who really developed the techniques at that time. The experimentalists were those fellows on the ships who sailed to the other side of the world and then landed on the new islands and wrote down what they saw. The theoretical physicists are those who stayed behind in Madrid and told Columbus that he was going to land in India.” Rather than focusing on the theoretical physicists, as most popular science books on particle physics do, this beautifully written and also entertaining book is different in that, firstly, the main foc...

  7. Physics of Double Pulse Irradiation of Targets For Proton Acceleration

    Science.gov (United States)

    Kerr, S.; Mo, M.; Masud, R.; Manzoor, L.; Tiedje, H.; Tsui, Y.; Fedosejevs, R.; Link, A.; Patel, P.; McLean, H.; Hazi, A.; Chen, H.; Ceurvorst, L.; Norreys, P.

    2016-10-01

    Experiments have been carried out on double-pulse irradiation of μm-scale foil targets with varying preplasma conditions. Our experiment at the Titan Laser facility utilized two 700 fs, 1054 nm pulses, separated by 1 to 5 ps with a total energy of 100 J, and with 5-20% of the total energy contained within the first pulse. The proton spectra were measured with radiochromic film stacks and magnetic spectrometers. The prepulse energy was on the order of 10 mJ, which appears to have a moderating effect on the double-pulse enhancement of the proton beam. We have performed LSP PIC simulations to understand the double-pulse enhancement mechanism, as well as the role of preplasma in modifying the interaction. A 1D parameter study was done to isolate various aspects of the interaction, while 2D simulations provide more detailed physical insight and a better comparison with experimental data. Work by the Univ. of Alberta was supported by the Natural Sciences and Engineering Research Council of Canada. Work by LLNL was performed under the auspices of U.S. DOE under contract DE-AC52-07NA27344.

  8. Atomic physics: A milestone in quantum computing

    Science.gov (United States)

    Bartlett, Stephen D.

    2016-08-01

    Quantum computers require many quantum bits to perform complex calculations, but devices with more than a few bits are difficult to program. A device based on five atomic quantum bits shows a way forward. See Letter p.63

  9. Computing Algorithms for Nuffield Advanced Physics.

    Science.gov (United States)

    Summers, M. K.

    1978-01-01

    Defines all recurrence relations used in the Nuffield course, to solve first- and second-order differential equations, and describes a typical algorithm for computer generation of solutions. (Author/GA)
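
    As an illustration of the kind of recurrence relation the abstract describes, the sketch below advances a damped oscillator (a second-order differential equation rewritten as two first-order update rules applied step by step); the particular equation, coefficients and step size are illustrative assumptions, not the actual Nuffield algorithms.

        # Hypothetical example: a damped oscillator m*x'' = -k*x - b*x',
        # advanced with the recurrences x_{n+1} = x_n + v_n*dt, v_{n+1} = v_n + a_n*dt.
        def simulate(x=1.0, v=0.0, k=1.0, b=0.1, m=1.0, dt=0.01, steps=1000):
            history = []
            for _ in range(steps):
                a = (-k * x - b * v) / m      # acceleration from the current state
                x, v = x + v * dt, v + a * dt # apply the recurrence relations
                history.append(x)
            return history

        print(simulate()[-1])                 # displacement after 10 s of simulated time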

  10. Physical design and cooling test of C-band standing wave accelerating tube

    Institute of Scientific and Technical Information of China (English)

    Bai Wei; Xu Zhou; Jin Xiao; Li Ming

    2006-01-01

    The physical design and cooling test of a C-band 2 MeV standing wave (SW) accelerating tube are described in this paper. The designed accelerating structure consists of a 3-cell buncher and a 4-cell accelerating section with a total length of about 163 mm, excited by a 1 MW magnetron. Dynamics simulations show that a beam pulse current of about 150 mA and a capture efficiency of 30% can be achieved. By means of a nonlinear Gaussian fit to the electron transverse distribution, the FWHM (full width at half maximum of the density distribution) beam-spot diameter is about 0.55 mm. Cooling tests of the accelerating tube show that the cavity frequencies are tuned to 5527 MHz and the field distribution of the bunching section is about 3:9:10.

  11. Accelerating Computation of Large Biological Datasets using MapReduce Framework.

    Science.gov (United States)

    Wang, Chao; Dai, Dong; Li, Xi; Wang, Aili; Zhou, Xuehai

    2016-04-05

    The maximal information coefficient (MIC) has been proposed to discover relationships and associations between pairs of variables. Accelerating the MIC calculation poses significant challenges for bioinformatics scientists, especially in genome sequencing and biological annotation. In this paper we explore a parallel approach which uses the MapReduce framework to improve the computing efficiency and throughput of the MIC computation. The acceleration system includes biological data storage on HDFS, preprocessing algorithms, a distributed memory cache mechanism, and the partition of MapReduce jobs. Based on this acceleration approach, we extend the traditional two-variable algorithm to a multiple-variable algorithm. The experimental results show that our parallel solution provides a linear speedup compared with the original algorithm without affecting correctness or sensitivity.
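
    A rough sketch of the map/reduce split described above is given below, using Python's multiprocessing module as a stand-in for a Hadoop MapReduce job and Pearson correlation as a stand-in for the MIC statistic; both substitutions are assumptions made purely to keep the example self-contained and runnable, and the code is not the authors' implementation.

        from itertools import combinations
        from multiprocessing import Pool
        import numpy as np

        def map_pair(task):
            (i, j), xi, xj = task
            r = float(np.corrcoef(xi, xj)[0, 1])   # per-pair "map" computation
            return (i, j), r

        def all_pairs(data):
            tasks = [((i, j), data[i], data[j])
                     for i, j in combinations(range(len(data)), 2)]
            with Pool() as pool:                   # the "reduce" step just collects results
                return dict(pool.map(map_pair, tasks))

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            variables = rng.normal(size=(6, 200))  # 6 variables, 200 samples each
            print(len(all_pairs(variables)))       # 15 variable pairs processed in parallel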

  12. Physical Interpretation of the Schott Energy of An Accelerating Point Charge and the Question of Whether a Uniformly Accelerating Charge Radiates

    Science.gov (United States)

    Rowland, David R.

    2010-01-01

    A core topic in graduate courses in electrodynamics is the description of radiation from an accelerated charge and the associated radiation reaction. However, contemporary papers still express a diversity of views on the question of whether or not a uniformly accelerating charge radiates, suggesting that a complete "physical" understanding of the…

  13. Accelerator Technology and High Energy Physic Experiments, WILGA 2012; EuCARD Sessions

    CERN Document Server

    Romaniuk, R S

    2012-01-01

    Wilga Sessions on HEP experiments, astroparticle physics and accelerator technology were organized under the umbrella of the EU FP7 Project EuCARD – European Coordination for Accelerator Research and Development. The paper is the second part (out of five) of the research survey of the WILGA Symposium work, May 2012 Edition, concerned with accelerator technology and high energy physics experiments. It presents a digest of chosen technical work results shown by young researchers from different technical universities from this country during the XXXth Jubilee SPIE-IEEE Wilga 2012, May Edition, symposium on Photonics and Web Engineering. Topical tracks of the symposium embraced, among others, nanomaterials and nanotechnologies for photonics, sensory and nonlinear optical fibers, object oriented design of hardware, photonic metrology, optoelectronics and photonics applications, photonics-electronics co-design, optoelectronic and electronic systems for astronomy and high energy physics experiments, JET and pi-of-the ...

  14. On the computational capabilities of physical systems ; 2, relationship with conventional computer science

    CERN Document Server

    Wolpert, D H

    2000-01-01

    In the first of this pair of papers, it was proven that no physical computer can correctly carry out all computational tasks that can be posed to it. The generality of this result follows from its use of a novel definition of computation, ``physical computation''. This second paper of the pair elaborates the mathematical structure and impossibility results associated with physical computation. Analogues of Chomsky hierarchy results concerning universal Turing Machines and the Halting theorem are derived, as are results concerning the (im)possibility of certain kinds of error-correcting codes. In addition, an analogue of algorithmic information complexity, ``prediction complexity'', is elaborated. A task-independent bound is derived on how much the prediction complexity of a computational task can differ for two different universal physical computers used to solve that task, a bound similar to the ``encoding'' bound governing how much the algorithmic information complexity of a Turing machine calculation can...

  15. Adaptation and optimization of basic operations for an unstructured mesh CFD algorithm for computation on massively parallel accelerators

    Science.gov (United States)

    Bogdanov, P. B.; Gorobets, A. V.; Sukov, S. A.

    2013-08-01

    The design of efficient algorithms for large-scale gas dynamics computations with hybrid (heterogeneous) computing systems whose high performance relies on massively parallel accelerators is addressed. A high-order accurate finite volume algorithm with polynomial reconstruction on unstructured hybrid meshes is used to compute compressible gas flows in domains of complex geometry. The basic operations of the algorithm are implemented in detail for massively parallel accelerators, including AMD and NVIDIA graphics processing units (GPUs). Major optimization approaches and a computation transfer technique are covered. The underlying programming tool is the Open Computing Language (OpenCL) standard, which runs on accelerators of various architectures, both existing and emerging.
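
    The basic operations of such edge-based finite-volume schemes are dominated by gather/scatter traffic over the unstructured mesh connectivity. The sketch below illustrates that access pattern with NumPy scatter-adds rather than OpenCL kernels; the central flux formula and the tiny mesh are illustrative assumptions, not the authors' scheme.

        import numpy as np

        def accumulate_residual(u, edges, area):
            """u: cell values; edges: (n_edges, 2) cell indices; area: face areas."""
            left, right = edges[:, 0], edges[:, 1]
            flux = 0.5 * (u[left] + u[right]) * area   # gather: one flux per face
            residual = np.zeros_like(u)
            np.add.at(residual, left,  -flux)          # scatter back to the owning cells
            np.add.at(residual, right, +flux)
            return residual

        u = np.array([1.0, 2.0, 3.0, 4.0])
        edges = np.array([[0, 1], [1, 2], [2, 3], [3, 0]])
        print(accumulate_residual(u, edges, np.ones(4)))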

  16. Learning physics with a computer algebra system

    NARCIS (Netherlands)

    Savelsbergh, E.R.; Jong, de T.; Ferguson-Hessler, M.G.M.

    2000-01-01

    To become proficient problem-solvers, physics students need to form a coherent and flexible understanding of problem situations with which they are confronted. Still, many students have only a limited representation of the problems on which they are working. Therefore, an instructional approach was

  17. Computational Plasma Physics at the Bleeding Edge: Simulating Kinetic Turbulence Dynamics in Fusion Energy Sciences

    Science.gov (United States)

    Tang, William

    2013-04-01

    Advanced computing is generally recognized to be an increasingly vital tool for accelerating progress in scientific research in the 21st Century. The imperative is to translate the combination of the rapid advances in super-computing power together with the emergence of effective new algorithms and computational methodologies to help enable corresponding increases in the physics fidelity and the performance of the scientific codes used to model complex physical systems. If properly validated against experimental measurements and verified with mathematical tests and computational benchmarks, these codes can provide more reliable predictive capability for the behavior of complex systems, including fusion energy relevant high temperature plasmas. The magnetic fusion energy research community has made excellent progress in developing advanced codes for which computer run-time and problem size scale very well with the number of processors on massively parallel supercomputers. A good example is the effective usage of the full power of modern leadership class computational platforms from the terascale to the petascale and beyond to produce nonlinear particle-in-cell simulations which have accelerated progress in understanding the nature of plasma turbulence in magnetically-confined high temperature plasmas. Illustrative results provide great encouragement for being able to include increasingly realistic dynamics in extreme-scale computing campaigns to enable predictive simulations with unprecedented physics fidelity. Some illustrative examples will be presented of the algorithmic progress from the magnetic fusion energy sciences area in dealing with low memory per core extreme scale computing challenges for the current top 3 supercomputers worldwide. These include advanced CPU systems (such as the IBM-Blue-Gene-Q system and the Fujitsu K Machine) as well as the GPU-CPU hybrid system (Titan).

  18. Improving Quality of Service and Reducing Power Consumption with WAN accelerator in Cloud Computing Environments

    Directory of Open Access Journals (Sweden)

    Shin-ichi Kuribayashi

    2013-02-01

    Full Text Available The widespread use of cloud computing services is expected to degrade the Quality of Service and to increase the power consumption of ICT devices, since the distance to a server becomes longer than before. Migration of virtual machines over a wide area can solve many problems such as load balancing and power saving in cloud computing environments. This paper proposes to dynamically apply a WAN accelerator within the network when a virtual machine is moved to a distant center, in order to prevent the degradation in performance after live migration of virtual machines over a wide area. mSCTP-based data transfer using different TCP connections before and after migration is proposed in order to use a currently available WAN accelerator. This paper does not consider the performance degradation of live migration itself. The paper then proposes to reduce the power consumption of ICT devices by actively installing WAN accelerators as part of cloud resources and by temporarily increasing the packet transfer rate of the communication link. It is demonstrated that the power consumption with a WAN accelerator could be reduced to one-tenth of that without a WAN accelerator.

  19. Ultrasound window-modulated compounding Nakagami imaging: Resolution improvement and computational acceleration for liver characterization.

    Science.gov (United States)

    Ma, Hsiang-Yang; Lin, Ying-Hsiu; Wang, Chiao-Yin; Chen, Chiung-Nien; Ho, Ming-Chih; Tsui, Po-Hsiang

    2016-08-01

    Ultrasound Nakagami imaging is an attractive method for visualizing changes in envelope statistics. Window-modulated compounding (WMC) Nakagami imaging was reported to improve image smoothness. The sliding window technique is typically used for constructing ultrasound parametric and Nakagami images. Using a large window overlap ratio may improve the WMC Nakagami image resolution but reduces computational efficiency. Therefore, the objectives of this study include: (i) exploring the effects of the window overlap ratio on the resolution and smoothness of WMC Nakagami images; (ii) proposing a fast algorithm based on the convolution operator (FACO) to accelerate WMC Nakagami imaging. Computer simulations and preliminary clinical tests on liver fibrosis samples (n=48) were performed to validate FACO-based WMC Nakagami imaging. The results demonstrated that the width of the autocorrelation function and the parameter distribution of the WMC Nakagami image narrow as the window overlap ratio increases. One-pixel shifting (i.e., sliding the window over the image data in steps of one pixel for parametric imaging) as the maximum overlap ratio significantly improves the WMC Nakagami image quality. Concurrently, the proposed FACO method combined with a computational platform that optimizes the matrix computation can accelerate WMC Nakagami imaging, allowing the detection of liver fibrosis-induced changes in envelope statistics. FACO-accelerated WMC Nakagami imaging is a new-generation Nakagami imaging technique with improved image quality and fast computation.
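
    The key observation behind convolution-based acceleration of sliding-window parametric imaging is that the local moments needed for a Nakagami m estimate can be produced by box filters, so one-pixel shifting costs a few image-wide convolutions instead of an explicit loop over window positions. The sketch below illustrates this with a moment-based m estimator; the window size and the estimator itself are illustrative assumptions, not the FACO implementation.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def nakagami_map(envelope, win=7):
            """Moment-based Nakagami m map computed with box-filter (convolution) moments."""
            e2 = envelope ** 2
            m1 = uniform_filter(e2, size=win)        # local mean of E^2
            m2 = uniform_filter(e2 ** 2, size=win)   # local mean of E^4
            var = np.maximum(m2 - m1 ** 2, 1e-12)    # local variance of E^2
            return m1 ** 2 / var                     # m = E[E^2]^2 / Var(E^2)

        rng = np.random.default_rng(1)
        envelope = rng.rayleigh(scale=1.0, size=(128, 128))  # Rayleigh speckle corresponds to m = 1
        print(nakagami_map(envelope).mean())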

  20. Molecular dynamics-based virtual screening: accelerating the drug discovery process by high-performance computing.

    Science.gov (United States)

    Ge, Hu; Wang, Yu; Li, Chanjuan; Chen, Nanhao; Xie, Yufang; Xu, Mengyan; He, Yingyan; Gu, Xinchun; Wu, Ruibo; Gu, Qiong; Zeng, Liang; Xu, Jun

    2013-10-28

    High-performance computing (HPC) has become a state strategic technology in a number of countries. One hypothesis is that HPC can accelerate biopharmaceutical innovation. Our experimental data demonstrate that HPC can significantly accelerate biopharmaceutical innovation by employing molecular dynamics-based virtual screening (MDVS). Without HPC, MDVS for a 10K compound library with tens of nanoseconds of MD simulations requires years of computer time. In contrast, a state-of-the-art HPC system can be 600 times faster than an eight-core PC server in screening a typical drug target (which contains about 40K atoms). Also, careful design of the GPU/CPU architecture can reduce the HPC costs. However, the communication cost of parallel computing is a bottleneck that acts as the main limit on further virtual-screening improvements for drug innovation.

  1. Numerical computation of special functions with applications to physics

    CSIR Research Space (South Africa)

    Motsepe, K

    2008-09-01

    Full Text Available Students of mathematical physics, engineering, natural and biological sciences sometimes need to use special functions that are not found in ordinary mathematical software. In this paper a simple universal numerical algorithm is developed to compute...

  2. Towards a Model for Computing in European Astroparticle Physics

    CERN Document Server

    Berghöfer, T; Allen, B; Beckmann, V; Chiarusi, T; Delfino, M; Hesping, S; Chudoba, J; Dell'Agnello, L; Katsanevas, S; Lamanna, G; Lemrani, R; Margiotta, A; Maron, G; Palomba, C; Russo, G; Wegner, P

    2015-01-01

    Current and future astroparticle physics experiments are operated or are being built to observe highly energetic particles, high energy electromagnetic radiation and gravitational waves originating from all kinds of cosmic sources. The data volumes taken by the experiments are large and expected to grow significantly during the coming years. This is a result of advanced research possibilities and improved detector technology. To cope with the substantially increasing data volumes of astroparticle physics projects it is important to understand the future needs for computing resources in this field. Providing these resources constitutes a larger fraction of the overall running costs of future infrastructures. This document presents the results of a survey made by APPEC with the help of computing experts of major projects and future initiatives in astroparticle physics, representatives of current Tier-1 and Tier-2 LHC computing centers, as well as specifically astroparticle physics computing centers, e.g. the Al...

  3. Intelligent physical blocks for introducing computer programming in developing countries

    CSIR Research Space (South Africa)

    Smith, Adrew C

    2007-05-01

    Full Text Available This paper reports on the evaluation of a novel affordable system that incorporates intelligent physical blocks to introduce illiterate children in developing countries to the logical thinking process required in computer programming. Both...

  4. Experimental, Theoretical and Computational Studies of Plasma-Based Concepts for Future High Energy Accelerators

    Energy Technology Data Exchange (ETDEWEB)

    Joshi, Chan [Univ. of California, Los Angeles, CA (United States); Mori, W. [Univ. of California, Los Angeles, CA (United States)

    2013-10-21

    This is the final report on DOE grant number DE-FG02-92ER40727, titled “Experimental, Theoretical and Computational Studies of Plasma-Based Concepts for Future High Energy Accelerators.” During this grant period the UCLA program on Advanced Plasma Based Accelerators, headed by Professor C. Joshi, has made many key scientific advances and trained a generation of students, many of whom have stayed in this research field and even started research programs of their own. In this final report, however, we focus on the last three years of the grant and report on the scientific progress made in each of the four tasks listed under this grant. The four tasks are: Plasma Wakefield Accelerator Research at FACET, SLAC National Accelerator Laboratory; In-House Research at UCLA’s Neptune and 20 TW Laser Laboratories; Laser-Wakefield Acceleration (LWFA) in the Self-Guided Regime: Experiments at the Callisto Laser at LLNL; and Theory and Simulations. Major scientific results have been obtained in each of the four tasks described in this report. These have led to publications in prestigious scientific journals, the graduation and continued training of high-quality Ph.D. students, and have kept the U.S. at the forefront of the plasma-based accelerator research field.

  5. Transforming High School Physics with Modeling and Computation

    CERN Document Server

    Aiken, John M

    2013-01-01

    The Engage to Excel (PCAST) report, the National Research Council's Framework for K-12 Science Education, and the Next Generation Science Standards all call for transforming the physics classroom into an environment that teaches students real scientific practices. This work describes the early stages of one such attempt to transform a high school physics classroom. Specifically, a series of model-building and computational modeling exercises were piloted in a ninth grade Physics First classroom. Student use of computation was assessed using a proctored programming assignment, where the students produced and discussed a computational model of a baseball in motion via a high-level programming environment (VPython). Student views on computation and its link to mechanics were assessed with a written essay and a series of think-aloud interviews. This pilot study shows computation's ability to connect scientific practice to the high school science classroom.

  6. High-Precision Computation and Mathematical Physics

    Energy Technology Data Exchange (ETDEWEB)

    Bailey, David H.; Borwein, Jonathan M.

    2008-11-03

    At the present time, IEEE 64-bit floating-point arithmetic is sufficiently accurate for most scientific applications. However, for a rapidly growing body of important scientific computing applications, a higher level of numeric precision is required. Such calculations are facilitated by high-precision software packages that include high-level language translation modules to minimize the conversion effort. This paper presents a survey of recent applications of these techniques and provides some analysis of their numerical requirements. These applications include supernova simulations, climate modeling, planetary orbit calculations, Coulomb n-body atomic systems, scattering amplitudes of quarks, gluons and bosons, nonlinear oscillator theory, Ising theory, quantum field theory and experimental mathematics. We conclude that high-precision arithmetic facilities are now an indispensable component of a modern large-scale scientific computing environment.
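
    In Python, this kind of beyond-double precision arithmetic is available through packages such as mpmath; the snippet below is a generic illustration of working at 50 decimal digits and is not taken from the paper.

        from mpmath import mp, mpf, quad, sin, exp, pi

        mp.dps = 50                                      # 50 decimal digits of working precision
        print(mpf(2) ** mpf("0.5"))                      # sqrt(2) to 50 digits
        print(quad(lambda x: exp(-x * x), [0, mp.inf]))  # Gaussian integral, equals sqrt(pi)/2
        print(sin(pi))                                   # ~1e-50, not ~1e-16 as in IEEE doubles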

  7. Computational Physics and Drug Discovery for Infectious Diseases

    Science.gov (United States)

    McCammon, J. Andrew

    2011-03-01

    This lecture will provide a general introduction to some of the ways that modern computational physics is contributing to the discovery of new pharmaceuticals, with special emphasis on drugs for infectious diseases. The basic sciences and computing technologies involved have advanced to the point that physics-based simulations of drug targets are now yielding truly valuable suggestions for new compounds. Supported in part by NSF, NIH, HHMI, CTBP, NBCR, and SDSC.

  8. Python: a language for computational physics

    Science.gov (United States)

    Borcherds, P. H.

    2007-07-01

    Python is a relatively new computing language, created by Guido van Rossum [A.S. Tanenbaum, R. van Renesse, H. van Staveren, G.J. Sharp, S.J. Mullender, A.J. Jansen, G. van Rossum, Experiences with the Amoeba distributed operating system, Communications of the ACM 33 (1990) 46-63; also on-line at http://www.cs.vu.nl/pub/amoeba/. [6

  9. Future accelerators (?)

    Energy Technology Data Exchange (ETDEWEB)

    John Womersley

    2003-08-21

    I describe the future accelerator facilities that are currently foreseen for electroweak scale physics, neutrino physics, and nuclear structure. I will explore the physics justification for these machines, and suggest how the case for future accelerators can be made.

  10. Beam polarization at the ILC. The physics impact and the accelerator solutions

    Energy Technology Data Exchange (ETDEWEB)

    Aurand, B. [Bonn Univ. (Germany). Phys. Inst.; Bailey, I. [Liverpool Univ. (United Kingdom). Cockcroft Inst.; Bartels, C. [DESY, Hamburg (Germany); DESY, Zeuthen (DE)] (and others)

    2009-03-15

    In this contribution, accelerator solutions for polarized beams and their impact on physics measurements are discussed. The focus is on the physics requirements for precision polarimetry near the interaction point and their realization with polarized sources. Based on the ILC baseline programme as described in the Reference Design Report (RDR), recent developments are discussed and evaluated taking into account physics runs at beam energies between 100 GeV and 250 GeV, as well as calibration runs on the Z-pole and options such as the 1 TeV upgrade and GigaZ. (orig.)

  11. Effects of different computer typing speeds on acceleration and peak contact pressure of the fingertips during computer typing.

    Science.gov (United States)

    Yoo, Won-Gyu

    2015-01-01

    [Purpose] This study showed the effects of different computer typing speeds on the acceleration and peak contact pressure of the fingertips during computer typing. [Subjects] Twenty-one male computer workers voluntarily consented to participate in this study. They consisted of 7 workers who could type 200-300 characters/minute, 7 workers who could type 300-400 characters/minute, and 7 workers who could type 400-500 characters/minute. [Methods] The acceleration and peak contact pressure of the fingertips were measured for the different typing-speed groups using an accelerometer and the CONFORMat system. [Results] The fingertip contact pressure was increased in the high typing speed group compared with the low and medium typing speed groups. The fingertip acceleration was increased in the high typing speed group compared with the low and medium typing speed groups. [Conclusion] The results of the present study indicate that a fast typing speed causes continuous pressure stress to be applied to the fingers, thereby creating pain in the fingers.

  12. Neural Computing in High Energy Physics

    Institute of Scientific and Technical Information of China (English)

    O.D.Joukov; N.D.Rishe

    2001-01-01

    Artificial neural networks (ANN) are now widely and successfully used as tools for high energy physics. The paper covers two aspects. First, mapping ANNs onto the proposed ring and linear systolic arrays provides an efficient implementation of VLSI-based architectures, since in this case all connections among processing elements are local and regular. Second, the algorithmic organization of such structures on the basis of modular algebra is discussed, whose use can provide an essential increase in system throughput.

  13. Acceleration of Radiance for Lighting Simulation by Using Parallel Computing with OpenCL

    Energy Technology Data Exchange (ETDEWEB)

    Zuo, Wangda; McNeil, Andrew; Wetter, Michael; Lee, Eleanor

    2011-09-06

    We report on the acceleration of annual daylighting simulations for fenestration systems in the Radiance ray-tracing program. The algorithm was optimized to reduce both the redundant data input/output operations and the floating-point operations. To further accelerate the simulation speed, the calculation for matrix multiplications was implemented using parallel computing on a graphics processing unit. We used OpenCL, which is a cross-platform parallel programming language. Numerical experiments show that the combination of the above measures can speed up the annual daylighting simulations 101.7 times or 28.6 times when the sky vector has 146 or 2306 elements, respectively.

  14. Acceleration of the matrix multiplication of Radiance three phase daylighting simulations with parallel computing on heterogeneous hardware of personal computer

    Energy Technology Data Exchange (ETDEWEB)

    Zuo, Wangda [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); McNeil, Andrew [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Wetter, Michael [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Lee, Eleanor S. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2013-05-23

    Building designers are increasingly relying on complex fenestration systems to reduce energy consumed for lighting and HVAC in low energy buildings. Radiance, a lighting simulation program, has been used to conduct daylighting simulations for complex fenestration systems. Depending on the configuration, the simulation can take hours or even days on a personal computer. This paper describes how to accelerate the matrix multiplication portion of a Radiance three-phase daylight simulation by conducting parallel computing on the heterogeneous hardware of a personal computer. The algorithm was optimized and the computational part was implemented in parallel using OpenCL. The speed of the new approach was evaluated using various daylighting simulation cases on a multicore central processing unit and a graphics processing unit. Based on the measurements and analysis of the time usage for the Radiance daylighting simulation, further speedups can be achieved by using fast I/O devices and storing the data in a binary format.
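
    The essence of the acceleration is that an annual three-phase run applies a fixed flux-transfer matrix to one sky vector per timestep, so stacking the sky vectors turns thousands of matrix-vector products into a single matrix-matrix product that maps well onto BLAS or a GPU. The NumPy sketch below illustrates the idea; the dimensions and variable names are illustrative assumptions, not Radiance's actual data layout.

        import numpy as np

        n_sensors, n_sky_patches, n_hours = 1000, 146, 8760
        rng = np.random.default_rng(0)
        transfer = rng.random((n_sensors, n_sky_patches))   # combined flux-transfer matrix (illustrative)
        sky = rng.random((n_sky_patches, n_hours))          # one sky vector per timestep

        # Loop formulation: one matrix-vector product per hour of the year ...
        annual_loop = np.stack([transfer @ sky[:, h] for h in range(n_hours)], axis=1)
        # ... versus the batched formulation: a single matrix-matrix product.
        annual_batched = transfer @ sky
        assert np.allclose(annual_loop, annual_batched)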

  15. Accelerator and detector physics at the Bern medical cyclotron and its beam transport line

    Directory of Open Access Journals (Sweden)

    Auger Martin

    2016-03-01

    Full Text Available The cyclotron laboratory for radioisotope production and multi-disciplinary research at the Bern University Hospital (Inselspital) is based on an 18-MeV proton accelerator, equipped with a specifically conceived 6-m long external beam line ending in a separate bunker. This facility allows daily positron emission tomography (PET) radioisotope production and research activities to run in parallel. Some of the latest developments on accelerator and detector physics are reported. They encompass novel detectors for beam monitoring and studies of low current beams.

  16. Ultraviolet Coronagraph Spectroscopy: A Key Capability for Understanding the Physics of Solar Wind Acceleration

    CERN Document Server

    Cranmer, S R; Alexander, D; Bhattacharjee, A; Breech, B A; Brickhouse, N S; Chandran, B D G; Dupree, A K; Esser, R; Gary, S P; Hollweg, J V; Isenberg, P A; Kahler, S W; Ko, Y -K; Laming, J M; Landi, E; Matthaeus, W H; Murphy, N A; Oughton, S; Raymond, J C; Reisenfeld, D B; Suess, S T; van Ballegooijen, A A; Wood, B E

    2010-01-01

    Understanding the physical processes responsible for accelerating the solar wind requires detailed measurements of the collisionless plasma in the extended solar corona. Some key clues about these processes have come from instruments that combine the power of an ultraviolet (UV) spectrometer with an occulted telescope. This combination enables measurements of ion emission lines far from the bright solar disk, where most of the solar wind acceleration occurs. Although the UVCS instrument on SOHO made several key discoveries, many questions remain unanswered because its capabilities were limited. This white paper summarizes these past achievements and also describes what can be accomplished with next-generation instrumentation of this kind.

  17. Physical Activities Monitoring Using Wearable Acceleration Sensors Attached to the Body.

    Science.gov (United States)

    Arif, Muhammad; Kattan, Ahmed

    2015-01-01

    Monitoring physical activities by using wireless sensors is helpful for identifying postural orientation and movements in the real-life environment. A simple and robust method based on time domain features to identify the physical activities is proposed in this paper; it uses sensors placed on the subjects' wrist, chest and ankle. A feature set based on time domain characteristics of the acceleration signal recorded by acceleration sensors is proposed for the classification of twelve physical activities. Nine subjects performed twelve different types of physical activities, including sitting, standing, walking, running, cycling, Nordic walking, ascending stairs, descending stairs, vacuum cleaning, ironing clothes and jumping rope, and lying down (resting state). Their ages were 27.2 ± 3.3 years and their body mass index (BMI) was 25.11 ± 2.6 kg/m2. Classification results demonstrated a high validity showing precision (a positive predictive value) and recall (sensitivity) of more than 95% for all physical activities. The overall classification accuracy for a combined feature set of three sensors is 98%. The proposed framework can be used to monitor the physical activities of a subject, which can be very useful for the health professional to assess the physical activity of healthy individuals as well as patients.
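
    A sketch of the kind of per-window time-domain features such a classifier can be trained on is given below; the sampling rate, window length and the exact feature list are illustrative assumptions, not the feature set used in the paper.

        import numpy as np

        def window_features(acc, fs=100, win_s=5.12):
            """acc: (n_samples, 3) accelerometer signal; returns one feature row per window."""
            win = int(fs * win_s)
            rows = []
            for start in range(0, len(acc) - win + 1, win):
                w = acc[start:start + win]
                mag = np.linalg.norm(w, axis=1)                  # signal magnitude per sample
                rows.append(np.concatenate([w.mean(axis=0),      # per-axis mean
                                            w.std(axis=0),       # per-axis standard deviation
                                            [mag.mean(), mag.std()]]))
            return np.array(rows)

        acc = np.random.default_rng(0).normal(size=(3000, 3))    # stand-in for recorded data
        print(window_features(acc).shape)                        # (5, 8): 5 windows, 8 features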

  18. Physical Activities Monitoring Using Wearable Acceleration Sensors Attached to the Body.

    Directory of Open Access Journals (Sweden)

    Muhammad Arif

    Full Text Available Monitoring physical activities by using wireless sensors is helpful for identifying postural orientation and movements in the real-life environment. A simple and robust method based on time domain features to identify the physical activities is proposed in this paper; it uses sensors placed on the subjects' wrist, chest and ankle. A feature set based on time domain characteristics of the acceleration signal recorded by acceleration sensors is proposed for the classification of twelve physical activities. Nine subjects performed twelve different types of physical activities, including sitting, standing, walking, running, cycling, Nordic walking, ascending stairs, descending stairs, vacuum cleaning, ironing clothes and jumping rope, and lying down (resting state). Their ages were 27.2 ± 3.3 years and their body mass index (BMI) was 25.11 ± 2.6 kg/m2. Classification results demonstrated a high validity showing precision (a positive predictive value) and recall (sensitivity) of more than 95% for all physical activities. The overall classification accuracy for a combined feature set of three sensors is 98%. The proposed framework can be used to monitor the physical activities of a subject, which can be very useful for the health professional to assess the physical activity of healthy individuals as well as patients.

  19. Physical Activities Monitoring Using Wearable Acceleration Sensors Attached to the Body

    Science.gov (United States)

    2015-01-01

    Monitoring physical activities by using wireless sensors is helpful for identifying postural orientation and movements in the real-life environment. A simple and robust method based on time domain features to identify the physical activities is proposed in this paper; it uses sensors placed on the subjects’ wrist, chest and ankle. A feature set based on time domain characteristics of the acceleration signal recorded by acceleration sensors is proposed for the classification of twelve physical activities. Nine subjects performed twelve different types of physical activities, including sitting, standing, walking, running, cycling, Nordic walking, ascending stairs, descending stairs, vacuum cleaning, ironing clothes and jumping rope, and lying down (resting state). Their ages were 27.2 ± 3.3 years and their body mass index (BMI) was 25.11 ± 2.6 kg/m2. Classification results demonstrated a high validity showing precision (a positive predictive value) and recall (sensitivity) of more than 95% for all physical activities. The overall classification accuracy for a combined feature set of three sensors is 98%. The proposed framework can be used to monitor the physical activities of a subject, which can be very useful for the health professional to assess the physical activity of healthy individuals as well as patients. PMID:26203909

  20. The Competence of Physical Education Teachers in Computer Use

    Science.gov (United States)

    Yaman, Metin

    2007-01-01

    Computer-based and web-based applications are primary educational tools that are used in order to motivate students in today's schools. In the physical education field, educational applications related with the computer and the internet became more prevalent in order to present visual and interactive learning processes. On the other hand, some…

  1. Elementary computer physics, a concentrated one-week course

    DEFF Research Database (Denmark)

    Christiansen, Gunnar Dan

    1978-01-01

    A concentrated one-week course (8 hours per day in 5 days) in elementary computer physics for students in their freshman university year is described. The aim of the course is to remove the constraints on traditional physics courses imposed by the necessity of only dealing with problems that have...

  2. Assessing the Integration of Computational Modeling and ASU Modeling Instruction in the High School Physics Classroom

    Science.gov (United States)

    Aiken, John; Schatz, Michael; Burk, John; Caballero, Marcos; Thoms, Brian

    2012-03-01

    We describe the assessment of computational modeling in a ninth grade classroom in the context of the Arizona Modeling Instruction physics curriculum. Using a high-level programming environment (VPython), students develop computational models to predict the motion of objects under a variety of physical situations (e.g., constant net force), to simulate real-world phenomena (e.g., a car crash), and to visualize abstract quantities (e.g., acceleration). The impact of teaching computation is evaluated through a proctored assignment that asks the students to complete a provided program to represent the correct motion. Using questions isomorphic to the Force Concept Inventory, we gauge students' understanding of force in relation to the simulation. The students are given an open-ended essay question that asks them to explain the steps they would use to model a physical situation. We also investigate the attitudes and prior experiences of each student using the Computational Modeling in Physics Attitudinal Student Survey (COMPASS) developed at Georgia Tech as well as a prior computational experiences survey.
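
    The proctored assignment centres on an update loop of the sort sketched below: predicting motion under a constant net force with simple Euler-Cromer steps. The sketch uses plain Python tuples instead of VPython vectors, and the particular force, initial conditions and step size are illustrative, not the classroom assignment itself.

        m = 0.15                      # mass in kg
        F = (0.0, -m * 9.8, 0.0)      # constant net force: gravity only
        r = (0.0, 1.0, 0.0)           # initial position (m)
        v = (4.0, 3.0, 0.0)           # initial velocity (m/s)
        t, dt = 0.0, 0.01

        while r[1] > 0.0:             # step until the ball returns to the ground
            a = tuple(Fi / m for Fi in F)
            v = tuple(vi + ai * dt for vi, ai in zip(v, a))   # velocity update
            r = tuple(ri + vi * dt for ri, vi in zip(r, v))   # position update with new velocity
            t += dt

        print(f"lands near x = {r[0]:.2f} m after t = {t:.2f} s")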

  3. Computer Self-Efficacy, Computer Anxiety, Performance and Personal Outcomes of Turkish Physical Education Teachers

    Science.gov (United States)

    Aktag, Isil

    2015-01-01

    The purpose of this study is to determine the computer self-efficacy, performance outcome, personal outcome, and affect and anxiety level of physical education teachers. Influence of teaching experience, computer usage and participation of seminars or in-service programs on computer self-efficacy level were determined. The subjects of this study…

  5. Accelerated multidimensional radiofrequency pulse design for parallel transmission using concurrent computation on multiple graphics processing units.

    Science.gov (United States)

    Deng, Weiran; Yang, Cungeng; Stenger, V Andrew

    2011-02-01

    Multidimensional radiofrequency (RF) pulses are of current interest because of their promise for improving high-field imaging and for optimizing parallel transmission methods. One major drawback is that the computation time of numerically designed multidimensional RF pulses increases rapidly with their resolution and number of transmitters. This is critical because the construction of multidimensional RF pulses often needs to be in real time. The use of graphics processing units for computations is a recent approach for accelerating image reconstruction applications. We propose the use of graphics processing units for the design of multidimensional RF pulses including the utilization of parallel transmitters. Using a desktop computer with four NVIDIA Tesla C1060 computing processors, we found acceleration factors on the order of 20 for standard eight-transmitter two-dimensional spiral RF pulses with a 64 × 64 excitation resolution and a 10-μsec dwell time. We also show that even greater acceleration factors can be achieved for more complex RF pulses. Copyright © 2010 Wiley-Liss, Inc.

  6. Convergence acceleration for vector sequences and applications to computational fluid dynamics

    Science.gov (United States)

    Sidi, Avram; Celestina, Mark L.

    1990-01-01

    Some recent developments in convergence acceleration methods for vector sequences are reviewed. The methods considered are the minimal polynomial extrapolation, the reduced rank extrapolation, and the modified minimal polynomial extrapolation. The vector sequences to be accelerated are those that are obtained from the iterative solution of linear or nonlinear systems of equations. The convergence and stability properties of these methods as well as different ways of numerical implementation are discussed in detail. Based on the convergence and stability results, strategies that are useful in practical applications are suggested. Two applications to computational fluid mechanics involving the three-dimensional Euler equations for ducted and external flows are considered. The numerical results demonstrate the usefulness of the methods in accelerating the convergence of the time marching techniques in the solution of steady state problems.
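
    Minimal polynomial extrapolation, one of the methods reviewed, can be sketched in a few lines: form the differences of the iterates, solve a small least-squares problem for the extrapolation weights, and combine the iterates. The test problem below (a contracting linear fixed-point iteration) is an illustrative assumption, not an example from the paper.

        import numpy as np

        def mpe(X):
            """X: columns x_0 ... x_{k+1} of the vector sequence; returns the MPE limit estimate."""
            U = np.diff(X, axis=1)                       # u_j = x_{j+1} - x_j
            c, *_ = np.linalg.lstsq(U[:, :-1], -U[:, -1], rcond=None)
            c = np.append(c, 1.0)                        # c_k = 1 by convention
            gamma = c / c.sum()                          # weights summing to one
            return X[:, :len(gamma)] @ gamma             # weighted combination of iterates

        rng = np.random.default_rng(0)
        A = 0.3 * rng.random((5, 5)); b = rng.random(5)
        exact = np.linalg.solve(np.eye(5) - A, b)        # fixed point of x -> A x + b
        x = np.zeros(5); iterates = [x]
        for _ in range(6):                               # seven iterates x_0 ... x_6
            x = A @ x + b; iterates.append(x)
        X = np.column_stack(iterates)
        print(np.linalg.norm(mpe(X) - exact))            # near machine precision for this linear iteration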

  7. A contribution to the computation of the impedance in acceleration resonators

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Cong

    2016-05-15

    This thesis focuses on the numerical computation of the impedance in acceleration resonators and corresponding components. For this purpose, a dedicated solver based on the Finite Element Method (FEM) has been developed to compute the broadband impedance in accelerating components. In addition, various numerical approaches have been used to calculate the narrow-band impedance in superconducting radio frequency (RF) cavities. An overview of the calculated results as well as comparisons between the applied numerical approaches is provided. During the design phase of superconducting RF accelerating cavities and components, a challenging and difficult task is the determination of the impedance inside the accelerators with the help of proper computer simulations. The impedance describes the electromagnetic interaction between the particle beam and the accelerator and can affect the stability of the particle beam. For a superconducting RF accelerating cavity with waveguides (beam pipes and couplers), the narrow-band impedance, which is also called shunt impedance, corresponds to the eigenmodes of the cavity. It depends on the eigenfrequencies and the electromagnetic field distributions of the eigenmodes inside the cavity. On the other hand, the broadband impedance describes the interaction of the particle beam in the waveguides with its environment at arbitrary frequency and beam velocity. Together, the narrow-band and broadband impedances give a complete description of the impedance of the accelerator. In order to calculate the broadband longitudinal space charge impedance for acceleration components, a three-dimensional (3D) solver based on the FEM in frequency domain has been developed. To calculate the narrow-band impedance for superconducting RF cavities, we used various numerical approaches. Firstly, the eigenmode solver based on the Finite Integration Technique (FIT) and a parallel real-valued FEM (CEM3Dr) eigenmode solver based on

  8. Physical Optics Based Computational Imaging Systems

    Science.gov (United States)

    Olivas, Stephen Joseph

    There is an ongoing demand from the consumer, medical and military industries for lighter-weight, higher-resolution, wider field-of-view and extended depth-of-focus cameras. This leads to design trade-offs between performance and cost, be it size, weight, power, or expense. This has brought attention to finding new ways to extend the design space while adhering to cost constraints. Extending the functionality of an imager in order to achieve extraordinary performance is a common theme of computational imaging, a field of study which uses additional hardware along with tailored algorithms to formulate and solve inverse problems in imaging. This dissertation details four specific systems within this emerging field: a Fiber Bundle Relayed Imaging System, an Extended Depth-of-Focus Imaging System, a Platform Motion Blur Image Restoration System, and a Compressive Imaging System. The Fiber Bundle Relayed Imaging System is part of a larger project; the work presented in this thesis was to use image processing techniques to mitigate problems inherent to fiber bundle image relay and then to form high-resolution, wide field-of-view panoramas captured from multiple sensors within a custom state-of-the-art imager. The goals of the Extended Depth-of-Focus System were to characterize the angular and depth dependence of the PSF of a focal-swept imager in order to extend the depth over which the imaged scene remains acceptably focused. The goal of the Platform Motion Blur Image Restoration System was to build a system that captures a high signal-to-noise ratio (SNR), long-exposure image which is inherently blurred, while at the same time capturing motion data with additional optical sensors in order to deblur the degraded image. Lastly, the objective of the Compressive Imager was to design and build a system functionally similar to the Single Pixel Camera, use it to test new sampling methods for image generation, and characterize it against a traditional camera. These computational

  9. From the Web to the Grid and beyond computing paradigms driven by high-energy physics

    CERN Document Server

    Carminati, Federico; Galli-Carminati, Giuliana

    2012-01-01

    Born after World War II, large-scale experimental high-energy physics (HEP) has found itself limited ever since by available accelerator, detector and computing technologies. Accordingly, HEP has made significant contributions to the development of these fields, more often than not driving their innovations. The invention of the World Wide Web at CERN is merely the best-known example out of many. This book is the first comprehensive account to trace the history of this pioneering spirit in the field of computing technologies. It covers everything up to and including the present-day handling of the huge demands imposed upon grid and distributed computing by full-scale LHC operations - operations which have for years involved many thousands of collaborating members worldwide and accordingly provide the original and natural testbed for grid computing concepts. This book takes the reader on a guided tour encompassing all relevant topics, including programming languages, software engineering, large databases, the ...

  10. Computational physics simulation of classical and quantum systems

    CERN Document Server

    Scherer, Philipp O J

    2013-01-01

    This textbook presents basic and advanced computational physics in a very didactic style. It contains clear and simple mathematical descriptions of many of the most important algorithms used in computational physics. The first part of the book discusses the basic numerical methods; a large number of exercises and computer experiments allows the reader to study the properties of these methods. The second part concentrates on the simulation of classical and quantum systems. It uses a rather general concept for the equation of motion which can be applied to ordinary and partial differential equations. Several classes of integration methods are discussed, including not only the standard Euler and Runge-Kutta methods but also multistep methods and the class of Verlet methods, which is introduced by studying the motion in Liouville space. Besides the classical methods, inverse interpolation is discussed, together with the p...
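    As a minimal illustration of the Verlet class of integrators mentioned in this abstract, the sketch below implements velocity Verlet for a one-dimensional harmonic oscillator. It is not taken from the textbook; the function name and the test problem are assumptions chosen for illustration.

      import numpy as np

      def velocity_verlet(force, x0, v0, dt, n_steps, mass=1.0):
          """Velocity-Verlet integration of a single 1D particle under force(x)."""
          x, v = x0, v0
          a = force(x) / mass
          trajectory = [(0.0, x, v)]
          for i in range(1, n_steps + 1):
              x = x + v * dt + 0.5 * a * dt * dt        # position update
              a_new = force(x) / mass
              v = v + 0.5 * (a + a_new) * dt            # velocity update with averaged acceleration
              a = a_new
              trajectory.append((i * dt, x, v))
          return np.array(trajectory)

      # harmonic oscillator F = -k x: the total energy should stay bounded (no secular drift)
      k = 1.0
      traj = velocity_verlet(lambda x: -k * x, x0=1.0, v0=0.0, dt=0.05, n_steps=2000)
      energy = 0.5 * traj[:, 2] ** 2 + 0.5 * k * traj[:, 1] ** 2
      print(energy.min(), energy.max())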

  11. Computational intelligence for decision support in cyber-physical systems

    CERN Document Server

    Ali, A; Riaz, Zahid

    2014-01-01

    This book is dedicated to applied computational intelligence and soft computing techniques with special reference to decision support in Cyber Physical Systems (CPS), where the physical as well as the communication segment of the networked entities interact with each other. The joint dynamics of such systems result in a complex combination of computers, software, networks and physical processes all combined to establish a process flow at system level. This volume provides the audience with an in-depth vision about how to ensure dependability, safety, security and efficiency in real time by making use of computational intelligence in various CPS applications ranging from the nano-world to large scale wide area systems of systems. Key application areas include healthcare, transportation, energy, process control and robotics where intelligent decision support has key significance in establishing dynamic, ever-changing and high confidence future technologies. A recommended text for graduate students and researche...

  12. Coupled computation method of physics fields in aluminum reduction cells

    Institute of Scientific and Technical Information of China (English)

    周乃君; 梅炽; 姜昌伟; 周萍; 李劼

    2003-01-01

    Given the importance of studying the physics fields in aluminum reduction cells and of computer simulation for optimizing cell design and developing new types of cells, and based on an analysis of the coupled relations between the physics fields in the cells, mathematical and physical models were established and a coupled computation method was developed for the distribution of electric current and magnetic field, the temperature profile and the metal velocity in the cells. The computational results for 82 kA prebaked cells agree well with the measured results, and the errors of the maximum values calculated for the three main physics fields are less than 10%, which shows that the model and the algorithm are valid. The software developed can therefore be applied not only to the optimization of traditional aluminum reduction cells, but also to establishing a better technological basis for developing new drained aluminum reduction cells.

  13. FINAL REPORT DE-FG02-04ER41317 Advanced Computation and Chaotic Dynamics for Beams and Accelerators

    Energy Technology Data Exchange (ETDEWEB)

    Cary, John R [U. Colorado

    2014-09-08

    During the year ending in August 2013, we continued to investigate the potential of photonic crystal (PhC) materials for acceleration purposes. We worked to characterize the acceleration ability of simple PhC accelerator structures, as well as to characterize PhC materials to determine whether current fabrication techniques can meet the needs of future accelerating structures. We have also continued to design and optimize PhC accelerator structures, with the ultimate goal of finding a new kind of accelerator structure that could offer significant advantages over current RF acceleration technology. The design and optimization of these structures requires high-performance computation, and we continue to work on methods to make such computation faster and more efficient.

  14. Fast crustal deformation computing method for multiple computations accelerated by a graphics processing unit cluster

    Science.gov (United States)

    Yamaguchi, Takuma; Ichimura, Tsuyoshi; Yagi, Yuji; Agata, Ryoichiro; Hori, Takane; Hori, Muneo

    2017-08-01

    As high-resolution observational data become more common, the demand for numerical simulations of crustal deformation using 3-D high-fidelity modelling is increasing. To increase the efficiency of numerical simulations with high computational cost, we developed a fast solver using heterogeneous computing with graphics processing units (GPUs) and central processing units (CPUs), and then used the solver in crustal deformation computations. The solver was based on an iterative method and was devised so that a large proportion of the computation is performed more quickly on the GPUs. To confirm the utility of the proposed solver, we demonstrated a numerical simulation of coseismic slip distribution estimation, which requires 360 000 crustal deformation computations with 82,196,106 degrees of freedom.
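    The iterative solver described above offloads its dominant kernels to GPUs; the CPU-only sketch below shows a plain conjugate gradient iteration, in which the sparse matrix-vector product marked in the comments is the kind of operation such a solver would accelerate. The code is illustrative only and is unrelated to the authors' implementation.

      import numpy as np
      from scipy.sparse import diags

      def conjugate_gradient(A, b, tol=1e-8, max_iter=1000):
          """Plain conjugate gradient for a symmetric positive definite A.
          The sparse matrix-vector product A @ p is the dominant kernel that a
          GPU implementation would offload."""
          x = np.zeros_like(b)
          r = b - A @ x
          p = r.copy()
          rs_old = r @ r
          for _ in range(max_iter):
              Ap = A @ p                      # dominant cost per iteration
              alpha = rs_old / (p @ Ap)
              x += alpha * p
              r -= alpha * Ap
              rs_new = r @ r
              if np.sqrt(rs_new) < tol * np.linalg.norm(b):
                  break
              p = r + (rs_new / rs_old) * p
              rs_old = rs_new
          return x

      # 1D Poisson test matrix (tridiagonal, SPD) as a small stand-in for a finite element system
      n = 1000
      A = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
      b = np.ones(n)
      x = conjugate_gradient(A, b)
      print(np.linalg.norm(A @ x - b))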

  15. Proceedings of the conference on computer codes and the linear accelerator community

    Energy Technology Data Exchange (ETDEWEB)

    Cooper, R.K. (comp.)

    1990-07-01

    The conference whose proceedings you are reading was envisioned as the second in a series, the first having been held in San Diego in January 1988. The intended participants were those people who are actively involved in writing and applying computer codes for the solution of problems related to the design and construction of linear accelerators. The first conference reviewed many of the codes both extant and under development. This second conference provided an opportunity to update the status of those codes, and to provide a forum in which emerging new 3D codes could be described and discussed. The afternoon poster session on the second day of the conference provided an opportunity for extended discussion. All in all, this conference was felt to be quite a useful interchange of ideas and developments in the field of 3D calculations, parallel computation, higher-order optics calculations, and code documentation and maintenance for the linear accelerator community. A third conference is planned.

  16. A Low-Power Scalable Stream Compute Accelerator for General Matrix Multiply (GEMM)

    Directory of Open Access Journals (Sweden)

    Antony Savich

    2014-01-01

    ... play an important role in determining the performance of such applications. This paper proposes a novel, efficient, highly scalable hardware accelerator that matches the performance of a 2 GHz quad-core PC but can be used in low-power applications targeting embedded systems requiring high-performance computation. Power, performance, and resource consumption are demonstrated on a fully functional prototype. The proposed hardware accelerator is 36× more energy efficient per unit of computation than a state-of-the-art Xeon processor of equal vintage, and 14× more efficient as a stand-alone platform with equivalent performance. An important comparison between simulated system estimates and real system performance is carried out.
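    For orientation, the sketch below shows a simple blocked general matrix multiply (GEMM), the kernel such a stream accelerator computes; the blocking loosely mirrors how tiles are streamed through local memory. It is a NumPy illustration only, not a description of the proposed hardware.

      import numpy as np

      def gemm_blocked(A, B, block=64):
          """Blocked GEMM C = A @ B; the tiling loosely mirrors how a stream
          accelerator keeps working tiles in fast local memory."""
          m, k = A.shape
          k2, n = B.shape
          assert k == k2, "inner dimensions must match"
          C = np.zeros((m, n), dtype=A.dtype)
          for i in range(0, m, block):
              for j in range(0, n, block):
                  for p in range(0, k, block):
                      C[i:i + block, j:j + block] += (
                          A[i:i + block, p:p + block] @ B[p:p + block, j:j + block]
                      )
          return C

      A = np.random.rand(256, 192)
      B = np.random.rand(192, 320)
      print(np.allclose(gemm_blocked(A, B), A @ B))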

  17. On the Computational Capabilities of Physical Systems. Part 1; The Impossibility of Infallible Computation

    Science.gov (United States)

    Wolpert, David H.; Koga, Dennis (Technical Monitor)

    2000-01-01

    In this first of two papers, strong limits on the accuracy of physical computation are established. First it is proven that there cannot be a physical computer C to which one can pose any and all computational tasks concerning the physical universe. Next it is proven that no physical computer C can correctly carry out any computational task in the subset of such tasks that can be posed to C. This result holds whether the computational tasks concern a system that is physically isolated from C, or instead concern a system that is coupled to C. As a particular example, this result means that there cannot be a physical computer that can, for any physical system external to that computer, take the specification of that external system's state as input and then correctly predict its future state before that future state actually occurs; one cannot build a physical computer that can be assured of correctly 'processing information faster than the universe does'. The results also mean that there cannot exist an infallible, general-purpose observation apparatus, and that there cannot be an infallible, general-purpose control apparatus. These results do not rely on systems that are infinite, and/or non-classical, and/or obey chaotic dynamics. They also hold even if one uses an infinitely fast, infinitely dense computer, with computational powers greater than that of a Turing Machine. This generality is a direct consequence of the fact that a novel definition of computation - a definition of 'physical computation' - is needed to address the issues considered in these papers. While this definition does not fit into the traditional Chomsky hierarchy, the mathematical structure and impossibility results associated with it have parallels in the mathematics of the Chomsky hierarchy. The second in this pair of papers presents a preliminary exploration of some of this mathematical structure, including in particular that of prediction complexity, which is a 'physical computation

  18. Applications of FLUKA Monte Carlo code for nuclear and accelerator physics

    CERN Document Server

    Battistoni, Giuseppe; Brugger, Markus; Campanella, Mauro; Carboni, Massimo; Empl, Anton; Fasso, Alberto; Gadioli, Ettore; Cerutti, Francesco; Ferrari, Alfredo; Ferrari, Anna; Lantz, Matthias; Mairani, Andrea; Margiotta, M; Morone, Christina; Muraro, Silvia; Parodi, Katerina; Patera, Vincenzo; Pelliccioni, Maurizio; Pinsky, Lawrence; Ranft, Johannes; Roesler, Stefan; Rollet, Sofia; Sala, Paola R; Santana, Mario; Sarchiapone, Lucia; Sioli, Maximiliano; Smirnov, George; Sommerer, Florian; Theis, Christian; Trovati, Stefania; Villari, R; Vincke, Heinz; Vincke, Helmut; Vlachoudis, Vasilis; Vollaire, Joachim; Zapp, Neil

    2011-01-01

    FLUKA is a general purpose Monte Carlo code capable of handling all radiation components from thermal energies (for neutrons) or 1 keV (for all other particles) up to cosmic ray energies, and can be applied in many different fields. Presently the code is maintained on Linux. The validity of the physical models implemented in FLUKA has been benchmarked against a variety of experimental data over a wide energy range, from accelerator data to cosmic ray showers in the Earth's atmosphere. FLUKA is widely used for studies related both to basic research and to applications in particle accelerators, radiation protection and dosimetry, including the specific issue of radiation damage in space missions, radiobiology (including radiotherapy) and cosmic ray calculations. After a short description of the main features that make FLUKA valuable for these topics, the present paper summarizes some of the recent applications of the FLUKA Monte Carlo code in nuclear as well as high-energy physics. In particular it addresses such top...

  19. Proton computed tomography from multiple physics processes

    Science.gov (United States)

    Bopp, C.; Colin, J.; Cussol, D.; Finck, Ch; Labalme, M.; Rousseau, M.; Brasse, D.

    2013-10-01

    Proton CT (pCT) aims at improving hadron therapy treatment planning by mapping the relative stopping power (RSP) of materials with respect to water. The RSP depends mainly on the electron density of the materials. The main information used is the energy of the protons. However, during a pCT acquisition, the spatial and angular deviation of each particle is recorded and information about its transmission is implicitly available. The potential use of these observables to obtain information about the materials is being investigated. Monte Carlo simulations of protons sent into homogeneous materials were performed, and the influence of the chemical composition on the outputs was studied. A pCT acquisition of a head phantom scan was simulated. Brain lesions with the same electron density but different concentrations of oxygen were used to evaluate the different observables. Tomographic images from the different physics processes were reconstructed using a filtered back-projection algorithm. Preliminary results indicate that the reconstructed images of transmission and angular deviation contain information that may help differentiate tissues. However, the statistical uncertainty on these observables poses a further challenge in obtaining an optimal reconstruction and extracting the most pertinent information.
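    The reconstruction step mentioned above uses filtered back-projection; the sketch below is a minimal parallel-beam version (ramp filter per projection followed by back-projection with linear interpolation). It is an illustrative toy, with normalization and geometry conventions assumed, and is not the authors' reconstruction code.

      import numpy as np

      def fbp_reconstruct(sinogram, angles_deg):
          """Minimal parallel-beam filtered back-projection.
          sinogram: (n_angles, n_det) line integrals; angles_deg: projection angles.
          Returns an (n_det, n_det) image on a grid centred on the rotation axis."""
          n_angles, n_det = sinogram.shape
          # ramp filter applied to each projection in the Fourier domain
          ramp = np.abs(np.fft.fftfreq(n_det))
          filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))

          image = np.zeros((n_det, n_det))
          centre = (n_det - 1) / 2.0
          y, x = np.mgrid[0:n_det, 0:n_det] - centre
          for proj, theta in zip(filtered, np.deg2rad(angles_deg)):
              # detector coordinate of each pixel for this projection angle
              t = np.clip(x * np.cos(theta) + y * np.sin(theta) + centre, 0, n_det - 1)
              t0 = np.clip(np.floor(t).astype(int), 0, n_det - 2)
              w = t - t0
              image += (1 - w) * proj[t0] + w * proj[t0 + 1]   # linear interpolation
          return image * np.pi / n_angles                       # angle-integral weight

      # smoke test: a centred disc of radius r has projections 2*sqrt(r^2 - s^2) at all angles
      n_det, r = 128, 30.0
      s = np.arange(n_det) - (n_det - 1) / 2.0
      sino = np.tile(2.0 * np.sqrt(np.clip(r**2 - s**2, 0.0, None)), (180, 1))
      image = fbp_reconstruct(sino, np.arange(180.0))
      print(image[64, 64], image[5, 5])   # roughly 1 inside the disc, roughly 0 outside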

  20. From the web to the grid and beyond. Computing paradigms driven by high energy physics

    Energy Technology Data Exchange (ETDEWEB)

    Brun, Rene; Carminati, Federico [European Organization for Nuclear Research (CERN), Geneva (Switzerland); Galli Carminati, Giuliana (eds.) [Hopitaux Universitaire de Geneve, Chene-Bourg (Switzerland). Unite de la Psychiatrie du Developpement Mental

    2012-07-01

    Born after World War II, large-scale experimental high-energy physics (HEP) has found itself limited ever since by available accelerator, detector and computing technologies. Accordingly, HEP has made significant contributions to the development of these fields, more often than not driving their innovations. The invention of the World Wide Web at CERN is merely the best-known example out of many. This book is the first comprehensive account to trace the history of this pioneering spirit in the field of computing technologies. It covers everything up to and including the present-day handling of the huge demands imposed upon grid and distributed computing by full-scale LHC operations - operations which have for years involved many thousands of collaborating members worldwide and accordingly provide the original and natural testbed for grid computing concepts. This book takes the reader on a guided tour encompassing all relevant topics, including programming languages, software engineering, large databases, the Web, and grid- and cloud computing. The important issue of intellectual property regulations for distributed software engineering and computing is also addressed. Aptly, the book closes with a visionary chapter of what may lie ahead. Approachable and requiring only basic understanding of physics and computer sciences, this book is intended for both education and research. (orig.)

  1. Solutions manual and computer programs for physical and computational aspects of convective heat transfer

    CERN Document Server

    Cebeci, Tuncer

    1989-01-01

    This book is designed to accompany Physical and Computational Aspects of Convective Heat Transfer by T. Cebeci and P. Bradshaw, and contains solutions to the exercises and computer programs for the numerical methods contained in that book. Physical and Computational Aspects of Convective Heat Transfer begins with a thorough discussion of the physical aspects of convective heat transfer and presents in some detail the partial differential equations governing the transport of thermal energy in various types of flows. The book is intended for senior undergraduate and graduate students of aeronautical, chemical, civil and mechanical engineering. It can also serve as a reference for the practitioner.

  2. Performance analysis and acceleration of explicit integration for large kinetic networks using batched GPU computations

    Energy Technology Data Exchange (ETDEWEB)

    Shyles, Daniel [University of Tennessee (UT); Dongarra, Jack J. [University of Tennessee, Knoxville (UTK); Guidry, Mike W. [ORNL; Tomov, Stanimire Z. [ORNL; Billings, Jay Jay [ORNL; Brock, Benjamin A. [ORNL; Haidar Ahmad, Azzam A. [ORNL

    2016-09-01

    We demonstrate the systematic implementation of recently developed fast explicit kinetic integration algorithms that efficiently solve N coupled ordinary differential equations (subject to initial conditions) on modern GPUs. We take representative test cases (Type Ia supernova explosions) and demonstrate an increase in efficiency of two or more orders of magnitude for solving such systems (realistic thermonuclear networks coupled to fluid dynamics). This implies that important coupled, multiphysics problems in various scientific and technical disciplines that were intractable, or could be simulated only with highly schematic kinetic networks, are now computationally feasible. As examples of such applications we present the computational techniques developed for our ongoing deployment of these new methods on modern GPU accelerators. We show that, as in many other scientific applications ranging from national security to medical advances, the computation can be split into many independent computational tasks, each of relatively small size. Since the size of each individual task does not provide sufficient parallelism for the underlying hardware, especially for accelerators, these tasks must be computed concurrently as a single routine, which we call a batched routine, in order to saturate the hardware with enough work.
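    A hedged illustration of the batching idea follows: many small, independent ODE systems advanced together in one vectorized routine. The sketch uses a plain explicit Euler step on a toy decay chain purely for illustration; the paper's fast explicit integration methods and GPU batching are not reproduced here.

      import numpy as np

      def batched_explicit_euler(rhs, y0, dt, n_steps):
          """Advance a batch of small, independent ODE systems with one vectorized loop.
          y0 has shape (n_batch, n_species); rhs maps that shape to the same shape.
          Vectorizing over the batch is the CPU analogue of one batched GPU routine."""
          y = y0.copy()
          for _ in range(n_steps):
              y = y + dt * rhs(y)
          return y

      # toy 'network': a linear decay chain A -> B -> C with rates k1, k2, in 10 000 zones
      k1, k2 = 5.0, 1.0

      def decay_chain(y):
          a, b = y[:, 0], y[:, 1]
          return np.stack([-k1 * a, k1 * a - k2 * b, k2 * b], axis=1)

      y0 = np.zeros((10_000, 3))
      y0[:, 0] = 1.0
      print(batched_explicit_euler(decay_chain, y0, dt=1e-3, n_steps=2000)[0])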

  3. Computational physics of electric discharges in gas flows

    CERN Document Server

    Surzhikov, Sergey T

    2012-01-01

    Gas discharges are of interest for many processes in mechanics, manufacturing, materials science and aerophysics. Understanding the physics behind the phenomena is of key importance for the effective use and development of gas discharge devices. This work treats methods of computational modeling of electrodischarge processes and the dynamics of partially ionized gases. These methods are necessary to tackle problems of physical mechanics, the physics of gas discharges and aerophysics. Particular attention is given to the solution of two-dimensional problems of the physical mechanics of glow discharges. The use o...

  4. On The Computational Capabilities of Physical Systems. Part 2; Relationship With Conventional Computer Science

    Science.gov (United States)

    Wolpert, David H.; Koga, Dennis (Technical Monitor)

    2000-01-01

    In the first of this pair of papers, it was proven that there cannot be a physical computer to which one can properly pose any and all computational tasks concerning the physical universe. It was then further proven that no physical computer C can correctly carry out all computational tasks that can be posed to C. As a particular example, this result means that there cannot be a physical computer that can, for any physical system external to that computer, take the specification of that external system's state as input and then correctly predict its future state before that future state actually occurs; one cannot build a physical computer that can be assured of correctly "processing information faster than the universe does". These results do not rely on systems that are infinite, and/or non-classical, and/or obey chaotic dynamics. They also hold even if one uses an infinitely fast, infinitely dense computer, with computational powers greater than that of a Turing Machine. This generality is a direct consequence of the fact that a novel definition of computation - "physical computation" - is needed to address the issues considered in these papers, which concern real physical computers. While this novel definition does not fit into the traditional Chomsky hierarchy, the mathematical structure and impossibility results associated with it have parallels in the mathematics of the Chomsky hierarchy. This second paper of the pair presents a preliminary exploration of some of this mathematical structure. Analogues of Chomskian results concerning universal Turing Machines and the Halting theorem are derived, as are results concerning the (im)possibility of certain kinds of error-correcting codes. In addition, an analogue of algorithmic information complexity, "prediction complexity", is elaborated. A task-independent bound is derived on how much the prediction complexity of a computational task can differ for two different reference universal physical computers used to solve that task.

  5. Quantum computation and the physical computation level of biological information processing

    OpenAIRE

    Castagnoli, Giuseppe

    2009-01-01

    On the basis of introspective analysis, we establish a crucial requirement for the physical computation basis of consciousness: it should allow processing a significant amount of information together at the same time. Classical computation does not satisfy the requirement. At the fundamental physical level, it is a network of two body interactions, each the input-output transformation of a universal Boolean gate. Thus, it cannot process together at the same time more than the three bit input ...

  6. Accelerating Spaceborne SAR Imaging Using Multiple CPU/GPU Deep Collaborative Computing.

    Science.gov (United States)

    Zhang, Fan; Li, Guojun; Li, Wei; Hu, Wei; Hu, Yuxin

    2016-04-07

    With the development of synthetic aperture radar (SAR) technologies in recent years, the huge amount of remote sensing data brings challenges for real-time imaging processing. Therefore, high-performance computing (HPC) methods have been introduced to accelerate SAR imaging, especially GPU-based methods. In the classical GPU-based imaging algorithm, the GPU is employed to accelerate image processing by massively parallel computing, and the CPU is only used to perform auxiliary work such as data input/output (IO). However, the computing capability of the CPU is ignored and underestimated. In this work, a new deep collaborative SAR imaging method based on multiple CPUs/GPUs is proposed to achieve real-time SAR imaging. Through the proposed task partitioning and scheduling strategy, the whole image can be generated with deeply collaborative multiple CPU/GPU computing. In the CPU parallel imaging part, the advanced vector extension (AVX) method is first introduced into the multi-core CPU parallel method for higher efficiency. As for the GPU parallel imaging, not only are the bottlenecks of memory limitation and frequent data transfer removed, but various optimization strategies, such as streaming and parallel pipelining, are also applied. Experimental results demonstrate that the deep CPU/GPU collaborative imaging method enhances the efficiency of SAR imaging by a factor of 270 compared with a single-core CPU and realizes real-time imaging, in that the imaging rate outperforms the raw data generation rate.

  7. Accelerating Spaceborne SAR Imaging Using Multiple CPU/GPU Deep Collaborative Computing

    Directory of Open Access Journals (Sweden)

    Fan Zhang

    2016-04-01

    With the development of synthetic aperture radar (SAR) technologies in recent years, the huge amount of remote sensing data brings challenges for real-time imaging processing. Therefore, high-performance computing (HPC) methods have been introduced to accelerate SAR imaging, especially GPU-based methods. In the classical GPU-based imaging algorithm, the GPU is employed to accelerate image processing by massively parallel computing, and the CPU is only used to perform auxiliary work such as data input/output (IO). However, the computing capability of the CPU is ignored and underestimated. In this work, a new deep collaborative SAR imaging method based on multiple CPUs/GPUs is proposed to achieve real-time SAR imaging. Through the proposed task partitioning and scheduling strategy, the whole image can be generated with deeply collaborative multiple CPU/GPU computing. In the CPU parallel imaging part, the advanced vector extension (AVX) method is first introduced into the multi-core CPU parallel method for higher efficiency. As for the GPU parallel imaging, not only are the bottlenecks of memory limitation and frequent data transfer removed, but various optimization strategies, such as streaming and parallel pipelining, are also applied. Experimental results demonstrate that the deep CPU/GPU collaborative imaging method enhances the efficiency of SAR imaging by a factor of 270 compared with a single-core CPU and realizes real-time imaging, in that the imaging rate outperforms the raw data generation rate.

  8. Computer Simulation Studies in Condensed-Matter Physics XVII

    Science.gov (United States)

    Landau, D. P.; Lewis, S. P.; Schüttler, H.-B.

    This status report features the most recent developments in the field, spanning a wide range of topical areas in the computer simulation of condensed matter/materials physics. Both established and new topics are included, ranging from the statistical mechanics of classical magnetic spin models to electronic structure calculations, quantum simulations, and simulations of soft condensed matter. The book presents new physical results as well as novel methods of simulation and data analysis. Highlights of this volume include various aspects of non-equilibrium statistical mechanics, studies of properties of real materials using both classical model simulations and electronic structure calculations, and the use of computer simulations in teaching.

  9. B-physics computations from Nf=2 tmQCD

    CERN Document Server

    Carrasco, N.; Dimopoulos, P.; Frezzotti, R.; Gimenez, V.; Herdoiza, G.; Lubicz, V.; Michael, C.; Picca, E.; Rossi, G.C.; Sanfilippo, F.; Shindler, A.; Silvestrini, L.; Simula, S.; Tarantino, C.

    2014-01-01

    We present an accurate lattice QCD computation of the b-quark mass, the B and Bs decay constants, the B-mixing bag-parameters for the full four-fermion operator basis, as well as estimates for \\xi and f_{Bq}\\sqrt{B_q} extrapolated to the continuum limit and the physical pion mass. We have used Nf = 2 dynamical quark gauge configurations at four values of the lattice spacing generated by ETMC. Extrapolation in the heavy quark mass from the charm to the bottom quark region has been carried out using ratios of physical quantities computed at nearby quark masses, having an exactly known infinite mass limit.

  10. PREFACE: 25th IUPAP Conference on Computational Physics (CCP2013)

    Science.gov (United States)

    Shchur, Lev N.; Barash, Lev Yu

    2014-05-01

    Participants of the XXV IUPAP Conference on Computational Physics came to Moscow at the end of August during a hot period. It was not a hot period because of the summer; in fact, the weather was quite comfortable. It was a hot period for the atmosphere amid the scientific community in Russia, especially for scientists working for the Russian Academy of Sciences. Four years earlier, the C20 IUPAP Commission on Computational Physics and the Computational Physics Group of the European Physical Society had chosen Moscow for several reasons. The first reason was connected to the high level and deep traditions of computational physics in Russia. It is known from experience at previous CCP conferences that local participants contribute about half of the presentations, which form the solid scientific backbone of the conference, and a good level of domestic science makes the conference interesting and successful. The second reason was that for the last twenty years there had not been many IUPAP conferences in Russia, and it was time to open more venues for information exchange and to intensify scientific collaboration. Thirdly, it was the common opinion four years ago that the situation in Russia had become stable enough after the transition to a modern society, which took almost a quarter of a century. The conference preface is continued in the pdf.

  11. Providing a computing environment for a high energy physics workshop

    Energy Technology Data Exchange (ETDEWEB)

    Andrews, C.; Butler, J.; Carter, T.; DeMar, P.; Fagan, D.; Gibbons, R.; Grigaliunas, V.; Haibeck, M.; Haring, P.; Horvath, C.; Hughart, N.; Johnstad, H.; Jones, S.; Kreymer, A.; LeBrun, P.; Lego, A.; Leninger, M.; Loebel, L.; McNamara, S.; Nguyen, T.; Nicholls, J.; O' Reilly, C.; Pabrai, U.; Pfister, J.; Ritchie, D.; Roberts, L.; Sazama, C.; Wohlt, D. (Fermi National Accelerator Lab., Batavia, IL (USA)); Carven, R. (Wiscons

    1989-12-01

    Although computing facilities have been provided at conferences and workshops remote from the host institution for some years, the equipment provided has rarely been capable of supporting much more than simple editing and electronic mail. This report documents the effort involved in providing a local computing facility with world-wide networking capability for a physics workshop, so that we and others can benefit from the knowledge gained through the experience.

  12. Physics/computer science. Passing messages between disciplines.

    Science.gov (United States)

    Mézard, Marc

    2003-09-19

    Problems in computer science, such as error correction in information transfer and "satisfiability" in optimization, show phase transitions familiar from solid-state physics. In his Perspective, Mézard explains how recent advances in these three fields originate in similar "message passing" procedures. The exchange of elaborate messages between different variables and constraints, used in the study of phase transitions in physical systems, helps to make error correction and satisfiability codes more efficient.

  13. Computer architectures for computational physics work done by Computational Research and Technology Branch and Advanced Computational Concepts Group

    Science.gov (United States)

    1985-01-01

    Slides are reproduced that describe the importance of having high performance number crunching and graphics capability. They also indicate the types of research and development underway at Ames Research Center to ensure that, in the near term, Ames is a smart buyer and user, and in the long-term that Ames knows the best possible solutions for number crunching and graphics needs. The drivers for this research are real computational physics applications of interest to Ames and NASA. They are concerned with how to map the applications, and how to maximize the physics learned from the results of the calculations. The computer graphics activities are aimed at getting maximum information from the three-dimensional calculations by using the real time manipulation of three-dimensional data on the Silicon Graphics workstation. Work is underway on new algorithms that will permit the display of experimental results that are sparse and random, the same way that the dense and regular computed results are displayed.

  14. Particle accelerators, colliders, and the story of high energy physics. Charming the cosmic snake

    Energy Technology Data Exchange (ETDEWEB)

    Jayakumar, Raghavan

    2012-07-01

    The Nordic mythological Cosmic Serpent, Ouroboros, is said to be coiled in the depths of the sea, surrounding the Earth with its tail in its mouth. In physics, this snake is a metaphor for the Universe, where the head, symbolizing the largest entity - the Cosmos - is one with the tail, symbolizing the smallest - the fundamental particle. Particle accelerators, colliders and detectors are built by physicists and engineers to uncover the nature of the Universe while discovering its building blocks. "Charming the Cosmic Snake" takes the reader through the science behind these experimental machines: the physics principles that each stage of the development of particle accelerators helped to reveal, and the particles they helped to discover. The book culminates with a description of the Large Hadron Collider, one of the world's largest and most complex machines, operating in a 27-km circumference tunnel near Geneva. That collider may prove or disprove many of our basic theories about the nature of matter. The book presents the material honestly, without misrepresenting the science for the sake of excitement or glossing over difficult notions. The principles behind each type of accelerator are made accessible to the undergraduate student and even to a lay reader with cartoons, illustrations and metaphors. At the same time, the book also caters to different levels of reader background and provides additional material for the more interested or diligent reader. (orig.)

  15. Research Activity in Computational Physics utilizing High Performance Computing: Co-authorship Network Analysis

    Science.gov (United States)

    Ahn, Sul-Ah; Jung, Youngim

    2016-10-01

    The research activities of computational physicists utilizing high-performance computing are analyzed by bibliometric approaches. This study aims at providing computational physicists utilizing high-performance computing, as well as policy planners, with useful bibliometric results for an assessment of research activities. To achieve this purpose, we carried out a co-authorship network analysis of journal articles to assess the research activities of researchers in high-performance computational physics as a case study. For this study, we used journal articles from Elsevier's Scopus database covering the period 2004-2013. We ranked authors in the physics field utilizing high-performance computing by the number of papers published during the ten years from 2004. Finally, we drew the co-authorship network for the 45 top authors and their coauthors, and described some features of the co-authorship network in relation to the author ranking. Suggestions for further studies are discussed.
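    As a hedged illustration of a co-authorship network analysis of this kind, the sketch below builds a weighted co-authorship graph from hypothetical author lists and ranks authors by paper count and degree centrality. The records, the tooling (networkx) and the ranking choices are assumptions for illustration, not the authors' pipeline.

      import itertools
      import networkx as nx

      # hypothetical records: each entry is the author list of one journal article
      articles = [
          ["Ahn S", "Jung Y", "Kim H"],
          ["Ahn S", "Lee J"],
          ["Kim H", "Lee J", "Park M"],
      ]

      G = nx.Graph()
      for authors in articles:
          for a, b in itertools.combinations(sorted(set(authors)), 2):
              # edge weight counts the number of co-authored papers
              w = G.get_edge_data(a, b, default={"weight": 0})["weight"]
              G.add_edge(a, b, weight=w + 1)

      # rank authors by number of papers and report degree centrality in the network
      papers_per_author = {a: sum(a in art for art in articles) for a in G.nodes}
      print(sorted(papers_per_author.items(), key=lambda kv: -kv[1]))
      print(nx.degree_centrality(G))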

  16. BioEM: GPU-accelerated computing of Bayesian inference of electron microscopy images

    CERN Document Server

    Cossio, Pilar; Baruffa, Fabio; Rampp, Markus; Lindenstruth, Volker; Hummer, Gerhard

    2016-01-01

    In cryo-electron microscopy (EM), molecular structures are determined from large numbers of projection images of individual particles. To harness the full power of this single-molecule information, we use the Bayesian inference of EM (BioEM) formalism. By ranking structural models using posterior probabilities calculated for individual images, BioEM in principle addresses the challenge of working with highly dynamic or heterogeneous systems not easily handled in traditional EM reconstruction. However, the calculation of these posteriors for large numbers of particles and models is computationally demanding. Here we present highly parallelized, GPU-accelerated computer software that performs this task efficiently. Our flexible formulation employs CUDA, OpenMP, and MPI parallelization combined with both CPU and GPU computing. The resulting BioEM software scales nearly ideally both on pure CPU and on CPU+GPU architectures, thus enabling Bayesian analysis of tens of thousands of images in a reasonable time. The g...

  17. Effect of Physical Education Teachers' Computer Literacy on Technology Use in Physical Education

    Science.gov (United States)

    Kretschmann, Rolf

    2015-01-01

    Teachers' computer literacy has been identified as a factor that determines their technology use in class. The aim of this study was to investigate the relationship between physical education (PE) teachers' computer literacy and their technology use in PE. The study group consisted of 57 high school level in-service PE teachers. A survey was used…

  18. Challenges and opportunities for atomic physics at FAIR: The new GSI accelerator project

    Energy Technology Data Exchange (ETDEWEB)

    Hagmann, S. [Institut f. Kernphysik, University of Frankfurt (Germany) and GSI, Max Planckstr.1, Darmstadt (Germany)]. E-mail: s.hagmann@gsi.de; Beyer, H.F. [GSI, Max Planckstr.1, Darmstadt (Germany); Bosch, F. [GSI, Max Planckstr.1, Darmstadt (Germany); Braeuning-Demian, A. [GSI, Max Planckstr.1, Darmstadt (Germany); Kluge, H.-J. [GSI, Max Planckstr.1, Darmstadt (Germany); Kozhuharov, Ch. [GSI, Max Planckstr.1, Darmstadt (Germany); Kuehl, Th. [GSI, Max Planckstr.1, Darmstadt (Germany); Liesen, D. [GSI, Max Planckstr.1, Darmstadt (Germany); Stoehlker, Th. [GSI, Max Planckstr.1, Darmstadt (Germany); Ullrich, J. [Max Planck Inst. f. Kernphysik, Heidelberg (Germany); Moshammer, R. [Max Planck Inst. f. Kernphysik, Heidelberg (Germany); Mann, R. [GSI, Max Planckstr.1, Darmstadt (Germany); Mokler, P. [GSI, Max Planckstr.1, Darmstadt (Germany); Quint, W. [GSI, Max Planckstr.1, Darmstadt (Germany); Schuch, R. [Department of Physics, University of Stockholm (Sweden); Warczak, A. [Department of Physics, University of Cracow (Poland)

    2005-12-15

    We present a short overview of the current status of the new accelerator project FAIR at GSI with the new double synchrotron rings and the multi-storage rings. The key features of the new facility, which provides intense relativistic beams of stable and unstable nuclei, are introduced and their relation to the anticipated experimental programs in nuclear structure physics and antiproton physics is shown. The main emphasis in this overview is given to the atomic physics program, with unique opportunities which will be provided e.g. by bare U^(92+) ions with kinetic energies continuously variable between relativistic energies corresponding to γ up to ≈35 down to kinetic energies of such ions in traps corresponding to fractions of a kelvin.

  19. Concepts and techniques: Active electronics and computers in safety-critical accelerator operation

    Energy Technology Data Exchange (ETDEWEB)

    Frankel, R.S.

    1995-12-31

    The Relativistic Heavy Ion Collider (RHIC), under construction at Brookhaven National Laboratory, requires an extensive Access Control System to protect personnel from radiation, oxygen deficiency and electrical hazards. In addition, the complicated nature of operating the Collider as part of a complex of other accelerators necessitates the use of active electronic measurement circuitry to ensure compliance with established Operational Safety Limits. Solutions were devised which permit the use of modern computer and interconnection technology for safety-critical applications, while preserving and enhancing tried and proven protection methods. In addition, a set of guidelines regarding required performance for accelerator safety systems and a handbook of design criteria and rules were developed to assist future system designers and to provide a framework for internal review and regulation.

  20. Accelerating Relevance-Vector-Machine-Based Classification of Hyperspectral Image with Parallel Computing

    Directory of Open Access Journals (Sweden)

    Chao Dong

    2012-01-01

    Benefiting from the kernel trick and the sparsity property, the relevance vector machine (RVM) can acquire a sparse solution with generalization ability equivalent to that of the support vector machine. The sparsity property requires much less time in prediction, making RVM promising for classifying large-scale hyperspectral images. However, RVM is not widely used because of its slow training procedure. To solve this problem, the classification of hyperspectral images using RVM is accelerated by parallel computing techniques in this paper. The parallelization addresses the multiclass strategy, the ensemble of multiple weak classifiers, and the matrix operations. The parallel RVMs are implemented using the C language plus the parallel functions of linear algebra packages and the message passing interface (MPI) library. The proposed methods are evaluated on the AVIRIS Indian Pines data set on a Beowulf cluster and on multicore platforms. The results show that the parallel RVMs accelerate the training procedure considerably.

  1. Unobtrusive heart rate estimation during physical exercise using photoplethysmographic and acceleration data.

    Science.gov (United States)

    Mullan, Patrick; Kanzler, Christoph M; Lorch, Benedikt; Schroeder, Lea; Winkler, Ludwig; Laich, Larissa; Riedel, Frederik; Richer, Robert; Luckner, Christoph; Leutheuser, Heike; Eskofier, Bjoern M; Pasluosta, Cristian

    2015-01-01

    Photoplethysmography (PPG) is a non-invasive, inexpensive and unobtrusive method for heart rate monitoring during physical exercise. Motion artifacts during exercise challenge heart rate estimation from wrist-type PPG signals. This paper presents a methodology to overcome these limitations by incorporating acceleration information. The proposed algorithm consists of four stages: (1) wavelet-based denoising, (2) acceleration-based denoising, (3) a frequency-based approach to estimate the heart rate, followed by (4) a postprocessing step. Experiments with different movement types, such as running and rehabilitation exercises, were used for algorithm design and development. Evaluation of our heart rate estimation showed that a mean absolute error of 1.96 bpm (beats per minute) with a standard deviation of 2.86 bpm and a correlation of 0.98 was achieved with our method. These findings suggest that the proposed methodology is robust to motion artifacts and is therefore applicable for heart rate monitoring during sports and rehabilitation.
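    As a hedged illustration of stage (3) of the algorithm, the sketch below estimates heart rate from one PPG window by picking the dominant spectral peak inside a plausible heart-rate band. The window length, band limits and windowing are assumptions for illustration and do not reproduce the authors' full four-stage pipeline.

      import numpy as np

      def estimate_heart_rate(ppg_window, fs, hr_band=(0.7, 3.5)):
          """Estimate heart rate (bpm) from one PPG window by picking the dominant
          spectral peak inside a plausible heart-rate band (0.7-3.5 Hz ~ 42-210 bpm)."""
          x = ppg_window - np.mean(ppg_window)                  # remove the DC component
          spectrum = np.abs(np.fft.rfft(x * np.hanning(len(x))))
          freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
          band = (freqs >= hr_band[0]) & (freqs <= hr_band[1])
          peak_freq = freqs[band][np.argmax(spectrum[band])]
          return 60.0 * peak_freq

      # synthetic 8 s window sampled at 125 Hz with a 1.8 Hz (108 bpm) pulse plus noise
      fs = 125
      t = np.arange(0, 8, 1.0 / fs)
      ppg = np.sin(2 * np.pi * 1.8 * t) + 0.3 * np.random.randn(t.size)
      print(estimate_heart_rate(ppg, fs))   # close to 108 bpm, within the window's resolution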

  2. Computer vision uncovers predictors of physical urban change.

    Science.gov (United States)

    Naik, Nikhil; Kominers, Scott Duke; Raskar, Ramesh; Glaeser, Edward L; Hidalgo, César A

    2017-07-18

    Which neighborhoods experience physical improvements? In this paper, we introduce a computer vision method to measure changes in the physical appearances of neighborhoods from time-series street-level imagery. We connect changes in the physical appearance of five US cities with economic and demographic data and find three factors that predict neighborhood improvement. First, neighborhoods that are densely populated by college-educated adults are more likely to experience physical improvements-an observation that is compatible with the economic literature linking human capital and local success. Second, neighborhoods with better initial appearances experience, on average, larger positive improvements-an observation that is consistent with "tipping" theories of urban change. Third, neighborhood improvement correlates positively with physical proximity to the central business district and to other physically attractive neighborhoods-an observation that is consistent with the "invasion" theories of urban sociology. Together, our results provide support for three classical theories of urban change and illustrate the value of using computer vision methods and street-level imagery to understand the physical dynamics of cities.

  3. Why I think Computational Physics has been the most valuable part of my undergraduate physics education

    Science.gov (United States)

    Parsons, Matthew

    2015-04-01

    Computational physics is a rich and vibrant field in its own right, but often not given the attention that it should receive in the typical undergraduate physics curriculum. It appears that the partisan theorist vs. experimentalist view is still pervasive in academia, or at least still portrayed to students, while in fact there is a continuous spectrum of opportunities in between these two extremes. As a case study, I'll give my perspective as a graduating physics student with examples of computational coursework at Drexel University and research opportunities that this experience has led to.

  4. Future of computing technology in physics - the potentials and pitfalls

    Energy Technology Data Exchange (ETDEWEB)

    Brenner, A.E.

    1984-02-01

    The impact of the developments of modern digital computers is discussed, especially with respect to physics research in the future. The effects of large data processing capability and increasing rates at which data can be acquired and processed are considered. (GHT)

  5. Integrating Computational Chemistry into the Physical Chemistry Curriculum

    Science.gov (United States)

    Johnson, Lewis E.; Engel, Thomas

    2011-01-01

    Relatively few undergraduate physical chemistry programs integrate molecular modeling into their quantum mechanics curriculum owing to concerns about limited access to computational facilities, the cost of software, and concerns about increasing the course material. However, modeling exercises can be integrated into an undergraduate course at a…

  6. The Teaching of Computing in an Undergraduate Physics Course.

    Science.gov (United States)

    Humberston, J. W.; McKenzie, J.

    1984-01-01

    Describes an approach to teaching interactive computing for physics students beginning with the use of BASIC and video terminals during the first year of study (includes writing solution programs for practical problems). Second year students learn FORTRAN and apply it to interpolation, numerical integration, and differential equations. (JM)

  7. Integrating Computational Chemistry into the Physical Chemistry Curriculum

    Science.gov (United States)

    Johnson, Lewis E.; Engel, Thomas

    2011-01-01

    Relatively few undergraduate physical chemistry programs integrate molecular modeling into their quantum mechanics curriculum owing to concerns about limited access to computational facilities, the cost of software, and concerns about increasing the course material. However, modeling exercises can be integrated into an undergraduate course at a…

  8. Accelerating image registration of MRI by GPU-based parallel computation.

    Science.gov (United States)

    Huang, Teng-Yi; Tang, Yu-Wei; Ju, Shiun-Ying

    2011-06-01

    Automatic image registration for MRI applications generally requires many iteration loops and is, therefore, a time-consuming task. This drawback prolongs data analysis and delays the workflow of clinical routines. Recent advances in the massively parallel computation of graphics processing units (GPUs) may be a solution to this problem. This study proposes a method to accelerate registration calculations, especially for the popular statistical parametric mapping (SPM) system. We reimplemented the image registration of the SPM system to achieve an approximately 14-fold increase in speed when registering single-modality intrasubject data sets. The proposed program is fully compatible with SPM, allowing the user to simply replace the original image registration library of SPM to gain the benefit of the computational power provided by commodity graphics processors. In conclusion, the GPU computation method is a practical way to accelerate automatic image registration. This technology promises a broader scope of application in the field of image registration. Copyright © 2011 Elsevier Inc. All rights reserved.

  9. PREFACE: IUPAP C20 Conference on Computational Physics (CCP 2011)

    Science.gov (United States)

    Troparevsky, Claudia; Stocks, George Malcolm

    2012-12-01

    Increasingly, computational physics stands alongside experiment and theory as an integral part of the modern approach to solving the great scientific challenges of the day on all scales - from cosmology and astrophysics, through climate science, to materials physics, and the fundamental structure of matter. Computational physics touches aspects of science and technology with direct relevance to our everyday lives, such as communication technologies and securing a clean and efficient energy future. This volume of Journal of Physics: Conference Series contains the proceedings of the scientific contributions presented at the 23rd Conference on Computational Physics held in Gatlinburg, Tennessee, USA, in November 2011. The annual Conferences on Computational Physics (CCP) are dedicated to presenting an overview of the most recent developments and opportunities in computational physics across a broad range of topical areas and from around the world. The CCP series has been in existence for more than 20 years, serving as a lively forum for computational physicists. The topics covered by this conference were: Materials/Condensed Matter Theory and Nanoscience, Strongly Correlated Systems and Quantum Phase Transitions, Quantum Chemistry and Atomic Physics, Quantum Chromodynamics, Astrophysics, Plasma Physics, Nuclear and High Energy Physics, Complex Systems: Chaos and Statistical Physics, Macroscopic Transport and Mesoscopic Methods, Biological Physics and Soft Materials, Supercomputing and Computational Physics Teaching, Computational Physics and Sustainable Energy. We would like to take this opportunity to thank our sponsors: International Union of Pure and Applied Physics (IUPAP), IUPAP Commission on Computational Physics (C20), American Physical Society Division of Computational Physics (APS-DCOMP), Oak Ridge National Laboratory (ORNL), Center for Defect Physics (CDP), the University of Tennessee (UT)/ORNL Joint Institute for Computational Sciences (JICS) and Cray, Inc

  10. Topics in radiation at accelerators: Radiation physics for personnel and environmental protection

    Energy Technology Data Exchange (ETDEWEB)

    Cossairt, J.D.

    1993-11-01

    This report discusses the following topics: Composition of Accelerator Radiation Fields; Shielding of Electrons and Photons at Accelerators; Shielding of Hadrons at Accelerators; Low Energy Prompt Radiation Phenomena; Induced Radioactivity at Accelerators; Topics in Radiation Protection Instrumentation at Accelerators; and Accelerator Radiation Protection Program Elements.

  11. Computational Materials Science and Chemistry: Accelerating Discovery and Innovation through Simulation-Based Engineering and Science

    Energy Technology Data Exchange (ETDEWEB)

    Crabtree, George [Argonne National Lab. (ANL), Argonne, IL (United States); Glotzer, Sharon [University of Michigan; McCurdy, Bill [University of California Davis; Roberto, Jim [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2010-07-26

    This report is based on a SC Workshop on Computational Materials Science and Chemistry for Innovation on July 26-27, 2010, to assess the potential of state-of-the-art computer simulations to accelerate understanding and discovery in materials science and chemistry, with a focus on potential impacts in energy technologies and innovation. The urgent demand for new energy technologies has greatly exceeded the capabilities of today's materials and chemical processes. To convert sunlight to fuel, efficiently store energy, or enable a new generation of energy production and utilization technologies requires the development of new materials and processes of unprecedented functionality and performance. New materials and processes are critical pacing elements for progress in advanced energy systems and virtually all industrial technologies. Over the past two decades, the United States has developed and deployed the world's most powerful collection of tools for the synthesis, processing, characterization, and simulation and modeling of materials and chemical systems at the nanoscale, dimensions of a few atoms to a few hundred atoms across. These tools, which include world-leading x-ray and neutron sources, nanoscale science facilities, and high-performance computers, provide an unprecedented view of the atomic-scale structure and dynamics of materials and the molecular-scale basis of chemical processes. For the first time in history, we are able to synthesize, characterize, and model materials and chemical behavior at the length scale where this behavior is controlled. This ability is transformational for the discovery process and, as a result, confers a significant competitive advantage. Perhaps the most spectacular increase in capability has been demonstrated in high performance computing. Over the past decade, computational power has increased by a factor of a million due to advances in hardware and software. This rate of improvement, which shows no sign of

  12. School of Analytic Computing in Theoretical High-Energy Physics

    CERN Document Server

    2013-01-01

    In recent years, a huge progress has been made on computing rates for production processes of direct relevance to experiments at the Large Hadron Collider (LHC). Crucial to that remarkable advance has been our understanding and ability to compute scattering amplitudes. The aim of the School is to bring together young theorists working on the phenomenology of LHC physics with those working in more formal areas, and to provide them the analytic tools to compute amplitudes in gauge theories. The school is addressed to Ph.D. students and post-docs in Theoretical High-Energy Physics. 30 hours of lectures will be delivered over the 5 days of the School. A Poster Session will be held, at which students are welcome to present their research topics.

  13. School of Analytic Computing in Theoretical High-Energy Physics

    CERN Document Server

    2015-01-01

    In recent years, a huge progress has been made on computing rates for production processes of direct relevance to experiments at the Large Hadron Collider (LHC). Crucial to that remarkable advance has been our understanding and ability to compute scattering amplitudes and cross sections. The aim of the School is to bring together young theorists working on the phenomenology of LHC physics with those working in more formal areas, and to provide them the analytic tools to compute amplitudes in gauge theories. The school is addressed to Ph.D. students and post-docs in Theoretical High-Energy Physics. 30 hours of lectures and 4 hours of tutorials will be delivered over the 6 days of the School.

  14. plasmaFoam: An OpenFOAM framework for computational plasma physics and chemistry

    Science.gov (United States)

    Venkattraman, Ayyaswamy; Verma, Abhishek Kumar

    2016-09-01

    As emphasized in the 2012 Roadmap for low temperature plasmas (LTP), scientific computing has emerged as an essential tool for the investigation and prediction of the fundamental physical and chemical processes associated with these systems. While several in-house and commercial codes exist, each with its own advantages and disadvantages, a common framework that can be developed by researchers from all over the world will likely accelerate the impact of computational studies on advances in low-temperature plasma physics and chemistry. In this regard, we present a finite volume computational toolbox to perform high-fidelity simulations of LTP systems. This framework, primarily based on the OpenFOAM solver suite, allows us to enhance our understanding of multiscale plasma phenomena by performing massively parallel, three-dimensional simulations on unstructured meshes using well-established high-performance computing tools that are widely used in the computational fluid dynamics community. In this talk, we will present preliminary results obtained using the OpenFOAM-based solver suite with benchmark three-dimensional simulations of microplasma devices including both dielectric and plasma regions. We will also discuss the future outlook for the solver suite.

  15. An introduction to computer simulation methods applications to physical systems

    CERN Document Server

    Gould, Harvey; Christian, Wolfgang

    2007-01-01

    Now in its third edition, this book teaches physical concepts using computer simulations. The text incorporates object-oriented programming techniques and encourages readers to develop good programming habits in the context of doing physics. Designed for readers at all levels, An Introduction to Computer Simulation Methods uses Java, currently the most popular programming language. Introduction, Tools for Doing Simulations, Simulating Particle Motion, Oscillatory Systems, Few-Body Problems: The Motion of the Planets, The Chaotic Motion of Dynamical Systems, Random Processes, The Dynamics of Many Particle Systems, Normal Modes and Waves, Electrodynamics, Numerical and Monte Carlo Methods, Percolation, Fractals and Kinetic Growth Models, Complex Systems, Monte Carlo Simulations of Thermal Systems, Quantum Systems, Visualization and Rigid Body Dynamics, Seeing in Special and General Relativity, Epilogue: The Unity of Physics. For all readers interested in developing programming habits in the context of doing phy...

  16. Physics and engineering design of the accelerator and electron dump for SPIDER

    Science.gov (United States)

    Agostinetti, P.; Antoni, V.; Cavenago, M.; Chitarin, G.; Marconato, N.; Marcuzzi, D.; Pilan, N.; Serianni, G.; Sonato, P.; Veltri, P.; Zaccaria, P.

    2011-06-01

    The ITER Neutral Beam Test Facility (PRIMA) is planned to be built at Consorzio RFX (Padova, Italy). PRIMA includes two experimental devices: a full size ion source with low voltage extraction called SPIDER and a full size neutral beam injector at full beam power called MITICA. SPIDER is the first experimental device to be built and operated, aiming at testing the extraction of a negative ion beam (made of H- and in a later stage D- ions) from an ITER size ion source. The main requirements of this experiment are a H-/D- extracted current density larger than 355/285 A m-2, an energy of 100 keV and a pulse duration of up to 3600 s. Several analytical and numerical codes have been used for the design optimization process, some of which are commercial codes, while some others were developed ad hoc. The codes are used to simulate the electrical fields (SLACCAD, BYPO, OPERA), the magnetic fields (OPERA, ANSYS, COMSOL, PERMAG), the beam aiming (OPERA, IRES), the pressure inside the accelerator (CONDUCT, STRIP), the stripping reactions and transmitted/dumped power (EAMCC), the operating temperature, stress and deformations (ALIGN, ANSYS) and the heat loads on the electron dump (ED) (EDAC, BACKSCAT). An integrated approach, taking into consideration at the same time physics and engineering aspects, has been adopted all along the design process. Particular care has been taken in investigating the many interactions between physics and engineering aspects of the experiment. According to the 'robust design' philosophy, a comprehensive set of sensitivity analyses was performed, in order to investigate the influence of the design choices on the most relevant operating parameters. The design of the SPIDER accelerator, here described, has been developed in order to satisfy with reasonable margin all the requirements given by ITER, from the physics and engineering points of view. In particular, a new approach to the compensation of unwanted beam deflections inside the accelerator

  17. Parallelisation of PyHEADTAIL, a Collective Beam Dynamics Code for Particle Accelerator Physics

    CERN Document Server

    Oeftiger, Adrian

    2016-01-01

    The longitudinal tracking engine of the particle accelerator simulation application PyHEADTAIL shows considerable potential for parallelisation. For basic beam circulation, the tracking functionality with the leap-frog algorithm is extracted and compared between a sequential C implementation and a concurrent CUDA C implementation for 1 million revolutions. Including the sequential data I/O in both versions, a speedup of up to S = 100 is observed, which is of the order of magnitude expected from Amdahl's law. From O(100) macro-particles onwards, the overhead of initialising the CUDA device is outweighed by the concurrent computations on the 448 available CUDA cores.
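
    To make the tracking kernel concrete, here is a minimal kick-drift longitudinal tracking loop in Python/NumPy, the kind of per-macro-particle update that maps naturally onto GPU threads. The linearised RF kick and all machine parameters are illustrative stand-ins, not PyHEADTAIL's actual API or settings.

```python
import numpy as np

# Kick-drift (leap-frog style) longitudinal tracking of macro-particles.
# Parameters are hypothetical; a real tracker uses the machine's RF settings.
n_macroparticles, n_turns = 100_000, 1000
eta = 3.2e-4                 # slippage factor (hypothetical)
circumference = 6911.0       # ring circumference [m] (hypothetical)
k_rf = 4.5e-4                # linearised RF focusing strength [1/m] (hypothetical)

rng = np.random.default_rng(0)
z = rng.normal(0.0, 0.1, n_macroparticles)      # longitudinal position [m]
dp = rng.normal(0.0, 1e-4, n_macroparticles)    # relative momentum deviation

for _ in range(n_turns):
    dp -= k_rf * z                    # RF kick: update momenta at fixed positions
    z -= eta * circumference * dp     # drift: update positions at fixed momenta
print(z.std(), dp.std())
```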

  18. Computational Materials Science and Chemistry: Accelerating Discovery and Innovation through Simulation-Based Engineering and Science

    Energy Technology Data Exchange (ETDEWEB)

    Crabtree, George [Argonne National Lab. (ANL), Argonne, IL (United States); Glotzer, Sharon [University of Michigan; McCurdy, Bill [University of California Davis; Roberto, Jim [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2010-07-26

    This report is based on a SC Workshop on Computational Materials Science and Chemistry for Innovation on July 26-27, 2010, to assess the potential of state-of-the-art computer simulations to accelerate understanding and discovery in materials science and chemistry, with a focus on potential impacts in energy technologies and innovation. The urgent demand for new energy technologies has greatly exceeded the capabilities of today's materials and chemical processes. To convert sunlight to fuel, efficiently store energy, or enable a new generation of energy production and utilization technologies requires the development of new materials and processes of unprecedented functionality and performance. New materials and processes are critical pacing elements for progress in advanced energy systems and virtually all industrial technologies. Over the past two decades, the United States has developed and deployed the world's most powerful collection of tools for the synthesis, processing, characterization, and simulation and modeling of materials and chemical systems at the nanoscale, dimensions of a few atoms to a few hundred atoms across. These tools, which include world-leading x-ray and neutron sources, nanoscale science facilities, and high-performance computers, provide an unprecedented view of the atomic-scale structure and dynamics of materials and the molecular-scale basis of chemical processes. For the first time in history, we are able to synthesize, characterize, and model materials and chemical behavior at the length scale where this behavior is controlled. This ability is transformational for the discovery process and, as a result, confers a significant competitive advantage. Perhaps the most spectacular increase in capability has been demonstrated in high performance computing. Over the past decade, computational power has increased by a factor of a million due to advances in hardware and software. This rate of improvement, which shows no sign of

  19. Fundamental Physical Processes in Coronae: Waves, Turbulence, Reconnection, and Particle Acceleration

    CERN Document Server

    Aschwanden, Markus J

    2007-01-01

    Our understanding of fundamental processes in the solar corona has progressed greatly thanks to the space observations of SMM, Yohkoh, Compton GRO, SOHO, TRACE, RHESSI, and STEREO. We now observe acoustic waves, MHD oscillations, turbulence-related line broadening, magnetic configurations related to reconnection processes, and radiation from high-energy particles on a routine basis. We review a number of key observations in EUV, soft X-rays, and hard X-rays that have advanced our physical understanding of the solar corona, in terms of hydrodynamics, MHD, plasma heating, and particle acceleration processes.

  20. Promoting Physical Activity through Hand-Held Computer Technology

    Science.gov (United States)

    King, Abby C.; Ahn, David K.; Oliveira, Brian M.; Atienza, Audie A.; Castro, Cynthia M.; Gardner, Christopher D.

    2009-01-01

    Background: Efforts to achieve population-wide increases in walking and similar moderate-intensity physical activities potentially can be enhanced through relevant applications of state-of-the-art interactive communication technologies. Yet few systematic efforts to evaluate the efficacy of hand-held computers and similar devices for enhancing physical activity levels have occurred. The purpose of this first-generation study was to evaluate the efficacy of a hand-held computer (i.e., personal digital assistant [PDA]) for increasing moderate intensity or more vigorous (MOD+) physical activity levels over 8 weeks in mid-life and older adults relative to a standard information control arm. Design: Randomized, controlled 8-week experiment. Data were collected in 2005 and analyzed in 2006-2007. Setting/Participants: Community-based study of 37 healthy, initially underactive adults aged 50 years and older who were randomized and completed the 8-week study (intervention=19, control=18). Intervention: Participants received an instructional session and a PDA programmed to monitor their physical activity levels twice per day and provide daily and weekly individualized feedback, goal setting, and support. Controls received standard, age-appropriate written physical activity educational materials. Main Outcome Measure: Physical activity was assessed via the Community Healthy Activities Model Program for Seniors (CHAMPS) questionnaire at baseline and 8 weeks. Results: Relative to controls, intervention participants reported significantly greater 8-week mean estimated caloric expenditure levels and minutes per week in MOD+ activity (p<0.04). Satisfaction with the PDA was reasonably high in this largely PDA-naive sample. Conclusions: Results from this first-generation study indicate that hand-held computers may be effective tools for increasing initial physical activity levels among underactive adults. PMID:18201644

  1. Promoting physical activity through hand-held computer technology.

    Science.gov (United States)

    King, Abby C; Ahn, David K; Oliveira, Brian M; Atienza, Audie A; Castro, Cynthia M; Gardner, Christopher D

    2008-02-01

    Efforts to achieve population-wide increases in walking and similar moderate-intensity physical activities potentially can be enhanced through relevant applications of state-of-the-art interactive communication technologies. Yet few systematic efforts to evaluate the efficacy of hand-held computers and similar devices for enhancing physical activity levels have occurred. The purpose of this first-generation study was to evaluate the efficacy of a hand-held computer (i.e., personal digital assistant [PDA]) for increasing moderate intensity or more vigorous (MOD+) physical activity levels over 8 weeks in mid-life and older adults relative to a standard information control arm. Randomized, controlled 8-week experiment. Data were collected in 2005 and analyzed in 2006-2007. Community-based study of 37 healthy, initially underactive adults aged 50 years and older who were randomized and completed the 8-week study (intervention=19, control=18). Participants received an instructional session and a PDA programmed to monitor their physical activity levels twice per day and provide daily and weekly individualized feedback, goal setting, and support. Controls received standard, age-appropriate written physical activity educational materials. Physical activity was assessed via the Community Healthy Activities Model Program for Seniors (CHAMPS) questionnaire at baseline and 8 weeks. Relative to controls, intervention participants reported significantly greater 8-week mean estimated caloric expenditure levels and minutes per week in MOD+ activity (p<0.04). Results from this first-generation study indicate that hand-held computers may be effective tools for increasing initial physical activity levels among underactive adults.

  2. Operational radiation protection in high-energy physics accelerators: implementation of ALARA in design and operation of accelerators.

    Science.gov (United States)

    Fassò, A; Rokni, S

    2009-11-01

    This paper considers the historical evolution of the concept of optimisation of radiation exposures, as commonly expressed by the acronym ALARA, and discusses its application to various aspects of radiation protection at high-energy accelerators.

  3. Proton-driven plasma wakefield acceleration: a path to the future of high-energy particle physics

    CERN Document Server

    Assmann, R; Bohl, T; Bracco, C; Buttenschön, B; Butterworth, A; Caldwell, A; Chattopadhyay, S; Cipiccia, S; Feldbaumer, E; Fonseca, R A; Goddard, B; Gross, M; Grulke, O; Gschwendtner, E; Holloway, J; Huang, C; Jaroszynski, D; Jolly, S; Kempkes, P; Lopes, N; Lotov, K; Machacek, J; Mandry, S R; McKenzie, J W; Meddahi, M; Militsyn, B L; Moschuering, N; Muggli, P; Najmudin, Z; Noakes, T C Q; Norreys, P A; Öz, E; Pardons, A; Petrenko, A; Pukhov, A; Rieger, K; Reimann, O; Ruhl, H; Shaposhnikova, E; Silva, L O; Sosedkin, A; Tarkeshian, R; Trines, R M G N; Tückmantel, T; Vieira, J; Vincke, H; Wing, M; Xia, G

    2014-01-01

    New acceleration technology is mandatory for the future elucidation of fundamental particles and their interactions. A promising approach is to exploit the properties of plasmas. Past research has focused on creating large-amplitude plasma waves by injecting an intense laser pulse or an electron bunch into the plasma. However, the maximum energy gain of electrons accelerated in a single plasma stage is limited by the energy of the driver. Proton bunches are the most promising drivers of wakefields to accelerate electrons to the TeV energy scale in a single stage. An experimental program at CERN -- the AWAKE experiment -- has been launched to study in detail the important physical processes and to demonstrate the power of proton-driven plasma wakefield acceleration. Here we review the physical principles and some experimental considerations for a future proton-driven plasma wakefield accelerator.

  4. Proton-driven plasma wakefield acceleration: a path to the future of high-energy particle physics

    CERN Document Server

    Assmann, R; Bohl, T; Bracco, C; Buttenschon, B; Butterworth, A; Caldwell, A; Chattopadhyay, S; Cipiccia, S; Feldbaumer, E; Fonseca, R A; Goddard, B; Gross, M; Grulke, O; Gschwendtner, E; Holloway, J; Huang, C; Jaroszynski, D; Jolly, S; Kempkes, P; Lopes, N; Lotov, K; Machacek, J; Mandry, S R; McKenzie, J W; Meddahi, M; Militsyn, B L; Moschuering, N; Muggli, P; Najmudin, Z; Noakes, T C Q; Norreys, P A; Oz, E; Pardons, A; Petrenko, A; Pukhov, A; Rieger, K; Reimann, O; Ruhl, H; Shaposhnikova, E; Silva, L O; Sosedkin, A; Tarkeshian, R; Trines, R M G N; Tuckmantel, T; Vieira, J; Vincke, H; Wing, M; Xia, G

    2014-01-01

    New acceleration technology is mandatory for the future elucidation of fundamental particles and their interactions. A promising approach is to exploit the properties of plasmas. Past research has focused on creating large-amplitude plasma waves by injecting an intense laser pulse or an electron bunch into the plasma. However, the maximum energy gain of electrons accelerated in a single plasma stage is limited by the energy of the driver. Proton bunches are the most promising drivers of wakefields to accelerate electrons to the TeV energy scale in a single stage. An experimental program at CERN -- the AWAKE experiment -- has been launched to study in detail the important physical processes and to demonstrate the power of proton-driven plasma wakefield acceleration. Here we review the physical principles and some experimental considerations for a future proton-driven plasma wakefield accelerator.

  5. Status of MAPA (Modular Accelerator Physics Analysis) and the Tech-X Object-Oriented Accelerator Library

    Science.gov (United States)

    Cary, J. R.; Shasharina, S.; Bruhwiler, D. L.

    1998-04-01

    The MAPA code is a fully interactive accelerator modeling and design tool consisting of a GUI and two object-oriented C++ libraries: a general library suitable for treatment of any dynamical system, and an accelerator library including many element types plus an accelerator class. The accelerator library inherits directly from the system library, which uses hash tables to store any relevant parameters or strings. The GUI can access these hash tables in a general way, allowing the user to invoke a window displaying all relevant parameters for a particular element type or for the accelerator class, with the option to change those parameters. The system library can advance an arbitrary number of dynamical variables through an arbitrary mapping. The accelerator class inherits this capability and overloads the relevant functions to advance the phase space variables of a charged particle through a string of elements. Among other things, the GUI makes phase space plots and finds fixed points of the map. We discuss the object hierarchy of the two libraries and use of the code.
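
    A compact Python sketch of the object hierarchy described above, with parameters kept in a hash table and an accelerator class that overrides the generic advance to push phase-space coordinates through a string of elements; class and method names here are illustrative, not those of the MAPA C++ libraries.

```python
# Generic dynamical system with a hash table of parameters; an accelerator
# subclass overrides the map to advance phase space through its elements.
# Names and the simple (x, x') phase space are illustrative only.
class DynamicalSystem:
    def __init__(self, **params):
        self.params = dict(params)       # hash table of named parameters

    def advance(self, state):
        raise NotImplementedError        # the mapping, supplied by subclasses

class Drift(DynamicalSystem):
    def advance(self, state):
        x, xp = state
        return (x + self.params["length"] * xp, xp)

class Quadrupole(DynamicalSystem):
    def advance(self, state):            # thin-lens focusing kick
        x, xp = state
        return (x, xp - x / self.params["focal_length"])

class Accelerator(DynamicalSystem):
    def __init__(self, elements, **params):
        super().__init__(**params)
        self.elements = list(elements)

    def advance(self, state):             # one pass through the element string
        for element in self.elements:
            state = element.advance(state)
        return state

ring = Accelerator([Drift(length=1.0), Quadrupole(focal_length=2.5)])
print(ring.advance((1e-3, 0.0)))
```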

  6. Nuclear Physics Programs for the Future Rare Isotope Beams Accelerator Facility in Korea

    CERN Document Server

    Moon, Chang-Bum

    2016-01-01

    We present nuclear physics programs based on the planned experiments using rare isotope beams (RIBs) at the future Korean Rare Isotope Beams Accelerator facility, RAON. This ambitious facility has both Isotope Separation On Line (ISOL) and fragmentation capabilities for producing RIBs and accelerating beams of nuclides over a wide mass range with energies of a few to hundreds of MeV per nucleon. Low-energy RIBs at Elab = 5 to 20 MeV per nucleon are intended for the study of nuclear structure and nuclear astrophysics toward and beyond the drip lines, while higher-energy RIBs, produced by in-flight fragmentation of the re-accelerated ions from the ISOL, make it possible to explore the neutron drip lines in intermediate mass regions. The planned programs aim to investigate the nuclear structure of exotic nuclei toward and beyond the nucleon drip lines by addressing the following issues: how the shell structure evolves in areas of extreme proton to neutron imbalance; whether isospin symmetry is maintained in isobaric mirror nu...

  7. Semiempirical Quantum Chemical Calculations Accelerated on a Hybrid Multicore CPU-GPU Computing Platform.

    Science.gov (United States)

    Wu, Xin; Koslowski, Axel; Thiel, Walter

    2012-07-10

    In this work, we demonstrate that semiempirical quantum chemical calculations can be accelerated significantly by leveraging the graphics processing unit (GPU) as a coprocessor on a hybrid multicore CPU-GPU computing platform. Semiempirical calculations using the MNDO, AM1, PM3, OM1, OM2, and OM3 model Hamiltonians were systematically profiled for three types of test systems (fullerenes, water clusters, and solvated crambin) to identify the most time-consuming sections of the code. The corresponding routines were ported to the GPU and optimized employing both existing library functions and a GPU kernel that carries out a sequence of noniterative Jacobi transformations during pseudodiagonalization. The overall computation times for single-point energy calculations and geometry optimizations of large molecules were reduced by one order of magnitude for all methods, as compared to runs on a single CPU core.
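
    For orientation, the elementary operation behind the pseudodiagonalization step mentioned above is a 2x2 Jacobi rotation that zeroes one off-diagonal element of a symmetric matrix. The NumPy sketch below performs cyclic sweeps of such rotations on a small matrix; it only illustrates the numerics and is not the ported GPU kernel, which applies batched rotations to the occupied-virtual block.

```python
import numpy as np

# One cyclic Jacobi sweep: each rotation zeroes one off-diagonal element of a
# symmetric matrix A and accumulates the rotations in V. Illustrative only.
def jacobi_sweep(A, V):
    n = A.shape[0]
    for p in range(n - 1):
        for q in range(p + 1, n):
            if abs(A[p, q]) < 1e-14:
                continue
            theta = 0.5 * np.arctan2(2.0 * A[p, q], A[q, q] - A[p, p])
            c, s = np.cos(theta), np.sin(theta)
            J = np.eye(n)
            J[p, p] = J[q, q] = c
            J[p, q], J[q, p] = s, -s
            A[:] = J.T @ A @ J            # similarity transform zeroes A[p, q]
            V[:] = V @ J                  # accumulate eigenvectors

A = np.array([[4.0, 1.0, 0.5],
              [1.0, 3.0, 0.2],
              [0.5, 0.2, 2.0]])
V = np.eye(3)
for _ in range(5):
    jacobi_sweep(A, V)
print(np.diag(A))                         # approximate eigenvalues
```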

  8. A Unified Algorithm for Accelerating Edit-Distance Computation via Text-Compression

    CERN Document Server

    Hermelin, Danny; Landau, Shir; Weimann, Oren

    2009-01-01

    We present a unified framework for accelerating edit-distance computation between two compressible strings using straight-line programs. For two strings of total length $N$ having straight-line program representations of total size $n$, we provide an algorithm running in $O(n^{1.4}N^{1.2})$ time for computing the edit-distance of these two strings under any rational scoring function, and an $O(n^{1.34}N^{1.34})$ time algorithm for arbitrary scoring functions. This improves on a recent algorithm of Tiskin that runs in $O(nN^{1.5})$ time, and works only for rational scoring functions. Also, in the last part of the paper, we show how the classical Four-Russians technique can be incorporated into our SLP edit-distance scheme, giving us a simple $\Omega(\lg N)$ speed-up in the case of arbitrary scoring functions, for any pair of strings.
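
    For contrast, the baseline that the compressed-string algorithms above are measured against is the classical O(|A||B|) dynamic program; a minimal Python version with unit costs (one instance of a rational scoring function) is sketched below.

```python
# Classical quadratic-time edit-distance dynamic program (unit costs), the
# uncompressed baseline that SLP-based schemes accelerate.
def edit_distance(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                  # deletion
                            curr[j - 1] + 1,              # insertion
                            prev[j - 1] + (ca != cb)))    # substitution or match
        prev = curr
    return prev[-1]

assert edit_distance("kitten", "sitting") == 3
```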

  9. Computationally efficient methods for modelling laser wakefield acceleration in the blowout regime

    CERN Document Server

    Cowan, B M; Beck, A; Davoine, X; Bunkers, K; Lifschitz, A F; Lefebvre, E; Bruhwiler, D L; Shadwick, B A; Umstadter, D P

    2012-01-01

    Electron self-injection and acceleration until dephasing in the blowout regime is studied for a set of initial conditions typical of recent experiments with 100 terawatt-class lasers. Two different approaches to computationally efficient, fully explicit, three-dimensional particle-in-cell modelling are examined. First, the Cartesian code VORPAL using a perfect-dispersion electromagnetic solver precisely describes the laser pulse and bubble dynamics, taking advantage of coarser resolution in the propagation direction, with a proportionally larger time step. Using third-order splines for macroparticles helps suppress the sampling noise while keeping the usage of computational resources modest. The second way to reduce the simulation load is using reduced-geometry codes. In our case, the quasi-cylindrical code CALDER-CIRC uses decomposition of fields and currents into a set of poloidal modes, while the macroparticles move in the Cartesian 3D space. Cylindrical symmetry of the interaction allows using just two mo...

  10. Physics and engineering studies on the MITICA accelerator: comparison among possible design solutions

    Science.gov (United States)

    Agostinetti, P.; Antoni, V.; Cavenago, M.; Chitarin, G.; Pilan, N.; Marcuzzi, D.; Serianni, G.; Veltri, P.

    2011-09-01

    Consorzio RFX in Padova is currently using a comprehensive set of numerical and analytical codes for the physics and engineering design of the SPIDER (Source for Production of Ion of Deuterium Extracted from RF plasma) and MITICA (Megavolt ITER Injector Concept Advancement) experiments, planned to be built at Consorzio RFX. This paper presents a set of studies on different possible geometries for the MITICA accelerator, with the objective of comparing different design concepts and choosing the most suitable one (or ones) to be further developed and possibly adopted in the experiment. Different design solutions have been discussed and compared, taking into account their advantages and drawbacks from both the physics and engineering points of view.

  11. IOTA (Integrable Optics Test Accelerator): facility and experimental beam physics program

    Energy Technology Data Exchange (ETDEWEB)

    Antipov, S.; Broemmelsiek, D.; Bruhwiler, D.; Edstrom, D.; Harms, E.; Lebedev, V.; Leibfritz, J.; Nagaitsev, S.; Park, C. S.; Piekarz, H.; Piot, P.; Prebys, E.; Romanov, A.; Ruan, J.; Sen, T.; Stancari, G.; Thangaraj, C.; Thurman-Keup, R.; Valishev, A.; Shiltsev, V.

    2017-03-01

    The Integrable Optics Test Accelerator (IOTA) is a storage ring for advanced beam physics research currently being built and commissioned at Fermilab. It will operate with protons and electrons using injectors with momenta of 70 and 150 MeV/c, respectively. The research program includes the study of nonlinear focusing integrable optical beam lattices based on special magnets and electron lenses, beam dynamics of space-charge effects and their compensation, optical stochastic cooling, and several other experiments. In this article, we present the design and main parameters of the facility, outline progress to date and provide the timeline of the construction, commissioning and research. The physical principles, design, and hardware implementation plans for the major IOTA experiments are also discussed.

  12. Quantum mechanics, high energy physics and accelerators selected papers of John S Bell (with commentary)

    CERN Document Server

    Bell, John Stewart; Gottfried, Kurt; Veltman, Martinus J G

    1994-01-01

    The scientific career of John Stewart Bell was distinguished by its breadth and its quality. He made several very important contributions to scientific fields as diverse as accelerator physics, high energy physics and the foundations of quantum mechanics. This book contains a large part of J S Bell's publications, including those that are recognized as his most important achievements, as well as others that are for no good reason less well known. The selection was made by Mary Bell, Martinus Veltman and Kurt Gottfried, all of whom were involved with John Bell both personally and professionally throughout a large part of his life. An introductory chapter has been written to help place the selected papers in a historical context and to review their significance. This book comprises an impressive collection of outstanding scientific work of one of the greatest scientists of the recent past, and it will remain important and influential for a long time to come.

  13. Radiosurgery with photon beams; Physical aspects and adequacy of linear accelerators

    Energy Technology Data Exchange (ETDEWEB)

    Podgorsak, E.B.; Pla, M.; Souhami, L. (McGill University, Montreal (Canada). Department of Radiation Oncology); Pike, G.B.; Olivier, A. (McGill University, Montreal (Canada). Department of Neurosurgery)

    1990-03-01

    The question of the adequacy of isocentric linear accelerators (linacs) for use in radiosurgery is addressed. The general physical requirements for radiosurgery, mainly a high spatial and numerical accuracy of dose delivery, reasonable treatment time, and low skin and leakage dose as well as cost considerations are examined. Various linac-based procedures are analyzed in view of their ability to meet these requirements and are contrasted with the clinically proven system of the Gamma unit. It is shown that the linac-based multiple converging arcs techniques and the dynamic rotation meet the stringent physical requirements on dose delivery and are thus viable alternatives to radiosurgery with the commercially available and dedicated Gamma unit. (author). 15 refs.; 2 figs.; 1 tab.

  14. Beta Beams: an accelerator based facility to explore Neutrino oscillation physics

    CERN Document Server

    Wildner, E; Hansen, C; De Melo Mendonca, T; Stora, T; Payet, J; Chance, A; Zorin, V; Izotov, I; Rasin, S; Sidorov, A; Skalyga, V; De Angelis, G; Prete, G; Cinausero, M; Kravchuk, VL; Gramegna, F; Marchi, T; Collazuol, G; De Rosa, G; Delbar, T; Loiselet, M; Keutgen, T; Mitrofanov, S; Lamy, T; Latrasse, L; Marie-Jeanne, M; Sortais, P; Thuillier, T; Debray, F; Trophime, C; Hass, M; Hirsh, T; Berkovits, D; Stahl, A

    2011-01-01

    The discovery that the neutrino changes flavor as it travels through space has implications for the Standard Model of particle physics (SM) [1]. Determining the contribution of neutrinos to the SM requires precise measurements of the parameters governing the neutrino oscillations. This will require a high intensity beam-based neutrino oscillation facility. The EURONu Design Study will review three currently accepted methods of realizing this facility (the so-called Super-Beams, Beta Beams and Neutrino Factories) and perform a cost assessment that, coupled with the physics performance, will give the European research authorities the means to make a decision on the layout and construction of the future European neutrino oscillation facility. “Beta Beams” produce collimated pure electron neutrino and antineutrino beams by accelerating beta active ions to high energies and letting them decay in a race-track shaped storage ring. EURONu Beta Beams are based on CERN's infrastructure and the fact that some of the already ...

  15. Physical Mechanism of the Transverse Instability in Radiation Pressure Ion Acceleration

    Science.gov (United States)

    Wan, Y.; Pai, C.-H.; Zhang, C. J.; Li, F.; Wu, Y. P.; Hua, J. F.; Lu, W.; Gu, Y. Q.; Silva, L. O.; Joshi, C.; Mori, W. B.

    2016-12-01

    The transverse stability of the target is crucial for obtaining high quality ion beams using the laser radiation pressure acceleration (RPA) mechanism. In this Letter, a theoretical model and supporting two-dimensional (2D) particle-in-cell (PIC) simulations are presented to clarify the physical mechanism of the transverse instability observed in the RPA process. It is shown that the density ripples of the target foil are mainly induced by the coupling between the transverse oscillating electrons and the quasistatic ions, a mechanism similar to the oscillating two stream instability in the inertial confinement fusion research. The predictions of the mode structure and the growth rates from the theory agree well with the results obtained from the PIC simulations in various regimes, indicating the model contains the essence of the underlying physics of the transverse breakup of the target.

  16. Physical mechanism of the transverse instability in radiation pressure ion acceleration

    CERN Document Server

    Wan, Y; Zhang, C J; Li, F; Wu, Y P; Hua, J F; Lu, W; Gu, Y Q; Silva, L O; Joshi, C; Mori, W B

    2016-01-01

    The transverse stability of the target is crucial for obtaining high quality ion beams using the laser radiation pressure acceleration (RPA) mechanism. In this letter, a theoretical model and supporting two-dimensional (2D) Particle-in-Cell (PIC) simulations are presented to clarify the physical mechanism of the transverse instability observed in the RPA process. It is shown that the density ripples of the target foil are mainly induced by the coupling between the transverse oscillating electrons and the quasi-static ions, a mechanism similar to the transverse two stream instability in the inertial confinement fusion (ICF) research. The predictions of the mode structure and the growth rates from the theory agree well with the results obtained from the PIC simulations in various regimes, indicating the model contains the essence of the underlying physics of the transverse break-up of the target.

  17. Historic Seismicity, Computed Peak Ground Accelerations, and Seismic Site Conditions for Northeast Mexico

    Science.gov (United States)

    Montalvo-Arriet, J. C.; Galván-Ramírez, I. N.; Ramos-Zuñiga, L. G.; Navarro de León, I.; Ramírez-Fernández, J. A.; Quintanilla-López, Y.; Cavazos-Tovar, N. P.

    2007-05-01

    In this study we present the historic seismicity, computed peak ground accelerations, and mapping of seismic site conditions for northeast Mexico. We start with a compilation of the regional seismicity in northeast Mexico (24-31°N, 87-106°W) for the 1787-2006 period. Our study area lies within three morphotectonic provinces: Basin and Range and Rio Grande rift, Sierra Madre Oriental and Gulf Coastal Plain. Peak ground acceleration (PGA) maps were computed for three different scenarios: 1928 Parral, Chihuahua (MW = 6.5); 1931 Valentine, Texas (MW = 6.4); and a hypothetical earthquake located in central Coahuila (MW = 6.5). Ground acceleration values were computed using attenuation relations developed for central and eastern North America and the Basin and Range province. The hypothetical earthquake in central Coahuila is considered a critical scenario for the main cities of northeast Mexico. The damage associated with this hypothetical earthquake could be severe because the majority of the buildings were constructed without allowance for seismic accelerations. The expected PGA values in Monterrey, Saltillo and Monclova range from 30 to 70 cm/s² (0.03 to 0.07g). This earthquake might also produce or trigger significant landslides and rock falls in the Sierra Madre Oriental, where several cities are located (e.g. suburbs of Monterrey). Additionally, the Vs30 distribution for the state of Nuevo Leon and the cities of Linares and Monterrey are presented. The Vs30 data was obtained using seismic refraction profiling correlated with borehole information. According to NEHRP soil classification, site classes A, B and C are dominant. Sites with class D occupy minor areas in both cities. Due to the semi-arid conditions in northeast Mexico, we obtained the highest values of Vs30 in Quaternary deposits (alluvium) cemented by caliche. Similar values of Vs30 were obtained in Reno and Las Vegas, Nevada. This work constitutes the first attempt at understanding and

  18. Exascale computing and what it means for shock physics

    Science.gov (United States)

    Germann, Timothy

    2015-06-01

    The U.S. Department of Energy is preparing to launch an Exascale Computing Initiative, to address the myriad challenges required to deploy and effectively utilize an exascale-class supercomputer (i.e., one capable of performing 10^18 operations per second) in the 2023 timeframe. Since physical (power dissipation) requirements limit clock rates to at most a few GHz, this will necessitate the coordination of on the order of a billion concurrent operations, requiring sophisticated system and application software, and underlying mathematical algorithms, that may differ radically from traditional approaches. Even at the smaller workstation or cluster level of computation, the massive concurrency and heterogeneity within each processor will impact computational scientists. Through the multi-institutional, multi-disciplinary Exascale Co-design Center for Materials in Extreme Environments (ExMatEx), we have initiated an early and deep collaboration between domain (computational materials) scientists, applied mathematicians, computer scientists, and hardware architects, in order to establish the relationships between algorithms, software stacks, and architectures needed to enable exascale-ready materials science application codes within the next decade. In my talk, I will discuss these challenges, and what it will mean for exascale-era electronic structure, molecular dynamics, and engineering-scale simulations of shock-compressed condensed matter. In particular, we anticipate that the emerging hierarchical, heterogeneous architectures can be exploited to achieve higher physical fidelity simulations using adaptive physics refinement. This work is supported by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research.

  19. Integrating computers in physics teaching: An Indian perspective

    Science.gov (United States)

    Jolly, Pratibha

    1997-03-01

    The University of Delhi has around twenty affiliated undergraduate colleges that offer a three-year physics major program to nearly five hundred students. All follow a common curriculum and submit to a centralized examination. This structure of tertiary education makes it relatively difficult to implement radical or rapid changes in the formal curriculum. The technology onslaught has, at last, irrevocably altered this; computers are carving new windows in old citadels and defining the agenda in teaching-learning environments the world over. In 1992, we formally introduced Computational Physics as a core paper in the second year of the Bachelor's program. As yet, the emphasis is on imparting familiarity with computers, a programming language and rudiments of numerical algorithms. In a parallel development, we also introduced a strong component of instrumentation with modern day electronic devices, including microprocessors. Many of us, however, would like to see not just computer presence in our curriculum but a totally new curriculum and teaching strategy that exploits, befittingly, the new technology. The current challenge is to realize in practice the full potential of the computer as the proverbial versatile tool: interfacing laboratory experiments for real-time acquisition and control of data; enabling rigorous analysis and data modeling; simulating micro-worlds and real life phenomena; establishing new cognitive linkages between theory and empirical observation; and between abstract constructs and visual representations.

  20. For information - Université de Genève : Accelerator Physics Challenges for the Large Hadron Collider at CERN

    CERN Multimedia

    Université de Genève

    2005-01-01

    UNIVERSITE DE GENEVE Faculte des sciences Section de physique - Département de physique nucléaire et corpusculaire 24, Quai Ernest-Ansermet - 1211 GENEVE 4 Tél : (022) 379 62 73 Fax: (022) 379 69 92 Wednesday 16 March SEMINAIRE DE PHYSIQUE CORPUSCULAIRE at 17:00 - Auditoire Stückelberg Accelerator Physics Challenges for the Large Hadron Collider at CERN Prof. Olivier Bruning / CERN The Large Hadron Collider project at CERN will bring the energy frontier of high energy particle physics back to Europe and with it push the accelerator technology into uncharted territory. The talk presents the LHC project in the context of the past CERN accelerator developments and addresses the main challenges in terms of technology and accelerator physics. Information: http://dpnc.unige.ch/seminaire/annonce.html Organizer: A. Cervera Villanueva

  1. Hierarchical Acceleration of Multilevel Monte Carlo Methods for Computationally Expensive Simulations in Reservoir Modeling

    Science.gov (United States)

    Zhang, G.; Lu, D.; Webster, C.

    2014-12-01

    The rational management of oil and gas reservoirs requires an understanding of their response to existing and planned schemes of exploitation and operation. Such understanding requires analyzing and quantifying the influence of subsurface uncertainties on predictions of oil and gas production. As the subsurface properties are typically heterogeneous, giving rise to a large number of model parameters, the dimension-independent Monte Carlo (MC) method is usually used for uncertainty quantification (UQ). Recently, multilevel Monte Carlo (MLMC) methods were proposed as a variance reduction technique to improve the computational efficiency of MC methods in UQ. In this effort, we propose a new acceleration approach for the MLMC method that further reduces the total computational cost by exploiting model hierarchies. Specifically, for each model simulation on a newly added level of MLMC, we take advantage of an approximation of the model outputs constructed from simulations on previous levels to provide better initial states for the new simulations, which improves efficiency by, e.g., reducing the number of iterations in linear system solves or the number of time steps needed. This is achieved by using mesh-free interpolation methods, such as Shepard interpolation and radial basis approximation. Our approach is applied to a highly heterogeneous reservoir model from the tenth SPE project. The results indicate that the accelerated MLMC can achieve the same accuracy as standard MLMC at a significantly reduced cost.
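
    To fix ideas, a schematic multilevel Monte Carlo estimator is sketched below: the expectation on the finest level is written as a telescoping sum of level corrections, sampled with many cheap coarse runs and few expensive fine runs. The toy `model` is a hypothetical stand-in for a reservoir simulation at a given refinement level; in the acceleration described above, the coarse-level outputs would additionally seed the initial state of the fine-level solver.

```python
import numpy as np

# Schematic MLMC: E[Q_L] = E[Q_0] + sum_l E[Q_l - Q_{l-1}]. The forward model
# is a hypothetical stand-in whose resolution (and cost) doubles per level.
rng = np.random.default_rng(1)

def model(theta, level):
    m = 2 ** (level + 2)                          # resolution doubles per level
    x = (np.arange(m) + 0.5) * theta / m
    return np.cos(x).sum() * theta / m            # midpoint rule -> sin(theta)

def mlmc_estimate(n_levels=4, samples=(4000, 1000, 250, 60)):
    estimate = 0.0
    for level in range(n_levels):
        thetas = rng.uniform(0.0, np.pi, samples[level])    # random inputs
        fine = np.array([model(t, level) for t in thetas])
        if level == 0:
            corrections = fine                    # plain MC on the coarsest level
        else:
            coarse = np.array([model(t, level - 1) for t in thetas])
            corrections = fine - coarse           # level-correction samples
        estimate += corrections.mean()
    return estimate

print(mlmc_estimate())    # approximates E[sin(theta)] = 2/pi for theta ~ U(0, pi)
```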

  2. Scratch as a computational modelling tool for teaching physics

    Science.gov (United States)

    Lopez, Victor; Hernandez, Maria Isabel

    2015-05-01

    The Scratch online authoring tool, which features a simple programming language that has been adapted to primary and secondary students, is being used more and more in schools as it offers students and teachers the opportunity to build scientific models and evaluate their behaviour, just as can be done with computational modelling programs. In this article, we briefly discuss why Scratch could be a useful tool for computational modelling in the primary or secondary physics classroom, and we present practical examples of how it can be used to build a model.
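
    The computational idea behind such a Scratch model can be written in a few lines of Python: a quantity is updated frame by frame from a simple rule, here a ball dropped under gravity. The snippet is an illustrative analogue, not an example taken from the article.

```python
# Frame-by-frame model of a falling ball, the Python analogue of the
# "forever / change y by ..." blocks a student would assemble in Scratch.
# Values are illustrative.
g, dt = -9.8, 0.02          # gravitational acceleration [m/s^2], frame time [s]
y, vy = 10.0, 0.0           # initial height [m] and vertical speed [m/s]

while y > 0:
    vy += g * dt            # "change speed by g*dt"
    y += vy * dt            # "change y by vy*dt"
    print(f"y = {y:5.2f} m, vy = {vy:6.2f} m/s")
```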

  3. Particle accelerators, colliders, and the story of high energy physics charming the cosmic snake

    CERN Document Server

    Jayakumar, Raghavan

    2012-01-01

    The Nordic mythological Cosmic Serpent, Ouroboros, is said to be coiled in the depths of the sea, surrounding the Earth with its tail in its mouth. In physics, this snake is a metaphor for the Universe, where the head, symbolizing the largest entity – the Cosmos – is one with the tail, symbolizing the smallest – the fundamental particle. Particle accelerators, colliders and detectors are built by physicists and engineers to uncover the nature of the Universe while discovering its building blocks. “Charming the Cosmic Snake” takes the readers through the science behind these experimental machines: the physics principles that each stage of the development of particle accelerators helped to reveal, and the particles they helped to discover. The book culminates with a description of the Large Hadron Collider, one of the world’s largest and most complex machines operating in a 27-km circumference tunnel near Geneva. That collider may prove or disprove many of our basic theories about the nature of matt...

  4. Tsallis entropy and complexity theory in the understanding of physics of precursory accelerating seismicity.

    Science.gov (United States)

    Vallianatos, Filippos; Chatzopoulos, George

    2014-05-01

    Strong observational indications support the hypothesis that many large earthquakes are preceded by accelerating seismic release rates, which are described by a power-law time-to-failure relation. In the present work, a unified theoretical framework is discussed based on the ideas of non-extensive statistical physics, along with fundamental principles of physics such as energy conservation in a faulted crustal volume undergoing stress loading. We derive the time-to-failure power law of: a) the cumulative number of earthquakes, b) the cumulative Benioff strain and c) the cumulative energy released in a fault system that obeys a hierarchical distribution law extracted from Tsallis entropy. Considering the analytic conditions near the time of failure, we derive from first principles the time-to-failure power law and show that a common critical exponent m(q) exists, which is a function of the non-extensive entropic parameter q. We conclude that the cumulative precursory parameters are functions of the energy supplied to the system and the size of the precursory volume. In addition, the q-exponential distribution which describes the fault system is a crucial factor in the appearance of power-law acceleration in the seismicity. Our results, based on Tsallis entropy and energy conservation, give a new view of the empirical laws derived by other researchers. Examples and applications of this technique to observations of accelerating seismicity will also be presented and discussed. This work was implemented through the project IMPACT-ARC in the framework of action "ARCHIMEDES III - Support of Research Teams at TEI of Crete" (MIS380353) of the Operational Program "Education and Lifelong Learning" and is co-financed by the European Union (European Social Fund) and Greek national funds.
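
    For readers unfamiliar with the empirical relation referred to above, the time-to-failure power law for a cumulative precursory quantity (for example the cumulative Benioff strain) is commonly written as Omega(t) = A + B(t_f - t)^m with B < 0 and 0 < m < 1. The sketch below fits this form to synthetic data; the values are illustrative, and the paper's derivation of the exponent m(q) from Tsallis entropy is not reproduced here.

```python
import numpy as np
from scipy.optimize import curve_fit

# Least-squares fit of the time-to-failure power law Omega(t) = A + B*(t_f - t)**m
# to synthetic accelerating-release data. Purely illustrative values.
def time_to_failure(t, A, B, t_f, m):
    return A + B * (t_f - t) ** m

t = np.linspace(0.0, 9.5, 80)                           # observation times
truth = time_to_failure(t, 100.0, -30.0, 10.0, 0.3)     # failure time t_f = 10
data = truth + np.random.default_rng(2).normal(scale=0.5, size=t.size)

popt, _ = curve_fit(time_to_failure, t, data, p0=(90.0, -20.0, 10.5, 0.5),
                    bounds=([0.0, -1e3, 9.6, 0.01], [1e3, 0.0, 20.0, 1.0]))
print("fitted t_f and m:", popt[2], popt[3])
```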

  5. Physical-space refraction-corrected transmission ultrasound computed tomography made computationally practical.

    Science.gov (United States)

    Li, Shengying; Mueller, Klaus; Jackowski, Marcel; Dione, Donald; Staib, Lawrence

    2008-01-01

    Transmission Ultrasound Computed Tomography (CT) is strongly affected by the acoustic refraction properties of the imaged tissue, and proper modeling and correction of these effects is crucial to achieving high-quality image reconstructions. A method that can account for these refractive effects solves the governing Eikonal equation within an iterative reconstruction framework, using a wave-front tracking approach. Excellent results can be obtained, but at considerable computational expense. Here, we report on the acceleration of three Eikonal solvers (Fast Marching Method (FMM), Fast Sweeping Method (FSM), Fast Iterative Method (FIM)) on three computational platforms (commodity graphics hardware (GPUs), multi-core and cluster CPUs), within this refractive Transmission Ultrasound CT framework. Our efforts provide insight into the capabilities of the various architectures for acoustic wave-front tracking, and they also yield a framework that meets the interactive demands of clinical practice, without a loss in reconstruction quality.
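
    The numerical core of one of these solvers, the fast sweeping method, is a Godunov upwind update for the eikonal equation |grad T| = s applied in alternating sweep orders. A minimal 2D Python sketch with a point source is given below; it is only an illustration of the kernel being accelerated, not the paper's implementation.

```python
import numpy as np

# Fast sweeping solver for |grad T| = s on a uniform 2D grid with a point
# source: Godunov upwind update applied in four alternating sweep orders.
def fast_sweep(slowness, src, h=1.0, n_passes=4):
    ny, nx = slowness.shape
    T = np.full((ny, nx), np.inf)
    T[src] = 0.0
    for _ in range(n_passes):
        for sy, sx in [(1, 1), (1, -1), (-1, 1), (-1, -1)]:    # sweep directions
            for i in range(ny)[::sy]:
                for j in range(nx)[::sx]:
                    a = min(T[i - 1, j] if i > 0 else np.inf,
                            T[i + 1, j] if i < ny - 1 else np.inf)
                    b = min(T[i, j - 1] if j > 0 else np.inf,
                            T[i, j + 1] if j < nx - 1 else np.inf)
                    if a > b:
                        a, b = b, a                    # a = smaller neighbour time
                    if np.isinf(a):
                        continue                       # no upwind information yet
                    f = slowness[i, j] * h
                    if b - a >= f:
                        t_new = a + f                  # one-sided (causal) update
                    else:
                        t_new = 0.5 * (a + b + np.sqrt(2.0 * f * f - (a - b) ** 2))
                    T[i, j] = min(T[i, j], t_new)
    return T

T = fast_sweep(np.ones((50, 50)), src=(25, 25))
print(T[25, 49])    # ~24: for unit slowness the travel time equals the distance
```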

  6. Computing Principal Eigenvectors of Large Web Graphs: Algorithms and Accelerations Related to PageRank and HITS

    Science.gov (United States)

    Nagasinghe, Iranga

    2010-01-01

    This thesis investigates and develops a few acceleration techniques for the search engine algorithms used in PageRank and HITS computations. PageRank and HITS methods are two highly successful applications of modern linear algebra in computer science and engineering. They constitute essential technologies that account for the immense growth and…
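
    The starting point for these acceleration techniques is the basic power iteration for PageRank, sketched below on a toy four-page link graph; the damping factor 0.85 is the conventional choice, and the example is not taken from the thesis.

```python
import numpy as np

# Power iteration for PageRank on a small link graph. Illustrative only.
links = {0: [1, 2], 1: [2], 2: [0], 3: [0, 2]}     # page -> pages it links to
n, d = 4, 0.85                                     # number of pages, damping factor

M = np.zeros((n, n))                               # column-stochastic link matrix
for src, outs in links.items():
    for dst in outs:
        M[dst, src] = 1.0 / len(outs)

r = np.full(n, 1.0 / n)
for _ in range(100):
    r_prev, r = r, (1.0 - d) / n + d * M @ r       # damped power-iteration step
    if np.abs(r - r_prev).sum() < 1e-10:
        break
print(r)                                           # stationary importance scores
```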

  8. Computing trends using graphic processor in high energy physics

    CERN Document Server

    Niculescu, Mihai

    2011-01-01

    One of the main challenges in High Energy Physics is to perform fast analysis of large amounts of experimental and simulated data. At LHC-CERN, one p-p event is approximately 1 MB in size. The time taken to analyze the data and obtain results quickly depends on the available computational power. The main advantage of GPU (Graphics Processing Unit) programming over a traditional CPU approach is that graphics cards bring a lot of computing power at a very low price. Today a huge number of applications (scientific, financial, etc.) have begun to be ported to or developed for the GPU, including Monte Carlo tools and data analysis tools for High Energy Physics. In this paper, we present the current status and trends in HEP computing using GPUs.

  9. Physics Computing '92: Proceedings of the 4th International Conference

    Science.gov (United States)

    de Groot, Robert A.; Nadrchal, Jaroslav

    1993-04-01

    The Table of Contents for the book is as follows: * Preface * INVITED PAPERS * Ab Initio Theoretical Approaches to the Structural, Electronic and Vibrational Properties of Small Clusters and Fullerenes: The State of the Art * Neural Multigrid Methods for Gauge Theories and Other Disordered Systems * Multicanonical Monte Carlo Simulations * On the Use of the Symbolic Language Maple in Physics and Chemistry: Several Examples * Nonequilibrium Phase Transitions in Catalysis and Population Models * Computer Algebra, Symmetry Analysis and Integrability of Nonlinear Evolution Equations * The Path-Integral Quantum Simulation of Hydrogen in Metals * Digital Optical Computing: A New Approach of Systolic Arrays Based on Coherence Modulation of Light and Integrated Optics Technology * Molecular Dynamics Simulations of Granular Materials * Numerical Implementation of a K.A.M. Algorithm * Quasi-Monte Carlo, Quasi-Random Numbers and Quasi-Error Estimates * What Can We Learn from QMC Simulations * Physics of Fluctuating Membranes * Plato, Apollonius, and Klein: Playing with Spheres * Steady States in Nonequilibrium Lattice Systems * CONVODE: A REDUCE Package for Differential Equations * Chaos in Coupled Rotators * Symplectic Numerical Methods for Hamiltonian Problems * Computer Simulations of Surfactant Self Assembly * High-dimensional and Very Large Cellular Automata for Immunological Shape Space * A Review of the Lattice Boltzmann Method * Electronic Structure of Solids in the Self-interaction Corrected Local-spin-density Approximation * Dedicated Computers for Lattice Gauge Theory Simulations * Physics Education: A Survey of Problems and Possible Solutions * Parallel Computing and Electronic-Structure Theory * High Precision Simulation Techniques for Lattice Field Theory * CONTRIBUTED PAPERS * Case Study of Microscale Hydrodynamics Using Molecular Dynamics and Lattice Gas Methods * Computer Modelling of the Structural and Electronic Properties of the Supported Metal Catalysis

  10. MCPLOTS. A particle physics resource based on volunteer computing

    Energy Technology Data Exchange (ETDEWEB)

    Karneyeu, A. [Joint Inst. for Nuclear Research, Moscow (Russian Federation); Mijovic, L. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Irfu/SPP, CEA-Saclay, Gif-sur-Yvette (France); Prestel, S. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Lund Univ. (Sweden). Dept. of Astronomy and Theoretical Physics; Skands, P.Z. [European Organization for Nuclear Research (CERN), Geneva (Switzerland)

    2013-07-15

    The mcplots.cern.ch web site (MCPLOTS) provides a simple online repository of plots made with high-energy-physics event generators, comparing them to a wide variety of experimental data. The repository is based on the HEPDATA online database of experimental results and on the RIVET Monte Carlo analysis tool. The repository is continually updated and relies on computing power donated by volunteers, via the LHC@HOME 2.0 platform.

  11. MCPLOTS: a particle physics resource based on volunteer computing

    CERN Document Server

    Karneyeu, A; Prestel, S; Skands, P Z

    2014-01-01

    The mcplots.cern.ch web site (MCPLOTS) provides a simple online repository of plots made with high-energy-physics event generators, comparing them to a wide variety of experimental data. The repository is based on the HEPDATA online database of experimental results and on the RIVET Monte Carlo analysis tool. The repository is continually updated and relies on computing power donated by volunteers, via the LHC@HOME platform.

  12. Symbolic Computation, Number Theory, Special Functions, Physics and Combinatorics

    CERN Document Server

    Ismail, Mourad

    2001-01-01

    These are the proceedings of the conference "Symbolic Computation, Number Theory, Special Functions, Physics and Combinatorics" held at the Department of Mathematics, University of Florida, Gainesville, from November 11 to 13, 1999. The main emphasis of the conference was Computer Algebra (i.e. symbolic computation) and how it related to the fields of Number Theory, Special Functions, Physics and Combinatorics. A subject that is common to all of these fields is q-series. We brought together those who do symbolic computation with q-series and those who need q-series, including workers in Physics and Combinatorics. The goal of the conference was to inform mathematicians and physicists who use q-series of the latest developments in the field of q-series and especially how symbolic computation has aided these developments. Over 60 people were invited to participate in the conference. We ended up having 45 participants at the conference, including six one hour plenary speakers and 28 half hour speakers. T...

  13. On the Physical Explanation for Quantum Computational Speedup

    CERN Document Server

    Cuffaro, Michael E

    2013-01-01

    The aim of this dissertation is to clarify the debate over the explanation of quantum speedup and to submit a tentative resolution to it. In particular, I argue that the physical explanation for quantum speedup is precisely the fact that the phenomenon of quantum entanglement enables a quantum computer to fully exploit the representational capacity of Hilbert space. This is impossible for classical systems, joint states of which must always be representable as product states. Chapter 2 begins with a discussion of the most popular of the candidate physical explanations for quantum speedup: the many worlds explanation. I argue that unlike the neo-Everettian interpretation of quantum mechanics it does not have the conceptual resources required to overcome the `preferred basis objection'. I further argue that the many worlds explanation, at best, can serve as a good description of the physical process which takes place in so-called network-based computation, but that it is incompatible with other models of comput...

  14. Theory of non-local point transformations - Part 3: Theory of NLPT-acceleration and the physical origin of acceleration effects in curved space-times

    CERN Document Server

    Tessarotto, Massimo

    2016-01-01

    This paper is motivated by the introduction of a new functional setting of General Relativity (GR) based on the adoption of suitable group non-local point transformations (NLPT). Unlike the customary local point transformations usually utilized in GR, these transformations map intrinsically different curved space-times into each other. In this paper the problem is posed of determining the tensor transformation laws holding for the 4-acceleration with respect to the group of general NLPT. Basic physical implications are considered. These concern in particular the identification of NLPT-acceleration effects, namely the relationship established via general NLPT between the 4-accelerations existing in different curved space-times. As a further application, the tensor character of the EM Faraday tensor with respect to the NLPT-group is established.

  15. A 2 MV Van de Graaff accelerator as a tool for planetary and impact physics research.

    Science.gov (United States)

    Mocker, Anna; Bugiel, Sebastian; Auer, Siegfried; Baust, Günter; Colette, Andrew; Drake, Keith; Fiege, Katherina; Grün, Eberhard; Heckmann, Frieder; Helfert, Stefan; Hillier, Jonathan; Kempf, Sascha; Matt, Günter; Mellert, Tobias; Munsat, Tobin; Otto, Katharina; Postberg, Frank; Röser, Hans-Peter; Shu, Anthony; Sternovsky, Zoltán; Srama, Ralf

    2011-09-01

    Investigating the dynamical and physical properties of cosmic dust can reveal a great deal of information about both the dust and its many sources. Over recent years, several spacecraft (e.g., Cassini, Stardust, Galileo, and Ulysses) have successfully characterised interstellar, interplanetary, and circumplanetary dust using a variety of techniques, including in situ analyses and sample return. Charge, mass, and velocity measurements of the dust are performed either directly (induced charge signals) or indirectly (mass and velocity from impact ionisation signals or crater morphology) and constrain the dynamical parameters of the dust grains. Dust compositional information may be obtained via either time-of-flight mass spectrometry of the impact plasma or direct sample return. The accurate and reliable interpretation of collected spacecraft data requires a comprehensive programme of terrestrial instrument calibration. This process involves accelerating suitable solar system analogue dust particles to hypervelocity speeds in the laboratory, an activity performed at the Max Planck Institut für Kernphysik in Heidelberg, Germany. Here, a 2 MV Van de Graaff accelerator electrostatically accelerates charged micron and submicron-sized dust particles to speeds up to 80 km s(-1). Recent advances in dust production and processing have allowed solar system analogue dust particles (silicates and other minerals) to be coated with a thin conductive shell, enabling them to be charged and accelerated. Refinements and upgrades to the beam line instrumentation and electronics now allow for the reliable selection of particles at velocities of 1-80 km s(-1) and with diameters of between 0.05 μm and 5 μm. This ability to select particles for subsequent impact studies based on their charges, masses, or velocities is provided by a particle selection unit (PSU). The PSU contains a field programmable gate array, capable of monitoring in real time the particles' speeds and charges, and

  16. Technical Challenges and Scientific Payoffs of Muon Beam Accelerators for Particle Physics

    Energy Technology Data Exchange (ETDEWEB)

    Zisman, Michael S.

    2007-09-25

    Historically, progress in particle physics has largely been determined by development of more capable particle accelerators. This trend continues today with the recent advent of high-luminosity electron-positron colliders at KEK and SLAC operating as "B factories," the imminent commissioning of the Large Hadron Collider at CERN, and the worldwide development effort toward the International Linear Collider. Looking to the future, one of the most promising approaches is the development of muon-beam accelerators. Such machines have very high scientific potential, and would substantially advance the state-of-the-art in accelerator design. A 20-50 GeV muon storage ring could serve as a copious source of well-characterized electron neutrinos or antineutrinos (a Neutrino Factory), providing beams aimed at detectors located 3000-7500 km from the ring. Such long baseline experiments are expected to be able to observe and characterize the phenomenon of charge-conjugation-parity (CP) violation in the lepton sector, and thus provide an answer to one of the most fundamental questions in science, namely, why the matter-dominated universe in which we reside exists at all. By accelerating muons to even higher energies of several TeV, we can envision a Muon Collider. In contrast with composite particles like protons, muons are point particles. This means that the full collision energy is available to create new particles. A Muon Collider has roughly ten times the energy reach of a proton collider at the same collision energy, and has a much smaller footprint. Indeed, an energy frontier Muon Collider could fit on the site of an existing laboratory, such as Fermilab or BNL. The challenges of muon-beam accelerators are related to the facts that i) muons are produced as a tertiary beam, with very large 6D phase space, and ii) muons are unstable, with a lifetime at rest of only 2 microseconds. How these challenges are accommodated in the accelerator design will be described. Both a Neutrino Factory and a Muon

  17. A 2 MV Van de Graaff accelerator as a tool for planetary and impact physics research

    Energy Technology Data Exchange (ETDEWEB)

    Mocker, Anna; Bugiel, Sebastian; Srama, Ralf [IRS, Universitaet Stuttgart, Pfaffenwaldring 31, D-70569 Stuttgart (Germany); MPI fuer Kernphysik, Saupfercheckweg 1, D-69117 Heidelberg (Germany); Auer, Siegfried [A and M Associates, PO Box 421, Basye, Virginia 22810 (United States); Baust, Guenter; Matt, Guenter; Otto, Katharina [MPI fuer Kernphysik, Saupfercheckweg 1, D-69117 Heidelberg (Germany); Colette, Andrew; Drake, Keith; Kempf, Sascha; Munsat, Tobin; Shu, Anthony; Sternovsky, Zoltan [LASP, University of Colorado, 1234 Innovation Drive, Boulder, Colorado 80303 (United States); Colorado Center for Lunar Dust and Atmospheric Studies, University of Colorado, Boulder, Colorado 80303 (United States); Fiege, Katherina; Postberg, Frank [Institut fuer Geowissenschaften, Universitaet Heidelberg, D-69120 Stuttgart (Germany); MPI fuer Kernphysik, Saupfercheckweg 1, D-69117 Heidelberg (Germany); Gruen, Eberhard [MPI fuer Kernphysik, Saupfercheckweg 1, D-69117 Heidelberg (Germany); LASP, University of Colorado, 1234 Innovation Drive, Boulder, Colorado 80303 (United States); Heckmann, Frieder [Steinbeis-Innovationszentrum Raumfahrt, Gaeufelden (Germany); Helfert, Stefan [Helfert Informatik, Mannheim (Germany); Hillier, Jonathan [Institut fuer Geowissenschaften, Universitaet Heidelberg, D-69120 Stuttgart (Germany); Mellert, Tobias [IRS, Universitaet Stuttgart, Pfaffenwaldring 31, D-70569 Stuttgart (Germany); and others

    2011-09-15

    Investigating the dynamical and physical properties of cosmic dust can reveal a great deal of information about both the dust and its many sources. Over recent years, several spacecraft (e.g., Cassini, Stardust, Galileo, and Ulysses) have successfully characterised interstellar, interplanetary, and circumplanetary dust using a variety of techniques, including in situ analyses and sample return. Charge, mass, and velocity measurements of the dust are performed either directly (induced charge signals) or indirectly (mass and velocity from impact ionisation signals or crater morphology) and constrain the dynamical parameters of the dust grains. Dust compositional information may be obtained via either time-of-flight mass spectrometry of the impact plasma or direct sample return. The accurate and reliable interpretation of collected spacecraft data requires a comprehensive programme of terrestrial instrument calibration. This process involves accelerating suitable solar system analogue dust particles to hypervelocity speeds in the laboratory, an activity performed at the Max Planck Institut fuer Kernphysik in Heidelberg, Germany. Here, a 2 MV Van de Graaff accelerator electrostatically accelerates charged micron and submicron-sized dust particles to speeds up to 80 km s{sup -1}. Recent advances in dust production and processing have allowed solar system analogue dust particles (silicates and other minerals) to be coated with a thin conductive shell, enabling them to be charged and accelerated. Refinements and upgrades to the beam line instrumentation and electronics now allow for the reliable selection of particles at velocities of 1-80 km s{sup -1} and with diameters of between 0.05 {mu}m and 5 {mu}m. This ability to select particles for subsequent impact studies based on their charges, masses, or velocities is provided by a particle selection unit (PSU). The PSU contains a field programmable gate array, capable of monitoring in real time the particles' speeds

  18. Large Scale Computing and Storage Requirements for Nuclear Physics Research

    Energy Technology Data Exchange (ETDEWEB)

    Gerber, Richard A.; Wasserman, Harvey J.

    2012-03-02

    The National Energy Research Scientific Computing Center (NERSC) is the primary computing center for the DOE Office of Science, serving approximately 4,000 users and hosting some 550 projects that involve nearly 700 codes for a wide variety of scientific disciplines. In addition to large-scale computing resources, NERSC provides critical staff support and expertise to help scientists make the most efficient use of these resources to advance the scientific mission of the Office of Science. In May 2011, NERSC, DOE’s Office of Advanced Scientific Computing Research (ASCR) and DOE’s Office of Nuclear Physics (NP) held a workshop to characterize HPC requirements for NP research over the next three to five years. The effort is part of NERSC’s continuing involvement in anticipating future user needs and deploying necessary resources to meet these demands. The workshop revealed several key requirements, in addition to achieving its goal of characterizing NP computing. The key requirements include: 1. Larger allocations of computational resources at NERSC; 2. Visualization and analytics support; and 3. Support at NERSC for the unique needs of experimental nuclear physicists. This report expands upon these key points and adds others. The results are based upon representative samples, called “case studies,” of the needs of science teams within NP. The case studies were prepared by NP workshop participants and contain a summary of science goals, methods of solution, current and future computing requirements, and special software and support needs. Participants were also asked to describe their strategy for computing in the highly parallel, “multi-core” environment that is expected to dominate HPC architectures over the next few years. The report also includes a section with NERSC responses to the workshop findings. NERSC has many initiatives already underway that address key workshop findings and all of the action items are aligned with NERSC strategic plans.

  19. Limits on efficient computation in the physical world

    Science.gov (United States)

    Aaronson, Scott Joel

    More than a speculative technology, quantum computing seems to challenge our most basic intuitions about how the physical world should behave. In this thesis I show that, while some intuitions from classical computer science must be jettisoned in the light of modern physics, many others emerge nearly unscathed; and I use powerful tools from computational complexity theory to help determine which are which. In the first part of the thesis, I attack the common belief that quantum computing resembles classical exponential parallelism, by showing that quantum computers would face serious limitations on a wider range of problems than was previously known. In particular, any quantum algorithm that solves the collision problem---that of deciding whether a sequence of n integers is one-to-one or two-to-one---must query the sequence Ω(n^{1/5}) times. This resolves a question that was open for years; previously no lower bound better than constant was known. A corollary is that there is no "black-box" quantum algorithm to break cryptographic hash functions or solve the Graph Isomorphism problem in polynomial time. I also show that relative to an oracle, quantum computers could not solve NP-complete problems in polynomial time, even with the help of nonuniform "quantum advice states"; and that any quantum algorithm needs Ω(2^{n/4}/n) queries to find a local minimum of a black-box function on the n-dimensional hypercube. Surprisingly, the latter result also leads to new classical lower bounds for the local search problem. Finally, I give new lower bounds on quantum one-way communication complexity, and on the quantum query complexity of total Boolean functions and recursive Fourier sampling. The second part of the thesis studies the relationship of the quantum computing model to physical reality. I first examine the arguments of Leonid Levin, Stephen Wolfram, and others who believe quantum computing to be fundamentally impossible. I find their arguments unconvincing without a "Sure
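
    As a purely illustrative aside (not code from the thesis), the collision problem referred to above can be stated concretely in a few lines of Python: the input sequence is promised to be either one-to-one or two-to-one, and the task is to decide which; the quoted result says that even a quantum algorithm must examine on the order of n^(1/5) entries.

        from collections import Counter

        def is_two_to_one(seq):
            """Classically decide whether seq, promised to be one-to-one or
            two-to-one, is two-to-one (every value appearing exactly twice)."""
            return any(count == 2 for count in Counter(seq).values())

        print(is_two_to_one([3, 1, 2, 1, 3, 2]))   # True  (two-to-one)
        print(is_two_to_one([5, 1, 2, 4, 3, 6]))   # False (one-to-one)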

  20. A Framework for Understanding Physics Students' Computational Modeling Practices

    Science.gov (United States)

    Lunk, Brandon Robert

    With the growing push to include computational modeling in the physics classroom, we are faced with the need to better understand students' computational modeling practices. While existing research on programming comprehension explores how novices and experts generate programming algorithms, little of this discusses how domain content knowledge, and physics knowledge in particular, can influence students' programming practices. In an effort to better understand this issue, I have developed a framework for modeling these practices based on a resource stance towards student knowledge. A resource framework models knowledge as the activation of vast networks of elements called "resources." Much like neurons in the brain, resources that become active can trigger cascading events of activation throughout the broader network. This model emphasizes the connectivity between knowledge elements and provides a description of students' knowledge base. Together with resources, the concepts of "epistemic games" and "frames" provide a means for addressing the interaction between content knowledge and practices. Although this framework has generally been limited to describing conceptual and mathematical understanding, it also provides a means for addressing students' programming practices. In this dissertation, I will demonstrate this facet of a resource framework as well as fill in an important missing piece: a set of epistemic games that can describe students' computational modeling strategies. The development of this theoretical framework emerged from the analysis of video data of students generating computational models during the laboratory component of a Matter & Interactions: Modern Mechanics course. Student participants across two semesters were recorded as they worked in groups to fix pre-written computational models that were initially missing key lines of code. Analysis of this video data showed that the students' programming practices were highly influenced by

  1. Accelerating Computation of DCM for ERP in MATLAB by External Function Calls to the GPU.

    Directory of Open Access Journals (Sweden)

    Wei-Jen Wang

    Full Text Available This study aims to improve the performance of Dynamic Causal Modelling for Event Related Potentials (DCM for ERP in MATLAB by using external function calls to a graphics processing unit (GPU. DCM for ERP is an advanced method for studying neuronal effective connectivity. DCM utilizes an iterative procedure, the expectation maximization (EM algorithm, to find the optimal parameters given a set of observations and the underlying probability model. As the EM algorithm is computationally demanding and the analysis faces possible combinatorial explosion of models to be tested, we propose a parallel computing scheme using the GPU to achieve a fast estimation of DCM for ERP. The computation of DCM for ERP is dynamically partitioned and distributed to threads for parallel processing, according to the DCM model complexity and the hardware constraints. The performance efficiency of this hardware-dependent thread arrangement strategy was evaluated using the synthetic data. The experimental data were used to validate the accuracy of the proposed computing scheme and quantify the time saving in practice. The simulation results show that the proposed scheme can accelerate the computation by a factor of 155 for the parallel part. For experimental data, the speedup factor is about 7 per model on average, depending on the model complexity and the data. This GPU-based implementation of DCM for ERP gives qualitatively the same results as the original MATLAB implementation does at the group level analysis. In conclusion, we believe that the proposed GPU-based implementation is very useful for users as a fast screen tool to select the most likely model and may provide implementation guidance for possible future clinical applications such as online diagnosis.
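
    As a schematic aside (not the MATLAB/MEX implementation described in the record, and with all names hypothetical), the general pattern of this kind of acceleration is to keep the large per-iteration matrix products of the EM procedure on the GPU and return only small summary quantities to the host; a Python sketch using CuPy (one NumPy-compatible GPU array library) of that pattern:

        import cupy as cp   # GPU arrays with a NumPy-like interface (assumption: a CUDA GPU is available)
        import numpy as np

        def em_like_step_on_gpu(y, X, theta):
            """One schematic iteration: the expensive linear algebra runs on the
            device, and only a small scalar summary is copied back to the host."""
            y_d, X_d, theta_d = cp.asarray(y), cp.asarray(X), cp.asarray(theta)
            residual = y_d - X_d @ theta_d          # heavy matrix product on the GPU
            sse = cp.sum(residual * residual)       # reduction on the GPU
            return float(cp.asnumpy(sse))           # cheap host-side bookkeeping

        y = np.random.rand(4096)
        X = np.random.rand(4096, 256)
        theta = np.random.rand(256)
        print(em_like_step_on_gpu(y, X, theta))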

  2. Accelerating Computation of DCM for ERP in MATLAB by External Function Calls to the GPU.

    Science.gov (United States)

    Wang, Wei-Jen; Hsieh, I-Fan; Chen, Chun-Chuan

    2013-01-01

    This study aims to improve the performance of Dynamic Causal Modelling for Event Related Potentials (DCM for ERP) in MATLAB by using external function calls to a graphics processing unit (GPU). DCM for ERP is an advanced method for studying neuronal effective connectivity. DCM utilizes an iterative procedure, the expectation maximization (EM) algorithm, to find the optimal parameters given a set of observations and the underlying probability model. As the EM algorithm is computationally demanding and the analysis faces possible combinatorial explosion of models to be tested, we propose a parallel computing scheme using the GPU to achieve a fast estimation of DCM for ERP. The computation of DCM for ERP is dynamically partitioned and distributed to threads for parallel processing, according to the DCM model complexity and the hardware constraints. The performance efficiency of this hardware-dependent thread arrangement strategy was evaluated using the synthetic data. The experimental data were used to validate the accuracy of the proposed computing scheme and quantify the time saving in practice. The simulation results show that the proposed scheme can accelerate the computation by a factor of 155 for the parallel part. For experimental data, the speedup factor is about 7 per model on average, depending on the model complexity and the data. This GPU-based implementation of DCM for ERP gives qualitatively the same results as the original MATLAB implementation does at the group level analysis. In conclusion, we believe that the proposed GPU-based implementation is very useful for users as a fast screen tool to select the most likely model and may provide implementation guidance for possible future clinical applications such as online diagnosis.

  3. Accelerating Computation of DCM for ERP in MATLAB by External Function Calls to the GPU

    Science.gov (United States)

    Wang, Wei-Jen; Hsieh, I-Fan; Chen, Chun-Chuan

    2013-01-01

    This study aims to improve the performance of Dynamic Causal Modelling for Event Related Potentials (DCM for ERP) in MATLAB by using external function calls to a graphics processing unit (GPU). DCM for ERP is an advanced method for studying neuronal effective connectivity. DCM utilizes an iterative procedure, the expectation maximization (EM) algorithm, to find the optimal parameters given a set of observations and the underlying probability model. As the EM algorithm is computationally demanding and the analysis faces possible combinatorial explosion of models to be tested, we propose a parallel computing scheme using the GPU to achieve a fast estimation of DCM for ERP. The computation of DCM for ERP is dynamically partitioned and distributed to threads for parallel processing, according to the DCM model complexity and the hardware constraints. The performance efficiency of this hardware-dependent thread arrangement strategy was evaluated using the synthetic data. The experimental data were used to validate the accuracy of the proposed computing scheme and quantify the time saving in practice. The simulation results show that the proposed scheme can accelerate the computation by a factor of 155 for the parallel part. For experimental data, the speedup factor is about 7 per model on average, depending on the model complexity and the data. This GPU-based implementation of DCM for ERP gives qualitatively the same results as the original MATLAB implementation does at the group level analysis. In conclusion, we believe that the proposed GPU-based implementation is very useful for users as a fast screen tool to select the most likely model and may provide implementation guidance for possible future clinical applications such as online diagnosis. PMID:23840507

  4. An Experimental and Computational Study of a Shock-Accelerated Heavy Gas Cylinder

    Science.gov (United States)

    Zoldi, Cindy; Prestridge, Katherine; Tomkins, Christopher; Marr-Lyon, Mark; Rightley, Paul; Benjamin, Robert; Vorobieff, Peter

    2002-11-01

    We present updated results of an experimental and computational study that examines the evolution of a heavy gas (SF_6) cylinder surrounded by air when accelerated by a planar Mach 1.2 shock wave. From each shock tube experiment, we obtain one image of the experimental initial conditions and six images of the time evolution of the cylinder. Moreover, the implementation of Particle Image Velocimetry (PIV) also allows us to determine the velocity field at the last experimental time. Simulations incorporating the two-dimensional image of the experimental initial conditions are performed using the adaptive-mesh Eulerian code, RAGE. A computational study shows that agreement between the measured and computed velocities is achieved by decreasing the peak SF6 concentration to 60%, which was measured in the previous "gas curtain" experiments, and diffusing the air/SF6 interface in the experimental initial conditions. These modifications are consistent with the observation that the SF6 gas diffuses faster than the fog particles used to track the gas. Images of the experimental initial conditions, obtained using planar laser Rayleigh scattering, quantify the diffusion lag between the SF6 gas and the fog particles.

  5. Parallelizing Epistasis Detection in GWAS on FPGA and GPU-Accelerated Computing Systems.

    Science.gov (United States)

    González-Domínguez, Jorge; Wienbrandt, Lars; Kässens, Jan Christian; Ellinghaus, David; Schimmler, Manfred; Schmidt, Bertil

    2015-01-01

    High-throughput genotyping technologies (such as SNP-arrays) allow the rapid collection of up to a few million genetic markers of an individual. Detecting epistasis (based on 2-SNP interactions) in Genome-Wide Association Studies is an important but time consuming operation since statistical computations have to be performed for each pair of measured markers. Computational methods to detect epistasis therefore suffer from prohibitively long runtimes; e.g., processing a moderately-sized dataset consisting of about 500,000 SNPs and 5,000 samples requires several days using state-of-the-art tools on a standard 3 GHz CPU. In this paper, we demonstrate how this task can be accelerated using a combination of fine-grained and coarse-grained parallelism on two different computing systems. The first architecture is based on reconfigurable hardware (FPGAs) while the second architecture uses multiple GPUs connected to the same host. We show that both systems can achieve speedups of around four orders-of-magnitude compared to the sequential implementation. This significantly reduces the runtimes for detecting epistasis to only a few minutes for moderately-sized datasets and to a few hours for large-scale datasets.
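
    As a purely illustrative aside (not the authors' FPGA/GPU code, and with all names hypothetical), the coarse-grained part of such a scheme amounts to splitting the quadratically many SNP pairs into independent chunks, each scored by some pairwise association statistic; a small Python sketch:

        import itertools
        import numpy as np

        def pair_statistic(g1, g2, phenotype):
            """Toy 2-SNP interaction score: correlation of the genotype product
            with a binary phenotype (a placeholder for a real statistical test)."""
            return abs(np.corrcoef(g1 * g2, phenotype)[0, 1])

        def scan_chunk(genotypes, phenotype, pairs):
            """Coarse-grained work unit: score one chunk of SNP pairs."""
            return [(i, j, pair_statistic(genotypes[i], genotypes[j], phenotype))
                    for i, j in pairs]

        rng = np.random.default_rng(0)
        genotypes = rng.integers(0, 3, size=(100, 500)).astype(float)   # 100 SNPs, 500 samples
        phenotype = rng.integers(0, 2, size=500).astype(float)
        all_pairs = list(itertools.combinations(range(100), 2))
        results = scan_chunk(genotypes, phenotype, all_pairs[:1000])    # one chunk of work

    Distributing the chunks of all_pairs across devices or processes is the coarse-grained half of the parallelization described in the record; the fine-grained half lives inside the per-pair statistic.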

  6. Locomotion without a brain: physical reservoir computing in tensegrity structures.

    Science.gov (United States)

    Caluwaerts, K; D'Haene, M; Verstraeten, D; Schrauwen, B

    2013-01-01

    Embodiment has led to a revolution in robotics by not thinking of the robot body and its controller as two separate units, but taking into account the interaction of the body with its environment. By investigating the effect of the body on the overall control computation, it has been suggested that the body is effectively performing computations, leading to the term morphological computation. Recent work has linked this to the field of reservoir computing, allowing one to endow morphologies with a theory of universal computation. In this work, we study a family of highly dynamic body structures, called tensegrity structures, controlled by one of the simplest kinds of "brains." These structures can be used to model biomechanical systems at different scales. By analyzing this extreme instantiation of compliant structures, we demonstrate the existence of a spectrum of choices of how to implement control in the body-brain composite. We show that tensegrity structures can maintain complex gaits with linear feedback control and that external feedback can intrinsically be integrated in the control loop. The various linear learning rules we consider differ in biological plausibility, and no specific assumptions are made on how to implement the feedback in a physical system.
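
    As an illustrative aside (a generic reservoir-computing recipe, not the authors' setup), the "simplest kind of brain" can be a linear readout fitted to the recorded states of the physical body, for example by ridge regression:

        import numpy as np

        def train_linear_readout(states, targets, ridge=1e-3):
            """Fit W such that states @ W approximates targets (ridge-regularised least squares).

            states:  (T, N) matrix of body/reservoir states over T time steps
            targets: (T, M) desired outputs (e.g. actuator commands for a gait)
            """
            N = states.shape[1]
            A = states.T @ states + ridge * np.eye(N)
            B = states.T @ targets
            return np.linalg.solve(A, B)

        # Toy example with a random "body" state trajectory.
        rng = np.random.default_rng(1)
        states = rng.standard_normal((2000, 50))
        targets = np.sin(np.linspace(0, 20 * np.pi, 2000))[:, None]
        W = train_linear_readout(states, targets)
        prediction = states @ W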

  7. Quantum computation and the physical computation level of biological information processing

    CERN Document Server

    Castagnoli, Giuseppe

    2009-01-01

    On the basis of introspective analysis, we establish a crucial requirement for the physical computation basis of consciousness: it should allow processing a significant amount of information together at the same time. Classical computation does not satisfy the requirement. At the fundamental physical level, it is a network of two-body interactions, each the input-output transformation of a universal Boolean gate. Thus, it cannot process together at the same time more than the three-bit input of this gate - many such gates in parallel do not count since the information is not processed together. Quantum computation satisfies the requirement. In the light of our recent explanation of the speed-up, quantum measurement of the solution of the problem is analogous to a many-body interaction between the parts of a perfect classical machine, whose mechanical constraints represent the problem to be solved. The many-body interaction satisfies all the constraints together at the same time, producing the solution in one ...

  8. X-ray beam hardening correction for measuring density in linear accelerator industrial computed tomography

    Institute of Scientific and Technical Information of China (English)

    ZHOU Ri-Feng; WANG Jue; CHEN Wei-Min

    2009-01-01

    Because X-ray attenuation is approximately proportional to material density, it is possible to measure the inner density accurately from Industrial Computed Tomography (ICT) images. In practice, however, a number of factors including the non-linear effects of beam hardening and diffuse scattered radiation complicate the quantitative measurement of density variations in materials. This paper is based on the linearization method of beam hardening correction, and uses polynomial fitting coefficients, obtained from the curvature of iron polychromatic beam data, to fit other materials. Through theoretical deduction, the paper proves that the density measurement error is less than 2% if pre-filters are used to confine the linear accelerator spectrum mainly to the range 0.3 MeV to 3 MeV. An experiment was set up on an ICT system with a 9 MeV electron linear accelerator, and the result is satisfactory. This technique makes the beam hardening correction easy and simple, and it is valuable for measuring density with ICT and for using the CT images to recognize materials.
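
    A generic sketch of the polynomial linearization idea (illustrative only; the calibration numbers below are invented, not taken from the paper): fit a polynomial that maps the measured polychromatic log-attenuation back to an equivalent thickness of the reference material, then apply that mapping to the projection data before reconstruction.

        import numpy as np

        # Hypothetical calibration data: thickness of the reference material (cm)
        # versus measured polychromatic log-attenuation (beam hardening bends this curve).
        thickness = np.linspace(0.0, 10.0, 21)
        measured_p = 0.45 * thickness - 0.008 * thickness**2   # toy, slightly saturating curve

        # Fit a polynomial that maps measured p back to an effective thickness,
        # i.e. linearises the projection data.
        coeffs = np.polyfit(measured_p, thickness, deg=3)
        linearise = np.poly1d(coeffs)

        # Apply the correction to raw projection values before reconstruction.
        raw_projections = np.array([0.4, 1.1, 2.0, 3.2])
        corrected = linearise(raw_projections)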

  9. Computational Physics? Some perspectives and responses of the undergraduate physics community

    Science.gov (United States)

    Chonacky, Norman

    2011-03-01

    Any of the many answers possible to the evocative question ``What is ...'' will likely be heavily shaded by the experience of the respondent. This is partly due to the absence of a canon of practice in this still immature, hence dynamic and exciting, method of physics. The diversity of responses is even more apparent in the area of physics education, and more disruptive because an undergraduate educational canon uniformly accepted across institutions for decades already exists. I will present evidence of this educational community's lagging response to the challenge of the current dynamic and diverse practice of computational physics in research. I will also summarize current measures that attempt to respond to this lag, discuss a research-based approach for moving beyond these early measures, and suggest how DCOMP might help. I hope this will generate criticisms and concurrences from the floor. Research support for material in this talk was from: IEEE-Computer Society; Shodor Foundation; Teragrid Project.

  10. A Fast GPU-accelerated Mixed-precision Strategy for Fully NonlinearWater Wave Computations

    DEFF Research Database (Denmark)

    Glimberg, Stefan Lemvig; Engsig-Karup, Allan Peter; Madsen, Morten G.

    2011-01-01

    We present performance results of a mixed-precision strategy developed to improve a recently developed massively parallel GPU-accelerated tool for fast and scalable simulation of unsteady fully nonlinear free surface water waves over uneven depths (Engsig-Karup et al. 2011). The underlying wave model is based on a potential flow formulation, which requires efficient solution of a Laplace problem at large scales. We report recent results on a new mixed-precision strategy for efficient, iterative, high-order accurate and scalable solution of the Laplace problem using a multigrid-preconditioned defect correction method. The strategy improves performance by exploiting architectural features of modern GPUs for mixed-precision computations and is tested in a recently developed generic library for fast prototyping of PDE solvers. The new wave tool is applicable to solve and analyze...
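
    The defect-correction idea behind such a mixed-precision strategy can be sketched generically (illustrative only; a direct single-precision solve stands in for the multigrid preconditioner of the record): residuals are evaluated in high precision, corrections in low precision, so the cheap arithmetic does not limit the final accuracy.

        import numpy as np

        def mixed_precision_defect_correction(A, b, iters=20):
            """Solve A x = b with double-precision residuals and single-precision corrections."""
            A64, b64 = A.astype(np.float64), b.astype(np.float64)
            A32 = A.astype(np.float32)
            x = np.zeros_like(b64)
            for _ in range(iters):
                r = b64 - A64 @ x                                    # residual in double precision
                d = np.linalg.solve(A32, r.astype(np.float32))       # cheap correction in single precision
                x += d.astype(np.float64)                            # defect correction update
            return x

        A = np.array([[4.0, 1.0], [1.0, 3.0]])
        b = np.array([1.0, 2.0])
        x = mixed_precision_defect_correction(A, b)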

  11. Flexusi Interface Builder For Computer Based Accelerator Monitoring And Control System

    CERN Document Server

    Kurakin, V G; Kurakin, P V

    2004-01-01

    We have developed computer code for designing any desired graphical user interface for a monitoring and control system at the executable level. This means that an operator can build up a measurement console consisting of virtual devices before or even during a real experiment, without recompiling the source file. Such functionality brings a number of advantages compared with traditional programming. First of all, the risk of introducing bugs into the source code disappears. Another important point is that program developers and operator staff do not interfere with each other in developing the ultimate product (the measurement console). Thus, a small team without a detailed project plan can design even a very complicated monitoring and control system. For the reasons mentioned above, the suggested approach is especially helpful for large complexes to be monitored and controlled, accelerators being among them. The program code consists of several modules responsible for data acquisition, control, and representation. Borland C++ Builder technologies based on VCL...

  12. GPU-accelerated computational tool for studying the effectiveness of asteroid disruption techniques

    Science.gov (United States)

    Zimmerman, Ben J.; Wie, Bong

    2016-10-01

    This paper presents the development of a new Graphics Processing Unit (GPU) accelerated computational tool for asteroid disruption techniques. Numerical simulations are completed using the high-order spectral difference (SD) method. Due to the compact nature of the SD method, it is well suited for implementation with the GPU architecture, hence solutions are generated orders of magnitude faster than with the Central Processing Unit (CPU) counterpart. A multiphase model integrated with the SD method is introduced, and several asteroid disruption simulations are conducted, including kinetic-energy impactors, multi-kinetic energy impactor systems, and nuclear options. Results illustrate the benefits of using multi-kinetic energy impactor systems when compared to a single impactor system. In addition, the effectiveness of nuclear options is observed.

  13. Extreme Scale Computing for First-Principles Plasma Physics Research

    Energy Technology Data Exchange (ETDEWEB)

    Chang, Choong-Seock [Princeton University

    2011-10-12

    World superpowers are in the middle of the “Computnik” race. The US Department of Energy (and National Nuclear Security Administration) wishes to launch exascale computer systems into the scientific (and national security) world by 2018. The objective is to solve important scientific problems and to predict the outcomes using the most fundamental scientific laws, which would not be possible otherwise. Being chosen into the next “frontier” group can be of great benefit to a scientific discipline. An extreme scale computer system requires different types of algorithms and programming philosophy from those we have been accustomed to. Only a handful of scientific codes are blessed to be capable of scalable usage of today’s largest computers in operation at petascale (using more than 100,000 cores concurrently). Fortunately, a few magnetic fusion codes are competing well in this race using the “first principles” gyrokinetic equations. These codes are beginning to study the fusion plasma dynamics in full-scale realistic diverted device geometry in a natural nonlinear multiscale setting, including the large scale neoclassical and small scale turbulence physics, but excluding some ultra fast dynamics. In this talk, most of the above-mentioned topics will be introduced at an executive level. Representative properties of the extreme scale computers, modern programming exercises to take advantage of them, and different philosophies in the data flows and analyses will be presented. Examples of the multi-scale multi-physics scientific discoveries made possible by solving the gyrokinetic equations on extreme scale computers will be described. Future directions into “virtual tokamak experiments” will also be discussed.

  14. Muon Sources for Particle Physics - Accomplishments of the Muon Accelerator Program

    Energy Technology Data Exchange (ETDEWEB)

    Neuffer, D. [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); Stratakis, D. [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); Palmer, M. [Brookhaven National Lab. (BNL), Upton, NY (United States); Delahaye, J.-P. [SLAC National Accelerator Lab., Menlo Park, CA (United States); Summers, D. [Univ. of Mississippi, Oxford, MS (United States); Ryne, R. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Cummings, M. A. [Muons, Inc., Batavia, IL(United States)

    2017-05-01

    The Muon Accelerator Program (MAP) completed a four-year study on the feasibility of muon colliders and on using stored muon beams for neutrinos. That study was broadly successful in its goals, establishing the feasibility of lepton colliders from the 125 GeV Higgs Factory to more than 10 TeV, as well as exploring using a μ storage ring (MSR) for neutrinos, and establishing that MSRs could provide factory-level intensities of ν_e (ν̄_e) and ν̄_μ (ν_μ) beams. The key components of the collider and neutrino factory systems were identified. Feasible designs and detailed simulations of all of these components were obtained, including some initial hardware component tests, setting the stage for future implementation where resources are available and clearly associated physics goals become apparent.

  15. European facilities for accelerator neutrino physics: perspectives for the decade to come

    CERN Document Server

    Battiston, R; Migliozzi, P; Terranova, F

    2009-01-01

    Very soon a new generation of reactor and accelerator neutrino oscillation experiments - Double Chooz, Daya Bay, Reno and T2K - will search for oscillation signals generated by the mixing parameter theta_13. The knowledge of this angle is a fundamental milestone to optimize further experiments aimed at detecting CP violation in the neutrino sector. Leptonic CP violation is a key phenomenon that has profound implications in particle physics and cosmology but it is clearly out of reach for the aforementioned experiments. Since the late 1990s, a world-wide activity has been in progress to design facilities that can access CP violation in neutrino oscillations and perform high-precision measurements of the lepton counterpart of the Cabibbo-Kobayashi-Maskawa matrix. In this paper the status of these studies will be summarized, focusing on the options that are best suited to exploit existing European facilities (firstly CERN and the INFN Gran Sasso Laboratories) or technologies where Europe has world leadership. Similar consid...

  16. Physics analyses of an accelerator-driven sub-critical assembly

    Energy Technology Data Exchange (ETDEWEB)

    Naberezhnev, Dmitry G. [Nuclear Engineering Division, Argonne National Laboratory, 9700 S. Cass Av., Argonne, IL 60439 (United States)]. E-mail: dimitri@anl.gov; Gohar, Yousry [Nuclear Engineering Division, Argonne National Laboratory, 9700 S. Cass Av., Argonne, IL 60439 (United States); Bailey, James [Nuclear Engineering Division, Argonne National Laboratory, 9700 S. Cass Av., Argonne, IL 60439 (United States); Belch, Henry [Nuclear Engineering Division, Argonne National Laboratory, 9700 S. Cass Av., Argonne, IL 60439 (United States)

    2006-06-23

    Physics analyses have been performed for an accelerator-driven sub-critical assembly as a part of the Argonne National Laboratory activity in preparation for a joint conceptual design with the Kharkov Institute of Physics and Technology (KIPT) of Ukraine. KIPT has a plan to construct an accelerator-driven sub-critical assembly targeted towards the medical isotope production and the support of the Ukraine nuclear industry. The external neutron source is produced either through photonuclear reactions in tungsten or uranium targets, or deuteron reactions in a beryllium target. KIPT intends using the high-enriched uranium (HEU) for the fuel of the sub-critical assembly. The main objective of this paper is to study the possibility of utilizing low-enriched uranium (LEU) fuel instead of HEU fuel without penalizing the sub-critical assembly performance, in particular the neutron flux level. In the course of this activity, several studies have been carried out to investigate the main choices for the system's parameters. The external neutron source has been characterized and a pre-conceptual target design has been developed. Several sub-critical configurations with different fuel enrichments and densities have been considered. Based on our analysis, it was shown that the performance of the LEU fuel is comparable with that of the HEU fuel. The LEU fuel sub-critical assembly with 200-MeV electron energy and 100-kW electron beam power has an average total flux of {approx}2.50x10{sup 13} n/s cm{sup 2} in the irradiation channels. The corresponding total facility power is {approx}204 kW divided into 91 and 113 kW deposited in the target and sub-critical assemblies, respectively.
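
    As a brief, generic aside (a textbook relation for accelerator-driven systems, not a figure taken from the paper): the neutron population of a source-driven sub-critical assembly is governed by the source multiplication

        M \approx \frac{1}{1 - k_{\mathrm{eff}}},

    so the achievable flux at fixed beam power depends strongly on how close the assembly is to criticality, which is why comparable LEU and HEU performance hinges on reaching a similar k_eff with the low-enriched fuel.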

  17. Computer Based Collaborative Problem Solving for Introductory Courses in Physics

    Science.gov (United States)

    Ilie, Carolina; Lee, Kevin

    2010-03-01

    We discuss collaborative problem solving in a computer-based recitation style. The course is designed by Lee [1], and the idea was proposed earlier by Christian, Belloni and Titus [2,3]. The students find the problems on a web-page containing simulations (physlets) and they write the solutions on an accompanying worksheet after discussing them with a classmate. Physlets have the advantage of being much more like real-world problems than textbook problems. We also compare two protocols for web-based instruction using simulations in an introductory physics class [1]. The inquiry protocol allowed students to control input parameters while the worked example protocol did not. We will discuss which of the two methods is more efficient in relation to Scientific Discovery Learning and Cognitive Load Theory. 1. Lee, Kevin M., Nicoll, Gayle and Brooks, Dave W. (2004). ``A Comparison of Inquiry and Worked Example Web-Based Instruction Using Physlets'', Journal of Science Education and Technology 13, No. 1: 81-88. 2. Christian, W., and Belloni, M. (2001). Physlets: Teaching Physics With Interactive Curricular Material, Prentice Hall, Englewood Cliffs, NJ. 3. Christian, W., and Titus, A. (1998). ``Developing web-based curricula using Java Physlets.'' Computers in Physics 12: 227--232.

  18. Large Scale Computing and Storage Requirements for High Energy Physics

    Energy Technology Data Exchange (ETDEWEB)

    Gerber, Richard A.; Wasserman, Harvey

    2010-11-24

    The National Energy Research Scientific Computing Center (NERSC) is the leading scientific computing facility for the Department of Energy's Office of Science, providing high-performance computing (HPC) resources to more than 3,000 researchers working on about 400 projects. NERSC provides large-scale computing resources and, crucially, the support and expertise needed for scientists to make effective use of them. In November 2009, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR), and DOE's Office of High Energy Physics (HEP) held a workshop to characterize the HPC resources needed at NERSC to support HEP research through the next three to five years. The effort is part of NERSC's legacy of anticipating users' needs and deploying resources to meet those demands. The workshop revealed several key points, in addition to achieving its goal of collecting and characterizing computing requirements. The chief findings: (1) Science teams need access to a significant increase in computational resources to meet their research goals; (2) Research teams need to be able to read, write, transfer, store online, archive, analyze, and share huge volumes of data; (3) Science teams need guidance and support to implement their codes on future architectures; and (4) Projects need predictable, rapid turnaround of their computational jobs to meet mission-critical time constraints. This report expands upon these key points and includes others. It also presents a number of case studies as representative of the research conducted within HEP. Workshop participants were asked to codify their requirements in this case study format, summarizing their science goals, methods of solution, current and three-to-five year computing requirements, and software and support needs. Participants were also asked to describe their strategy for computing in the highly parallel, multi-core environment that is expected to dominate HPC architectures over the next few years.

  19. An improved coarse-grained parallel algorithm for computational acceleration of ordinary Kriging interpolation

    Science.gov (United States)

    Hu, Hongda; Shu, Hong

    2015-05-01

    Heavy computation limits the use of Kriging interpolation methods in many real-time applications, especially with the ever-increasing problem size. Many researchers have realized that parallel processing techniques are critical to fully exploit computational resources and feasibly solve computation-intensive problems like Kriging. Much research has addressed the parallelization of traditional approach to Kriging, but this computation-intensive procedure may not be suitable for high-resolution interpolation of spatial data. On the basis of a more effective serial approach, we propose an improved coarse-grained parallel algorithm to accelerate ordinary Kriging interpolation. In particular, the interpolation task of each unobserved point is considered as a basic parallel unit. To reduce time complexity and memory consumption, the large right hand side matrix in the Kriging linear system is transformed and fixed at only two columns and therefore no longer directly relevant to the number of unobserved points. The MPI (Message Passing Interface) model is employed to implement our parallel programs in a homogeneous distributed memory system. Experimentally, the improved parallel algorithm performs better than the traditional one in spatial interpolation of annual average precipitation in Victoria, Australia. For example, when the number of processors is 24, the improved algorithm keeps speed-up at 20.8 while the speed-up of the traditional algorithm only reaches 9.3. Likewise, the weak scaling efficiency of the improved algorithm is nearly 90% while that of the traditional algorithm almost drops to 40% with 16 processors. Experimental results also demonstrate that the performance of the improved algorithm is enhanced by increasing the problem size.
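
    A minimal mpi4py sketch of the coarse-grained decomposition described above (illustrative only; it uses a plain dense solve for the Kriging weights rather than the reduced right-hand-side scheme of the paper, and every name and number here is hypothetical):

        import numpy as np
        from mpi4py import MPI

        def kriging_predict(K, k_vec, values):
            """Ordinary-Kriging-style prediction from the covariance matrix K of the
            observed points and the covariance vector k_vec to one target point.
            (The Lagrange-multiplier row/column is omitted to keep the sketch short.)"""
            weights = np.linalg.solve(K, k_vec)
            return weights @ values

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        rng = np.random.default_rng(42)
        n_obs, n_pred = 200, 1_000
        K = np.eye(n_obs) + 0.01 * rng.random((n_obs, n_obs))
        K = 0.5 * (K + K.T)                       # toy symmetric covariance (illustrative only)
        values = rng.random(n_obs)
        k_vectors = rng.random((n_pred, n_obs))   # one covariance vector per unobserved point

        # Each rank handles its own slice of the unobserved points.
        my_points = range(rank, n_pred, size)
        my_results = [(i, kriging_predict(K, k_vectors[i], values)) for i in my_points]
        all_results = comm.gather(my_results, root=0)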

  20. Beam Polarization at the ILC: the Physics Impact and the Accelerator Solutions

    Energy Technology Data Exchange (ETDEWEB)

    Aurand, B.; /Bonn U.; Bailey, I.; /Liverpool U.; Bartels, C.; /DESY /DESY, Zeuthen; Brachmann, A.; /SLAC; Clarke, J.; /Daresbury; Hartin, A.; /DESY /DESY, Zeuthen /Oxford U., JAI; Hauptman, J.; /Iowa State U.; Helebrant, C.; /DESY /DESY, Zeuthen; Hesselbach, S.; /Durham U., IPPP; Kafer, D.; List, J.; /DESY /DESY, Zeuthen; Lorenzon, W.; /Michigan U.; Marchesini, I.; Monig, Klaus; /DESY /DESY, Zeuthen; Moffeit, K.C.; /SLAC; Moortgat-Pick, G.; /Durham U., IPPP; Riemann, S.; Schalicke, A.; Schuler, P.; /DESY /DESY, Zeuthen; Starovoitov, P.; /Minsk, NCPHEP; Ushakov, A.; /DESY /DESY, Zeuthen /Bonn U. /SLAC

    2011-11-23

    In this contribution accelerator solutions for polarized beams and their impact on physics measurements are discussed. The focus is on physics requirements for precision polarimetry near the interaction point and their realization with polarized sources. Based on the ILC baseline programme as described in the Reference Design Report (RDR), recent developments are discussed and evaluated taking into account physics runs at beam energies between 100 GeV and 250 GeV, as well as calibration runs on the Z-pole and options such as the 1 TeV upgrade and GigaZ. The studies, talks and discussions presented at this conference demonstrated that beam polarization and its measurement are crucial for the physics success of any future linear collider. To achieve the required precision it is absolutely decisive to employ multiple devices for testing and controlling the systematic uncertainties of each polarimeter. The polarimetry methods for the ILC are complementary: with the upstream polarimeter the measurements are performed in a clean environment, they are fast and allow monitoring of time-dependent variations of the polarization. The polarimeter downstream of the IP will measure the disrupted beam, resulting in high background and much lower statistics, but it allows access to the depolarization at the IP. Cross checks between the polarimeter results give redundancy and inter-calibration, which is essential for high-precision measurements. Current plans and issues for polarimeters and also energy spectrometers in the Beam Delivery System of the ILC are summarized in reference [28]. The ILC baseline design allows operation with polarized electrons and polarized positrons from the beginning, provided the spin rotation and the fast helicity reversal for positrons are implemented. A reversal of the positron helicity significantly slower than that of the electrons is not recommended, so as not to compromise the precision and hence the success of the ILC. Recently to use calibration data at the Z

  1. Solar physics applications of computer graphics and image processing

    Science.gov (United States)

    Altschuler, M. D.

    1985-01-01

    Computer graphics devices coupled with computers and carefully developed software provide new opportunities to achieve insight into the geometry and time evolution of scalar, vector, and tensor fields and to extract more information quickly and cheaply from the same image data. Two or more different fields which overlay in space can be calculated from the data (and the physics), then displayed from any perspective, and compared visually. The maximum regions of one field can be compared with the gradients of another. Time changing fields can also be compared. Images can be added, subtracted, transformed, noise filtered, frequency filtered, contrast enhanced, color coded, enlarged, compressed, parameterized, and histogrammed, in whole or section by section. Today it is possible to process multiple digital images to reveal spatial and temporal correlations and cross correlations. Data from different observatories taken at different times can be processed, interpolated, and transformed to a common coordinate system.

  2. Solar physics applications of computer graphics and image processing

    Science.gov (United States)

    Altschuler, M. D.

    1985-01-01

    Computer graphics devices coupled with computers and carefully developed software provide new opportunities to achieve insight into the geometry and time evolution of scalar, vector, and tensor fields and to extract more information quickly and cheaply from the same image data. Two or more different fields which overlay in space can be calculated from the data (and the physics), then displayed from any perspective, and compared visually. The maximum regions of one field can be compared with the gradients of another. Time changing fields can also be compared. Images can be added, subtracted, transformed, noise filtered, frequency filtered, contrast enhanced, color coded, enlarged, compressed, parameterized, and histogrammed, in whole or section by section. Today it is possible to process multiple digital images to reveal spatial and temporal correlations and cross correlations. Data from different observatories taken at different times can be processed, interpolated, and transformed to a common coordinate system.

  3. NATO Advanced Study Institute on Methods in Computational Molecular Physics

    CERN Document Server

    Diercksen, Geerd

    1992-01-01

    This volume records the lectures given at a NATO Advanced Study Institute on Methods in Computational Molecular Physics held in Bad Windsheim, Germany, from 22nd July until 2nd August, 1991. This NATO Advanced Study Institute sought to bridge the quite considerable gap which exists between the presentation of molecular electronic structure theory found in contemporary monographs such as, for example, McWeeny's Methods of Molecular Quantum Mechanics (Academic Press, London, 1989) or Wilson's Electron correlation in molecules (Clarendon Press, Oxford, 1984) and the realization of the sophisticated computational algorithms required for their practical application. It sought to underline the relation between the electronic structure problem and the study of nuclear motion. Software for performing molecular electronic structure calculations is now being applied in an increasingly wide range of fields in both the academic and the commercial sectors. Numerous applications are reported in areas as diverse as catalysi...

  4. Computational physics simulation of classical and quantum systems

    CERN Document Server

    Scherer, Philipp O J

    2017-01-01

    This textbook presents basic numerical methods and applies them to a large variety of physical models in multiple computer experiments. Classical algorithms and more recent methods are explained. Partial differential equations are treated generally comparing important methods, and equations of motion are solved by a large number of simple as well as more sophisticated methods. Several modern algorithms for quantum wavepacket motion are compared. The first part of the book discusses the basic numerical methods, while the second part simulates classical and quantum systems. Simple but non-trivial examples from a broad range of physical topics offer readers insights into the numerical treatment but also the simulated problems. Rotational motion is studied in detail, as are simple quantum systems. A two-level system in an external field demonstrates elementary principles from quantum optics and simulation of a quantum bit. Principles of molecular dynamics are shown. Modern boundary element methods are presented ...

  5. Physical processes at work in sub-30 fs, PW laser pulse-driven plasma accelerators: Towards GeV electron acceleration experiments at CILEX facility

    Science.gov (United States)

    Beck, A.; Kalmykov, S. Y.; Davoine, X.; Lifschitz, A.; Shadwick, B. A.; Malka, V.; Specka, A.

    2014-03-01

    Optimal regimes and physical processes at work are identified for the first round of laser wakefield acceleration experiments proposed at a future CILEX facility. The Apollon-10P CILEX laser, delivering fully compressed, near-PW-power pulses of sub-25 fs duration, is well suited for driving electron density wakes in the blowout regime in cm-length gas targets. Early destruction of the pulse (partly due to energy depletion) prevents electrons from reaching dephasing, limiting the energy gain to about 3 GeV. However, the optimal operating regimes, found with reduced and full three-dimensional particle-in-cell simulations, show high energy efficiency, with about 10% of incident pulse energy transferred to 3 GeV electron bunches with sub-5% energy spread, half-nC charge, and absolutely no low-energy background. This optimal acceleration occurs in 2 cm length plasmas of electron density below 10^18 cm^-3. Due to their high charge and low phase space volume, these multi-GeV bunches are tailor-made for staged acceleration planned in the framework of the CILEX project. The hallmarks of the optimal regime are electron self-injection at the early stage of laser pulse propagation, stable self-guiding of the pulse through the entire acceleration process, and no need for an external plasma channel. With the initial focal spot closely matched for the nonlinear self-guiding, the laser pulse stabilizes transversely within two Rayleigh lengths, preventing subsequent evolution of the accelerating bucket. This dynamics prevents continuous self-injection of background electrons, preserving low phase space volume of the bunch through the plasma. Near the end of propagation, an optical shock builds up in the pulse tail. This neither disrupts pulse propagation nor produces any noticeable low-energy background in the electron spectra, which is in striking contrast with most of existing GeV-scale acceleration experiments.

  6. Physical processes at work in sub-30 fs, PW laser pulse-driven plasma accelerators: Towards GeV electron acceleration experiments at CILEX facility

    Energy Technology Data Exchange (ETDEWEB)

    Beck, A., E-mail: beck@llr.in2p3.fr [Laboratoire Leprince-Ringuet – École Polytechnique, CNRS-IN2P3, Palaiseau 91128 (France); Kalmykov, S.Y., E-mail: skalmykov2@unl.edu [Department of Physics and Astronomy, University of Nebraska – Lincoln, Nebraska 68588-0299 (United States); Davoine, X. [CEA, DAM, DIF, Arpajon F-91297 (France); Lifschitz, A. [Laboratoire d' Optique Appliquée, ENSTA ParisTech-CNRS UMR7639-École Polytechnique, Palaiseau 91762 (France); Shadwick, B.A. [Department of Physics and Astronomy, University of Nebraska – Lincoln, Nebraska 68588-0299 (United States); Malka, V. [Laboratoire d' Optique Appliquée, ENSTA ParisTech-CNRS UMR7639-École Polytechnique, Palaiseau 91762 (France); Specka, A. [Laboratoire Leprince-Ringuet – École Polytechnique, CNRS-IN2P3, Palaiseau 91128 (France)

    2014-03-11

    Optimal regimes and physical processes at work are identified for the first round of laser wakefield acceleration experiments proposed at a future CILEX facility. The Apollon-10P CILEX laser, delivering fully compressed, near-PW-power pulses of sub-25 fs duration, is well suited for driving electron density wakes in the blowout regime in cm-length gas targets. Early destruction of the pulse (partly due to energy depletion) prevents electrons from reaching dephasing, limiting the energy gain to about 3 GeV. However, the optimal operating regimes, found with reduced and full three-dimensional particle-in-cell simulations, show high energy efficiency, with about 10% of incident pulse energy transferred to 3 GeV electron bunches with sub-5% energy spread, half-nC charge, and absolutely no low-energy background. This optimal acceleration occurs in 2 cm length plasmas of electron density below 10{sup 18} cm{sup −3}. Due to their high charge and low phase space volume, these multi-GeV bunches are tailor-made for staged acceleration planned in the framework of the CILEX project. The hallmarks of the optimal regime are electron self-injection at the early stage of laser pulse propagation, stable self-guiding of the pulse through the entire acceleration process, and no need for an external plasma channel. With the initial focal spot closely matched for the nonlinear self-guiding, the laser pulse stabilizes transversely within two Rayleigh lengths, preventing subsequent evolution of the accelerating bucket. This dynamics prevents continuous self-injection of background electrons, preserving low phase space volume of the bunch through the plasma. Near the end of propagation, an optical shock builds up in the pulse tail. This neither disrupts pulse propagation nor produces any noticeable low-energy background in the electron spectra, which is in striking contrast with most of existing GeV-scale acceleration experiments.

  7. PRaVDA: High Energy Physics towards proton Computed Tomography

    Energy Technology Data Exchange (ETDEWEB)

    Price, T., E-mail: t.price@bham.ac.uk

    2016-07-11

    Proton radiotherapy is an increasingly popular modality for treating cancers of the head and neck, and in paediatrics. To maximise the potential of proton radiotherapy it is essential to know the distribution, and more importantly the proton stopping powers, of the body tissues between the proton beam and the tumour. A stopping power map could be measured directly, and uncertainties in the treatment vastly reduced, if the patient were imaged with protons instead of conventional x-rays. Here we outline the application of technologies developed for High Energy Physics to provide clinical-quality proton Computed Tomography, so reducing range uncertainties and enhancing the treatment of cancer.

  8. Efficient acceleration of mutual information computation for nonrigid registration using CUDA.

    Science.gov (United States)

    Ikeda, Kei; Ino, Fumihiko; Hagihara, Kenichi

    2014-05-01

    In this paper, we propose an efficient acceleration method for the nonrigid registration of multimodal images that uses a graphics processing unit. The key contribution of our method is efficient utilization of on-chip memory for both normalized mutual information (NMI) computation and hierarchical B-spline deformation, which compose a well-known registration algorithm. We implement this registration algorithm as a compute unified device architecture program with an efficient parallel scheme and several optimization techniques such as hierarchical data organization, data reuse, and multiresolution representation. We experimentally evaluate our method with four clinical datasets consisting of up to 512 × 512 × 296 voxels. We find that exploitation of on-chip memory achieves a 12-fold increase in speed over an off-chip memory version and, therefore, it increases the efficiency of parallel execution from 4% to 46%. We also find that our method running on a GeForce GTX 580 card is approximately 14 times faster than a fully optimized CPU-based implementation running on four cores. Some multimodal registration results are also provided to understand the limitation of our method. We believe that our highly efficient method, which completes an alignment task within a few tens of seconds, will be useful to realize rapid nonrigid registration.
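
    The normalized mutual information mentioned in the record is computed from a joint intensity histogram of the two images; a short NumPy sketch of that metric (generic, not the CUDA implementation of the paper):

        import numpy as np

        def normalized_mutual_information(img_a, img_b, bins=64):
            """NMI(A, B) = (H(A) + H(B)) / H(A, B), estimated from a joint histogram."""
            joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
            p_ab = joint / joint.sum()
            p_a = p_ab.sum(axis=1)
            p_b = p_ab.sum(axis=0)

            def entropy(p):
                p = p[p > 0]
                return -np.sum(p * np.log(p))

            return (entropy(p_a) + entropy(p_b)) / entropy(p_ab)

        # Toy usage with two correlated random "images".
        rng = np.random.default_rng(0)
        a = rng.random((128, 128))
        b = 0.5 * a + 0.5 * rng.random((128, 128))
        print(normalized_mutual_information(a, b))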

  9. SecureMed: Secure Medical Computation using GPU-Accelerated Homomorphic Encryption Scheme.

    Science.gov (United States)

    Khedr, Alhassan; Gulak, Glenn

    2017-01-23

    Sharing the medical records of individuals among healthcare providers and researchers around the world can accelerate advances in medical research. While the idea seems increasingly practical due to cloud data services, maintaining patient privacy is of paramount importance. Standard encryption algorithms help protect sensitive data from outside attackers but they cannot be used to compute on this sensitive data while it remains encrypted. Homomorphic Encryption (HE) presents a very useful tool that can compute on encrypted data without the need to decrypt it. In this work, we describe an optimized NTRU-based implementation of the GSW homomorphic encryption scheme. Our results show a factor of 58× improvement in CPU performance compared to other recent work on encrypted medical data under the same security settings. Our system is built to be easily portable to GPUs, resulting in an additional speedup of up to a factor of 104× (and 410×) to offer an overall speedup of 6085× (and 24011×) using a single GPU (or four GPUs), respectively.

  10. On the coupling of fields and particles in accelerator and plasma physics

    Energy Technology Data Exchange (ETDEWEB)

    Geloni, Gianluca [European XFEL GmbH, Hamburg (Germany); Kocharyan, Vitali; Saldin, Evgeni [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)

    2016-10-15

    In accelerator and plasma physics it is generally accepted that there is no need to solve the dynamical equations for particle motion in manifestly covariant form, that is by using the coordinate-independent proper time to parameterize particle world-lines in space-time. In other words, in order to describe the dynamical processes in the laboratory frame there is no need to use the laws of relativistic kinematics. It is sufficient to take into account the relativistic dependence of the particle momentum on the velocity in Newton's second law. Therefore, the coupling of fields and particles is based, on the one hand, on the use of results from particle dynamics treated according to Newton's laws in terms of the relativistic three-momentum and, on the other hand, on the use of Maxwell's equations in standard form. In previous papers we argued that this is a misconception. The purpose of this paper is to describe in detail how to calculate the coupling between fields and particles in a correct way and how to develop a new algorithm for a particle tracking code in agreement with the use of Maxwell's equations in their standard form. Advanced textbooks on classical electrodynamics correctly tell us that Maxwell's equations in standard form in the laboratory frame and charged particles are coupled by introducing particle trajectories as projections of particle world-lines onto coordinates of the laboratory frame and by subsequently using the laboratory time to parameterize the trajectory curves. For the first time we showed a difference between conventional and covariant particle tracking results in the laboratory frame. This essential point has never received attention in the physical community. Only the solution of the dynamical equations in covariant form gives the correct coupling between field equations in standard form and particle trajectories in the laboratory frame. We conclude that previous theoretical and simulation results in

  11. Application of local area networks to accelerator control systems at the Stanford Linear Accelerator

    Energy Technology Data Exchange (ETDEWEB)

    Fox, J.D.; Linstadt, E.; Melen, R.

    1983-03-01

    The history and current status of SLAC's SDLC networks for distributed accelerator control systems are discussed. These local area networks have been used for instrumentation and control of the linear accelerator. Network topologies, protocols, physical links, and logical interconnections are discussed for specific applications in distributed data acquisition and control system, computer networks and accelerator operations.

  12. CERN Accelerator School & ELETTRA Synchrotron Light Laboratory announce a course on "Accelerator Physics" (Intermediate level), at the Abdus Salam International Center for Theoretical Physics, Adriatico Guesthouse, Trieste, Italy, 2 - 14 October 2005

    CERN Multimedia

    2005-01-01

    The Intermediate level course is clearly conceived as the logical continuation of the Introductory level course for those active in the field of Accelerator Physics. However, it is also often considered an excellent opportunity either to discover and receive basic training in a new field, or to refresh one's expertise in the field and keep it up to date.

  13. Computational Particle Physics for Event Generators and Data Analysis

    CERN Document Server

    Perret-Gallix, Denis

    2013-01-01

    High-energy physics data analysis relies heavily on the comparison between experimental and simulated data as stressed lately by the Higgs search at LHC and the recent identification of a Higgs-like new boson. The first link in the full simulation chain is the event generation both for background and for expected signals. Nowadays event generators are based on the automatic computation of matrix element or amplitude for each process of interest. Moreover, recent analysis techniques based on the matrix element likelihood method assign probabilities for every event to belong to any of a given set of possible processes. This method originally used for the top mass measurement, although computing intensive, has shown its power at LHC to extract the new boson signal from the background. Serving both needs, the automatic calculation of matrix element is therefore more than ever of prime importance for particle physics. Initiated in the eighties, the techniques have matured for the lowest order calculations (tree-le...

  14. Analyzing high energy physics data using database computing: Preliminary report

    Science.gov (United States)

    Baden, Andrew; Day, Chris; Grossman, Robert; Lifka, Dave; Lusk, Ewing; May, Edward; Price, Larry

    1991-01-01

    A proof of concept system is described for analyzing high energy physics (HEP) data using data base computing. The system is designed to scale up to the size required for HEP experiments at the Superconducting SuperCollider (SSC) lab. These experiments will require collecting and analyzing approximately 10 to 100 million 'events' per year during proton colliding beam collisions. Each 'event' consists of a set of vectors with a total length of approx. one megabyte. This represents an increase of approx. 2 to 3 orders of magnitude in the amount of data accumulated by present HEP experiments. The system is called the HEPDBC System (High Energy Physics Database Computing System). At present, the Mark 0 HEPDBC System is completed, and can produce analysis of HEP experimental data approx. an order of magnitude faster than current production software on data sets of approx. 1 GB. The Mark 1 HEPDBC System is currently undergoing testing and is designed to analyze data sets 10 to 100 times larger.

  15. A nuclear physics program at the Rare Isotope Beams Accelerator Facility in Korea

    Directory of Open Access Journals (Sweden)

    Chang-Bum Moon

    2014-02-01

    This paper outlines the new physics possibilities that fall within the field of nuclear structure and astrophysics based on experiments with radioactive ion beams at the future Rare Isotope Beams Accelerator facility in Korea. This ambitious multi-beam facility has both Isotope Separation On Line (ISOL) and fragmentation capability to produce rare isotope beams (RIBs) and will be capable of producing and accelerating beams of a wide range of nuclides with energies of a few to hundreds of MeV per nucleon. The large dynamic range of reaccelerated RIBs will allow optimization in each nuclear reaction case with respect to cross section and channel opening. The low-energy RIBs around the Coulomb barrier offer nuclear reactions such as elastic resonance scattering, one- or two-particle transfers, Coulomb multiple excitations, fusion-evaporations, and direct capture reactions for the study of very neutron-rich and proton-rich nuclides. In contrast, the high-energy RIBs produced by in-flight fragmentation with reaccelerated ions from the ISOL make it possible to explore the neutron drip lines in intermediate mass regions. The proposed studies aim at investigating exotic nuclei near and beyond the nucleon drip lines, and at exploring how nuclear many-body systems change in such extreme regions by addressing the following topics: the evolution of shell structure in areas of extreme proton-to-neutron imbalance; the study of the weak interaction in exotic decay schemes such as beta-delayed two-neutron or two-proton emission; the change of isospin symmetry in isobaric mirror nuclei at the drip lines; two-proton or two-neutron radioactivity beyond the drip lines; the role of the continuum states, including resonant states above the particle-decay threshold, in exotic nuclei; and the effects of nuclear reaction rates triggered by the unbound proton-rich nuclei on nuclear astrophysical processes.

  16. Physics and technical development of accelerators; Physique et technique des accelerateurs

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2000-03-01

    About 90 registered participants delivered more than 40 scientific papers. A large part of these presentations were of general interest concerning running projects such as the CIME accelerator at Ganil, IPHI (high-intensity proton injector), the ESRF (European Synchrotron Radiation Facility), the LHC (Large Hadron Collider), the ELYSE accelerator at Orsay, AIRIX, and the VIVITRON tandem accelerator. Other presentations highlighted the latest technological developments of accelerator components: superconducting cavities, power klystrons, high-current injectors, etc.

  17. Computations of longitudinal electron dynamics in the recirculating cw RF accelerator-recuperator for the high average power FEL

    Science.gov (United States)

    Sokolov, A. S.; Vinokurov, N. A.

    1994-03-01

    The use of optimal longitudinal phase-energy motion conditions for bunched electrons in a recirculating RF accelerator gives the possibility to increase the final electron peak current and, correspondingly, the FEL gain. The computer code RECFEL, developed for simulations of the longitudinal compression of electron bunches with high average current, essentially loading the cw RF cavities of the recirculator-recuperator, is briefly described and illustrated by some computational results.

  18. Bucharest heavy ion accelerator facility

    Energy Technology Data Exchange (ETDEWEB)

    Ceausescu, V.; Dobrescu, S.; Duma, M.; Indreas, G.; Ivascu, M.; Papureanu, S.; Pascovici, G.; Semenescu, G.

    1986-02-15

    The heavy ion accelerator facility of the Heavy Ion Physics Department at the Institute of Physics and Nuclear Engineering in Bucharest is described. The Tandem accelerator development and the operation of the first stage of the heavy ion postaccelerating system are discussed. Details are given concerning the resonance cavities, the pulsing system matching the dc beam to the RF cavities and the computer control system.

  19. Operational Radiation Protection in High-Energy Physics Accelerators: Implementation of ALARA in Design and Operation of Accelerators

    Energy Technology Data Exchange (ETDEWEB)

    Fasso, A.; Rokni, S.; /SLAC

    2011-06-30

    It used to happen often, to us accelerator radiation protection staff, to be asked by a new radiation worker: "How much dose am I still allowed?" And we smiled looking at the shocked reaction to our answer: "You are not allowed any dose." Nowadays, also thanks to improved training programs, this kind of question has become less frequent, but it is still not always easy to convince workers that staying below the exposure limits is not sufficient. After all, radiation is still the only harmful agent for which this is true: for all other risks in everyday life, from road speed limits to concentrations of hazardous chemicals in air and water, compliance with regulations is ensured by keeping below a certain value. It appears that a tendency is starting to develop to extend the radiation approach to other pollutants (1), but it will take some time before the new attitude makes its way into national legislation.

  20. Machine learning, computer vision, and probabilistic models in jet physics

    CERN Document Server

    CERN. Geneva; NACHMAN, Ben

    2015-01-01

    In this talk we present recent developments in the application of machine learning, computer vision, and probabilistic models to the analysis and interpretation of LHC events. First, we will introduce the concept of jet-images and computer vision techniques for jet tagging. Jet images enabled the connection between jet substructure and tagging with the fields of computer vision and image processing for the first time, improving the performance to identify highly boosted W bosons with respect to state-of-the-art methods, and providing a new way to visualize the discriminant features of different classes of jets, adding a new capability to understand the physics within jets and to design more powerful jet tagging methods. Second, we will present Fuzzy jets: a new paradigm for jet clustering using machine learning methods. Fuzzy jets view jet clustering as an unsupervised learning task and incorporate a probabilistic assignment of particles to jets to learn new features of the jet structure. In particular, we wi...

  1. Advanced methods for the computation of particle beam transport and the computation of electromagnetic fields and beam-cavity interactions. [Dept. of Physics, Univ. of Maryland, College Park, Maryland]

    Energy Technology Data Exchange (ETDEWEB)

    Dragt, A.J.; Gluckstern, R.L.

    1993-06-01

    The University of Maryland Dynamical Systems and Accelerator Theory Group has been carrying out long-term research work in the general area of Dynamical Systems with a particular emphasis on applications to Accelerator Physics. This work is broadly divided into two tasks: Charged Particle Beam Transport and the Computation of Electromagnetic Fields and Beam-Cavity Interactions. Each of these tasks is described briefly. Work is devoted both to the development of new methods and the application of these methods to problems of current interest in accelerator physics including the theoretical performance of present and proposed high energy machines. In addition to its research effort, the Dynamical Systems and Accelerator Theory Group is actively engaged in the education of students and postdoctoral research associates.

  2. Activity report of working party on reactor physics of accelerator-driven system. July 1999 to March 2001

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2002-02-01

    Under the Research Committee on Reactor Physics, the Working Party on Reactor Physics of Accelerator-Driven System (ADS-WP) was set in July 1999 to review and investigate special subjects related to reactor physics research for the Accelerator-Driven Subcritical System (ADS). The ADS-WP, at the first meeting, discussed a guideline of its activity for two years and decided to concentrate upon three subjects: (1) neutron transport calculations in high energy range, (2) static and kinetic (safety-related) characteristics of subcritical system, and (3) system design including ADS concepts and elemental technology developments required. The activity of ADS-WP continued from July 1999 to March 2001. In this duration, the members of ADS-WP met together four times and discussed the above subjects. In addition, the ADS-WP conducted a questionnaire on requests and proposals for the plan of Transmutation Physics Experimental Facility in the High-Intensity Proton Accelerator Project, which is a joint project between JAERI and KEK (High Energy Accelerator Research Organization). This report summarizes the results obtained by the above ADS-WP activity. (author)

  3. Cardiac Acceleration at the Onset of Exercise : A Potential Parameter for Monitoring Progress During Physical Training in Sports and Rehabilitation

    NARCIS (Netherlands)

    Hettinga, Florentina J.; Monden, Paul G.; van Meeteren, Nico L. U.; Daanen, Hein A. M.

    There is a need for easy-to-use methods to assess training progress in sports and rehabilitation research. The present review investigated whether cardiac acceleration at the onset of physical exercise (HRonset) can be used as a monitoring variable. The digital databases of Scopus and PubMed were

  4. Cardiac acceleration at the onset of exercise: A potential parameter for monitoring progress during physical training in sports and rehabilitation

    NARCIS (Netherlands)

    Hettinga, F.J.; Monden, P.G.; Meeteren, N.L.U. van; Daanen, H.A.M.

    2014-01-01

    There is a need for easy-to-use methods to assess training progress in sports and rehabilitation research. The present review investigated whether cardiac acceleration at the onset of physical exercise (HRonset) can be used as a monitoring variable. The digital databases of Scopus and PubMed were

  5. Cardiac Acceleration at the Onset of Exercise : A Potential Parameter for Monitoring Progress During Physical Training in Sports and Rehabilitation

    NARCIS (Netherlands)

    Hettinga, Florentina J.; Monden, Paul G.; van Meeteren, Nico L. U.; Daanen, Hein A. M.

    2014-01-01

    There is a need for easy-to-use methods to assess training progress in sports and rehabilitation research. The present review investigated whether cardiac acceleration at the onset of physical exercise (HRonset) can be used as a monitoring variable. The digital databases of Scopus and PubMed were se

  6. Accelerating groundwater flow simulation in MODFLOW using JASMIN-based parallel computing.

    Science.gov (United States)

    Cheng, Tangpei; Mo, Zeyao; Shao, Jingli

    2014-01-01

    To accelerate the groundwater flow simulation process, this paper reports our work on developing an efficient parallel simulator through rebuilding the well-known software MODFLOW on JASMIN (J Adaptive Structured Meshes applications Infrastructure). The rebuilding process is achieved by designing patch-based data structures and parallel algorithms as well as adding slight modifications to the computational flow and subroutines in MODFLOW. Both the memory requirements and computing efforts are distributed among all processors; and to reduce communication cost, data transfers are batched and conveniently handled by adding ghost nodes to each patch. To further improve performance, constant-head/inactive cells are tagged and neglected during the linear solving process and an efficient load balancing strategy is presented. The accuracy and efficiency are demonstrated through modeling three scenarios: The first application is a field flow problem located at Yanming Lake in China to help design a reasonable quantity of groundwater exploitation. Desirable numerical accuracy and significant performance enhancement are obtained. Typically, the tagged program with the load balancing strategy running on 40 cores is six times faster than the fastest MICCG-based MODFLOW program. The second test is simulating flow in a highly heterogeneous aquifer. The AMG-based JASMIN program running on 40 cores is nine times faster than the GMG-based MODFLOW program. The third test is a simplified transient flow problem with on the order of tens of millions of cells to examine the scalability. Compared to 32 cores, parallel efficiencies of 77% and 68% are obtained on 512 and 1024 cores, respectively, which indicates impressive scalability.
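
    The ghost-node bookkeeping mentioned above is ordinary halo exchange; the serial sketch below (Python/NumPy, hypothetical function, a stand-in for what JASMIN does between processors) shows two patches that share a vertical boundary:

        import numpy as np

        def exchange_ghost_columns(left, right):
            """Fill one ghost column on each side of a shared patch boundary.
            Both patches are 2-D arrays whose first and last columns are ghost cells."""
            left[:, -1] = right[:, 1]    # left patch's right ghost <- right's first interior column
            right[:, 0] = left[:, -2]    # right patch's left ghost <- left's last interior column

        # usage: two 4 x 5 patches whose outermost columns serve as ghost cells
        a = np.arange(20, dtype=float).reshape(4, 5)
        b = np.arange(100, 120, dtype=float).reshape(4, 5)
        exchange_ghost_columns(a, b)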

  7. Using spatial principles to optimize distributed computing for enabling the physical science discoveries.

    Science.gov (United States)

    Yang, Chaowei; Wu, Huayi; Huang, Qunying; Li, Zhenlong; Li, Jing

    2011-04-05

    Contemporary physical science studies rely on the effective analyses of geographically dispersed spatial data and simulations of physical phenomena. Single computers and generic high-end computing are not sufficient to process the data for complex physical science analysis and simulations, which can be successfully supported only through distributed computing, best optimized through the application of spatial principles. Spatial computing, the computing aspect of a spatial cyberinfrastructure, refers to a computing paradigm that utilizes spatial principles to optimize distributed computers to catalyze advancements in the physical sciences. Spatial principles govern the interactions between scientific parameters across space and time by providing the spatial connections and constraints to drive the progression of the phenomena. Therefore, spatial computing studies could better position us to leverage spatial principles in simulating physical phenomena and, by extension, advance the physical sciences. Using geospatial science as an example, this paper illustrates through three research examples how spatial computing could (i) enable data intensive science with efficient data/services search, access, and utilization, (ii) facilitate physical science studies with enabling high-performance computing capabilities, and (iii) empower scientists with multidimensional visualization tools to understand observations and simulations. The research examples demonstrate that spatial computing is of critical importance to design computing methods to catalyze physical science studies with better data access, phenomena simulation, and analytical visualization. We envision that spatial computing will become a core technology that drives fundamental physical science advancements in the 21st century.

  8. Users' guide for the Accelerated Leach Test Computer Program

    Energy Technology Data Exchange (ETDEWEB)

    Fuhrmann, M.; Heiser, J.H.; Pietrzak, R.; Franz, Eena-Mai; Colombo, P.

    1990-11-01

    This report is a step-by-step guide for the Accelerated Leach Test (ALT) Computer Program developed to accompany a new leach test for solidified waste forms. The program is designed to be used as a tool for performing the calculations necessary to analyze leach test data, a modeling program to determine if diffusion is the operating leaching mechanism (and, if not, to indicate other possible mechanisms), and a means to make extrapolations using the diffusion models. The ALT program contains four mathematical models that can be used to represent the data. The leaching mechanisms described by these models are: (1) diffusion through a semi-infinite medium (for low fractional releases), (2) diffusion through a finite cylinder (for high fractional releases), (3) diffusion plus partitioning of the source term, (4) solubility limited leaching. Results are presented as a graph containing the experimental data and the best-fit model curve. Results can also be output as LOTUS 1-2-3 files. 2 refs.
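
    For orientation, the first of these models (diffusion from a semi-infinite medium) is usually written in the form below, in which the cumulative fraction leached grows with the square root of time; this is the standard textbook expression, quoted here for context rather than taken from the ALT documentation:

        \frac{\sum_{n} a_{n}}{A_{0}} \;=\; 2\,\frac{S}{V}\,\sqrt{\frac{D_{e}\,t}{\pi}}

    Here \(\sum_n a_n / A_0\) is the cumulative fraction of the species released, \(S/V\) the surface-to-volume ratio of the waste form, \(D_e\) the effective diffusion coefficient, and \(t\) the elapsed leaching time.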

  9. On the Coupling of Fields and Particles in Accelerator and Plasma Physics

    CERN Document Server

    Geloni, Gianluca; Saldin, Evgeni

    2016-01-01

    In accelerator and plasma physics it is accepted that there is no need to solve the dynamical equations for particles in covariant form, i.e. by using the coordinate-independent proper time to parameterize particle world-lines in space-time: to describe dynamics in the laboratory frame, there is no need to use the laws of relativistic kinematics. It is sufficient to account for the relativistic dependence of particle momenta on velocity in the second Newton's law. Then, the coupling of fields and particles is based on the use of results from particle dynamics treated according to Newton's laws in terms of the relativistic three-momentum and on the use of Maxwell's equations in standard form. Previously, we argued that this is a misconception. Here we describe in detail how to calculate the coupling between fields and particles in a correct way and how to develop a new algorithm for a particle tracking code in agreement with the use of Maxwell's equations in their standard form. Advanced textbooks on class...

  10. Physics teaching and visual deficiency: learning activities about the concept of acceleration of gravity

    Directory of Open Access Journals (Sweden)

    Eder Pires de Camargo

    2006-12-01

    In this paper we present the analysis of two physics teaching activities that were developed for and applied in a group of visually impaired students. The content of the activities was focused on the concept of gravitational acceleration. In the first activity the concept was explored by means of the movement of an object on an inclined plane; in the second, it was explored through the movement of a metallic disk inside a tube. Both experimental settings emitted audible signals. In this sense, all the "observational" practices were based on auditory perception of the gravitational phenomena, which permitted discussion among the students, in small groups, and a debate aiming at a general conclusion. The analysis of the data was based on a category labeled "comprehension", which illuminated some attitudes of the students throughout the experiments such as: the sharing of ideas, the defense and arguing of meanings, and the reconstruction of meanings. As conclusions we can say that the activities were valuable for motivating the students and for giving them some background for: (1) performing experiments; (2) observing phenomena through an auditory channel; (3) collecting and analyzing data related to the variation of speed; (4) sharing, arguing and reformulating hypotheses during the discussions.

  11. Discovery Monday - Physics for medicine: the use of accelerators in therapy

    CERN Multimedia

    2004-01-01

    What does research at CERN have to do with medicine? Perhaps very little at first glance. And yet particle beams are proving to be efficient weapons in the fight against certain diseases. Doctors and physicists will explain how and why at the next Discovery Monday, to be held at Microcosm on 3 May. Various technologies and instruments will be presented during the evening. You will learn, for example, how scientists use radioisotopes to destroy tumours without damaging the surrounding tissues. You will also find out about LIBO, a small linear accelerator used for treating deep-seated tumours. Before therapy can begin, it is vital to make the right diagnosis. On this subject, radiologists will be showing how to interpret a number of X-rays, as well as teaching the youngest visitors about their anatomy and explaining how useful particle physics can be in medicine. The event will take place at Microcosm on 3rd May, from 7.30 p.m. to 9.00 p.m. Entrance free. For further information see: http://www.ce...

  12. J-PAS: The Javalambre-Physics of the Accelerated Universe Astrophysical Survey

    CERN Document Server

    Benitez, N; Moles, M; Sodre, L; Cenarro, J; Marin-Franch, A; Taylor, K; Cristobal, D; Fernandez-Soto, A; de Oliveira, C Mendes; Cepa-Nogue, J; Abramo, L R; Alcaniz, J S; Overzier, R; Hernandez-Monteagudo, C; Alfaro, E J; Kanaan, A; Carvano, J M; Reis, R R R; Gonzalez, E Martinez; Ascaso, B; Ballesteros, F; Xavier, H S; Varela, J; Ederoclite, A; Ramio, H Vazquez; Broadhurst, T; Cypriano, E; Angulo, R; Diego, J M; Zandivarez, A; Diaz, E; Melchior, P; Umetsu, K; Spinelli, P F; Zitrin, A; Coe, D; Yepes, G; Vielva, P; Sahni, V; Marcos-Caballero, A; Kitaura, F Shu; Maroto, A L; Masip, M; Tsujikawa, S; Carneiro, S; Nuevo, J Gonzalez; Carvalho, G C; Reboucas, M J; Carvalho, J C; Abdalla, E; Bernui, A; Pigozzo, C; Ferreira, E G M; Devi, N Chandrachani; Bengaly, C A P; Campista, M; Amorim, A; Asari, N V; Bongiovanni, A; Bonoli, S; Bruzual, G; Cardiel, N; Cava, A; Fernandes, R Cid; Coelho, P; Cortesi, A; Delgado, R G; Garcia, L Diaz; Espinosa, J M R; Galliano, E; Gonzalez-Serrano, J I; Falcon-Barroso, J; Fritz, J; Fernandes, C; Gorgas, J; Hoyos, C; Jimenez-Teja, Y; Lopez-Aguerri, J A; Juan, C Lopez-San; Mateus, A; Molino, A; Novais, P; OMill, A; Oteo, I; Perez-Gonzalez, P G; Poggianti, B; Proctor, R; Ricciardelli, E; Sanchez-Blazquez, P; Storchi-Bergmann, T; Telles, E; Schoennell, W; Trujillo, N; Vazdekis, A; Viironen, K; Daflon, S; Aparicio-Villegas, T; Rocha, D; Ribeiro, T; Borges, M; Martins, S L; Marcolino, W; Martinez-Delgado, D; Perez-Torres, M A; Siffert, B B; Calvao, M O; Sako, M; Kessler, R; Alvarez-Candal, A; De Pra, M; Roig, F; Lazzaro, D; Gorosabel, J; de Oliveira, R Lopes; Lima-Neto, G B; Irwin, J; Liu, J F; Alvarez, E; Balmes, I; Chueca, S; Costa-Duarte, M V; da Costa, A A; Dantas, M L L; Diaz, A Y; Fabregat, J; Ferrari, F; Gavela, B; Gracia, S G; Gruel, N; Gutierrez, J L L; Guzman, R; Hernandez-Fernandez, J D; Herranz, D; Hurtado-Gil, L; Jablonsky, F; Laporte, R; Tiran, L L Le; Licandro, J; Lima, M; Martin, E; Martinez, V; Montero, J J C; Penteado, P; Pereira, C B; Peris, V; Quilis, V; Sanchez-Portal, M; Soja, A C; Solano, E; Torra, J; Valdivielso, L

    2014-01-01

    The Javalambre-Physics of the Accelerated Universe Astrophysical Survey (J-PAS) is a narrow band, very wide field Cosmological Survey to be carried out from the Javalambre Observatory in Spain with a purpose-built, dedicated 2.5m telescope and a 4.7 sq.deg. camera with 1.2Gpix. Starting in late 2015, J-PAS will observe 8500sq.deg. of Northern Sky and measure $0.003(1+z)$ photo-z for $9\times10^7$ LRG and ELG galaxies plus several million QSOs, sampling an effective volume of $\sim 14$ Gpc$^3$ up to $z=1.3$ and becoming the first radial BAO experiment to reach Stage IV. J-PAS will detect $7\times 10^5$ galaxy clusters and groups, setting constraints on Dark Energy which rival those obtained from its BAO measurements. Thanks to the superb characteristics of the site (seeing ~0.7 arcsec), J-PAS is expected to obtain a deep, sub-arcsec image of the Northern sky, which combined with its unique photo-z precision will produce one of the most powerful cosmological lensing surveys before the arrival of Euclid. J-PAS un...

  13. Comparison of dosimetric characteristics of Siemens virtual and physical wedges for ONCOR linear accelerator

    Directory of Open Access Journals (Sweden)

    Attalla Ehab

    2010-01-01

    Dosimetric properties of the virtual wedge (VW) and physical wedge (PW) in 6- and 10-MV photon beams from a Siemens ONCOR linear accelerator, including wedge factors, depth doses, dose profiles, and peripheral doses, are compared. While there is a great difference in the absolute values of the wedge factors, VW factors (VWFs) and PW factors (PWFs) have a similar trend as a function of field size. PWFs have a stronger depth dependence than VWFs due to beam hardening in PW fields. VW dose profiles in the wedge direction, in general, match very well with those of PW, except in the toe area of large wedge angles with large field sizes. Dose profiles in the non-wedge direction show a significant reduction in PW fields due to off-axis beam softening and oblique filtration. PW fields have significantly higher peripheral doses than open and VW fields. VW fields have similar surface doses to the open fields, while PW fields have lower surface doses. Surface doses for both VW and PW increase with field size and slightly with wedge angle. For VW fields with wedge angles of 45° and less, the initial gap up to 3 cm is dosimetrically acceptable when compared to dose profiles of PW. VW fields in general use fewer monitor units than PW fields.

  14. Graphics Processing Unit-Accelerated Code for Computing Second-Order Wiener Kernels and Spike-Triggered Covariance.

    Science.gov (United States)

    Mano, Omer; Clark, Damon A

    2017-01-01

    Sensory neuroscience seeks to understand and predict how sensory neurons respond to stimuli. Nonlinear components of neural responses are frequently characterized by the second-order Wiener kernel and the closely-related spike-triggered covariance (STC). Recent advances in data acquisition have made it increasingly common and computationally intensive to compute second-order Wiener kernels/STC matrices. In order to speed up this sort of analysis, we developed a graphics processing unit (GPU)-accelerated module that computes the second-order Wiener kernel of a system's response to a stimulus. The generated kernel can be easily transformed for use in standard STC analyses. Our code speeds up such analyses by factors of over 100 relative to current methods that utilize central processing units (CPUs). It works on any modern GPU and may be integrated into many data analysis workflows. This module accelerates data analysis so that more time can be spent exploring parameter space and interpreting data.
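
    For context, the spike-triggered covariance that such a module computes is the spike-weighted covariance of the stimulus about the spike-triggered average; a minimal CPU-side sketch (NumPy, hypothetical function name, no GPU code) is:

        import numpy as np

        def spike_triggered_covariance(stimuli, spikes):
            """stimuli: (n_samples, n_dims) stimulus history vectors;
            spikes: (n_samples,) spike counts. Returns the spike-triggered
            average (STA) and the spike-triggered covariance (STC)."""
            w = spikes / spikes.sum()                    # spike-count weights
            sta = w @ stimuli                            # spike-triggered average
            centered = stimuli - sta
            stc = (centered * w[:, None]).T @ centered   # weighted covariance about the STA
            return sta, stc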

  15. GPU Acceleration of the Locally Selfconsistent Multiple Scattering Code for First Principles Calculation of the Ground State and Statistical Physics of Materials

    Energy Technology Data Exchange (ETDEWEB)

    Eisenbach, Markus [ORNL; Larkin, Jeff [NVIDIA, Santa Clara, CA; Lutjens, Justin [NVIDIA, Santa Clara, CA; Rennich, Steven [NVIDIA, Santa Clara, CA; Rogers, James H [ORNL

    2016-01-01

    The Locally Self-consistent Multiple Scattering (LSMS) code solves the first principles Density Functional theory Kohn-Sham equation for a wide range of materials with a special focus on metals, alloys and metallic nano-structures. It has traditionally exhibited near perfect scalability on massively parallel high performance computer architectures. We present our efforts to exploit GPUs to accelerate the LSMS code to enable first principles calculations of O(100,000) atoms and statistical physics sampling of finite temperature properties. Using the Cray XK7 system Titan at the Oak Ridge Leadership Computing Facility we achieve a sustained performance of 14.5PFlop/s and a speedup of 8.6 compared to the CPU only code.

  16. Physics for computer science students with emphasis on atomic and semiconductor physics

    CERN Document Server

    Garcia, Narciso

    1991-01-01

    This text is the product of several years' effort to develop a course to fill a specific educational gap. It is our belief that computer science students should know how a computer works, particularly in light of rapidly changing technologies. The text was designed for computer science students who have a calculus background but have not necessarily taken prior physics courses. However, it is clearly not limited to these students. Anyone who has had first-year physics can start with Chapter 17. This includes all science and engineering students who would like a survey course of the ideas, theories, and experiments that made our modern electronics age possible. This textbook is meant to be used in a two-semester sequence. Chapters 1 through 16 can be covered during the first semester, and Chapters 17 through 28 in the second semester. At Queens College, where preliminary drafts have been used, the material is presented in three lecture periods (50 minutes each) and one recitation period per week, 15 weeks p...

  17. Evaluation of ‘OpenCL for FPGA’ for Data Acquisition and Acceleration in High Energy Physics

    Science.gov (United States)

    Sridharan, Srikanth

    2015-12-01

    The increase in the data acquisition and processing needs of High Energy Physics experiments has made it more essential to use FPGAs to meet those needs. However, harnessing the capabilities of FPGAs has been hard for anyone but expert FPGA developers. The arrival of OpenCL, with the two major FPGA vendors supporting it, offers an easy software-based approach to taking advantage of FPGAs in applications such as High Energy Physics. OpenCL is a language for using heterogeneous architectures in order to accelerate applications. However, FPGAs are capable of far more than acceleration, hence it is interesting to explore whether OpenCL can be used to take advantage of FPGAs for more generic applications. To answer these questions, especially in the context of High Energy Physics, two applications, a DAQ module and an acceleration workload, were tested for implementation with OpenCL on FPGAs. The challenges of using OpenCL for a DAQ application and their solutions, together with the performance of the OpenCL-based acceleration, are discussed. Many of the design elements needed to realize a DAQ system in OpenCL already exist, mostly as FPGA vendor extensions, but a small number of elements were found to be missing. For acceleration of OpenCL applications, using FPGAs has become as easy as using GPUs. OpenCL has the potential for a massive gain in productivity and ease of use, enabling non-FPGA experts to design, debug and maintain the code. Also, FPGA power consumption is much lower than that of other implementations. This paper describes one of the first attempts to explore the use of OpenCL for applications outside the acceleration workloads.

  18. Separating movement and gravity components in an acceleration signal and implications for the assessment of human daily physical activity.

    Science.gov (United States)

    van Hees, Vincent T; Gorzelniak, Lukas; Dean León, Emmanuel Carlos; Eder, Martin; Pias, Marcelo; Taherian, Salman; Ekelund, Ulf; Renström, Frida; Franks, Paul W; Horsch, Alexander; Brage, Søren

    2013-01-01

    Human body acceleration is often used as an indicator of daily physical activity in epidemiological research. Raw acceleration signals contain three basic components: movement, gravity, and noise. Separation of these becomes increasingly difficult during rotational movements. We aimed to evaluate five different methods (metrics) of processing acceleration signals on their ability to remove the gravitational component of acceleration during standardised mechanical movements and the implications for human daily physical activity assessment. An industrial robot rotated accelerometers in the vertical plane. Radius, frequency, and angular range of motion were systematically varied. Three metrics (Euclidian norm minus one [ENMO], Euclidian norm of the high-pass filtered signals [HFEN], and HFEN plus Euclidean norm of low-pass filtered signals minus 1 g [HFEN+]) were derived for each experimental condition and compared against the reference acceleration (forward kinematics) of the robot arm. We then compared metrics derived from human acceleration signals from the wrist and hip in 97 adults (22-65 yr), and wrist in 63 women (20-35 yr) in whom daily activity-related energy expenditure (PAEE) was available. In the robot experiment, HFEN+ had lowest error during (vertical plane) rotations at an oscillating frequency higher than the filter cut-off frequency while for lower frequencies ENMO performed better. In the human experiments, metrics HFEN and ENMO on hip were most discrepant (within- and between-individual explained variance of 0.90 and 0.46, respectively). ENMO, HFEN and HFEN+ explained 34%, 30% and 36% of the variance in daily PAEE, respectively, compared to 26% for a metric which did not attempt to remove the gravitational component (metric EN). In conclusion, none of the metrics as evaluated systematically outperformed all other metrics across a wide range of standardised kinematic conditions. However, choice of metric explains different degrees of variance in
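
    The ENMO and HFEN metrics named above are simple to compute from a triaxial signal expressed in g units; a minimal sketch follows (Python with NumPy/SciPy; the 0.2 Hz cut-off frequency is an assumed illustrative value rather than necessarily the one used in the study):

        import numpy as np
        from scipy.signal import butter, filtfilt

        def enmo(acc, g=1.0):
            """Euclidean norm minus one (ENMO): vector magnitude minus 1 g,
            with negative values truncated to zero. acc is an (n, 3) array in g units."""
            return np.maximum(np.linalg.norm(acc, axis=1) - g, 0.0)

        def hfen(acc, fs, cutoff=0.2, order=4):
            """Euclidean norm of the high-pass filtered axes (HFEN).
            fs is the sampling rate in Hz; cutoff is an assumed illustrative value."""
            b, a = butter(order, cutoff / (fs / 2), btype='high')
            return np.linalg.norm(filtfilt(b, a, acc, axis=0), axis=1)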

  19. Separating movement and gravity components in an acceleration signal and implications for the assessment of human daily physical activity.

    Directory of Open Access Journals (Sweden)

    Vincent T van Hees

    INTRODUCTION: Human body acceleration is often used as an indicator of daily physical activity in epidemiological research. Raw acceleration signals contain three basic components: movement, gravity, and noise. Separation of these becomes increasingly difficult during rotational movements. We aimed to evaluate five different methods (metrics) of processing acceleration signals on their ability to remove the gravitational component of acceleration during standardised mechanical movements and the implications for human daily physical activity assessment. METHODS: An industrial robot rotated accelerometers in the vertical plane. Radius, frequency, and angular range of motion were systematically varied. Three metrics (Euclidian norm minus one [ENMO], Euclidian norm of the high-pass filtered signals [HFEN], and HFEN plus Euclidean norm of low-pass filtered signals minus 1 g [HFEN+]) were derived for each experimental condition and compared against the reference acceleration (forward kinematics) of the robot arm. We then compared metrics derived from human acceleration signals from the wrist and hip in 97 adults (22-65 yr), and wrist in 63 women (20-35 yr) in whom daily activity-related energy expenditure (PAEE) was available. RESULTS: In the robot experiment, HFEN+ had lowest error during (vertical plane) rotations at an oscillating frequency higher than the filter cut-off frequency while for lower frequencies ENMO performed better. In the human experiments, metrics HFEN and ENMO on hip were most discrepant (within- and between-individual explained variance of 0.90 and 0.46, respectively). ENMO, HFEN and HFEN+ explained 34%, 30% and 36% of the variance in daily PAEE, respectively, compared to 26% for a metric which did not attempt to remove the gravitational component (metric EN). CONCLUSION: In conclusion, none of the metrics as evaluated systematically outperformed all other metrics across a wide range of standardised kinematic conditions. However, choice ...

  20. Limited-data computed tomography algorithms for the physical sciences.

    Science.gov (United States)

    Verhoeven, D

    1993-07-10

    Five limited-data computed tomography algorithms are compared. The algorithms used are adapted versions of the algebraic reconstruction technique, the multiplicative algebraic reconstruction technique, the Gerchberg-Papoulis algorithm, a spectral extrapolation algorithm descended from that of Harris [J. Opt. Soc. Am. 54, 931-936 (1964)], and an algorithm based on the singular value decomposition technique. These algorithms were used to reconstruct phantom data with realistic levels of noise from a number of different imaging geometries. The phantoms, the imaging geometries, and the noise were chosen to simulate the conditions encountered in typical computed tomography applications in the physical sciences, and the implementations of the algorithms were optimized for these applications. The multiplicative algebraic reconstruction technique algorithm gave the best results overall; the algebraic reconstruction technique gave the best results for very smooth objects or very noisy (20-dB signal-to-noise ratio) data. My implementations of both of these algorithms incorporate a priori knowledge of the sign of the object, its extent, and its smoothness. The smoothness of the reconstruction is enforced through the use of an appropriate object model (by use of cubic B-spline basis functions and a number of object coefficients appropriate to the object being reconstructed). The average reconstruction error with the multiplicative algebraic reconstruction technique was 1.7% of the maximum phantom value for a phantom with moderate-to-steep gradients, using data from five viewing angles with a 30-dB signal-to-noise ratio.
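
    Among the algorithms compared, the algebraic reconstruction technique is the simplest to state: it sweeps over the measured ray sums and relaxes the current image estimate toward each ray equation in turn, optionally enforcing a non-negativity (sign) constraint. The following is a minimal dense-matrix sketch in Python (hypothetical names; the implementations in the paper additionally use cubic B-spline object models):

        import numpy as np

        def art_reconstruct(A, b, n_iter=10, relax=0.5):
            """Kaczmarz-style ART sketch. A is the (n_rays, n_pixels) system matrix
            of path lengths and b the vector of measured projections."""
            x = np.zeros(A.shape[1])
            row_norms = (A * A).sum(axis=1)
            for _ in range(n_iter):
                for i in range(A.shape[0]):
                    if row_norms[i] > 0:
                        x += relax * (b[i] - A[i] @ x) / row_norms[i] * A[i]
                np.clip(x, 0.0, None, out=x)   # a priori sign (non-negativity) constraint
            return x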

  1. Internet computer coaches for introductory physics problem solving

    Science.gov (United States)

    Xu Ryan, Qing

    The ability to solve problems in a variety of contexts is becoming increasingly important in our rapidly changing technological society. Problem-solving is a complex process that is important for everyday life and crucial for learning physics. Although there is a great deal of effort to improve student problem solving skills throughout the educational system, national studies have shown that the majority of students emerge from such courses having made little progress toward developing good problem-solving skills. The Physics Education Research Group at the University of Minnesota has been developing Internet computer coaches to help students become more expert-like problem solvers. During the Fall 2011 and Spring 2013 semesters, the coaches were introduced into large sections (200+ students) of the calculus-based introductory mechanics course at the University of Minnesota. This dissertation will address the research background of the project, including the pedagogical design of the coaches and the assessment of problem solving. The methodological framework of conducting experiments will be explained. The data collected from the large-scale experimental studies will be discussed from the following aspects: the usage and usability of these coaches; the usefulness perceived by students; and the usefulness measured by final exam and a problem-solving rubric. It will also address the implications drawn from this study, including using this data to direct future coach design and difficulties in conducting authentic assessment of problem-solving.

  2. Aneesur Rahman Prize for Computational Physics Lecture: Addressing Dirac's Challenge

    Science.gov (United States)

    Chelikowsky, James

    2013-03-01

    After the invention of quantum mechanics, P. A. M. Dirac made the following observation: ``The underlying physical laws necessary for the mathematical theory of a large part of physics and the whole of chemistry are thus completely known, and the difficulty is only that the exact application of these laws leads to equations much too complicated to be soluble. It therefore becomes desirable that approximate practical methods of applying quantum mechanics should be developed, which can lead to an explanation of the main features of complex atomic systems...'' The creation of ``approximate practical methods'' in response to Dirac's challenge has included the one electron picture, density functional theory and the pseudopotential concept. The combination of such methods in conjunction with contemporary computational platforms and new algorithms offer the possibility of predicting properties of materials solely from knowledge of the atomic species present. I will give an overview of progress in this field with an emphasis on materials at the nanoscale. Support from the Department of Energy and the National Science Foundation is acknowledged.

  3. Towards a novel laser-driven method of exotic nuclei extraction-acceleration for fundamental physics and technology

    Science.gov (United States)

    Nishiuchi, M.; Sakaki, H.; Esirkepov, T. Zh.; Nishio, K.; Pikuz, T. A.; Faenov, A. Ya.; Skobelev, I. Yu.; Orlandi, R.; Pirozhkov, A. S.; Sagisaka, A.; Ogura, K.; Kanasaki, M.; Kiriyama, H.; Fukuda, Y.; Koura, H.; Kando, M.; Yamauchi, T.; Watanabe, Y.; Bulanov, S. V.; Kondo, K.; Imai, K.; Nagamiya, S.

    2016-04-01

    A combination of a petawatt laser and nuclear physics techniques can crucially facilitate the measurement of exotic nuclei properties. With numerical simulations and laser-driven experiments we show prospects for the Laser-driven Exotic Nuclei extraction-acceleration method proposed in [M. Nishiuchi et al., Phys. Plasmas 22, 033107 (2015)]: a femtosecond petawatt laser, irradiating a target bombarded by an external ion beam, extracts from the target and accelerates to a few GeV the highly charged, short-lived heavy exotic nuclei created in the target via nuclear reactions.

  4. Physics education through computational tools: the case of geometrical and physical optics

    Science.gov (United States)

    Rodríguez, Y.; Santana, A.; Mendoza, L. M.

    2013-09-01

    Recently, with the development of more powerful and accurate computational tools, the inclusion of new didactic materials in the classroom is known to have increased. However, the form in which these materials can be used to enhance the learning process is still under debate. Many different methodologies have been suggested for constructing new relevant curricular material and, among them, just-in-time teaching (JiTT) has arisen as an effective and successful way to improve the content of classes. In this paper, we will show the implemented pedagogic strategies for the courses in geometrical and physical optics for students of optometry. Thus, the use of the GeoGebra software for the geometrical optics class and the employment of new in-house software for the physical optics class, created using the high-level programming language Python, is shown, together with the corresponding activities developed for each of these applets.
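
    As a flavour of the kind of content such applets can carry (a generic illustration, not the in-house software described in the paper), a Fraunhofer single-slit intensity profile for the physical optics class takes only a few lines of Python:

        import numpy as np

        def single_slit_intensity(theta, slit_width, wavelength):
            """Fraunhofer single-slit diffraction intensity, normalized to 1 at theta = 0."""
            beta = np.pi * slit_width * np.sin(theta) / wavelength
            return np.sinc(beta / np.pi) ** 2     # np.sinc(x) = sin(pi*x)/(pi*x)

        # example: 50-micron slit, 633 nm light, angles out to +/- 5 degrees
        theta = np.linspace(-np.radians(5), np.radians(5), 1001)
        intensity = single_slit_intensity(theta, 50e-6, 633e-9)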

  5. Human Pacman: A Mobile Augmented Reality Entertainment System Based on Physical, Social, and Ubiquitous Computing

    Science.gov (United States)

    Cheok, Adrian David

    This chapter details the Human Pacman system to illuminate entertainment computing which ventures to embed the natural physical world seamlessly with a fantasy virtual playground by capitalizing on infrastructure provided by mobile computing, wireless LAN, and ubiquitous computing. With Human Pacman, we have a physical role-playing computer fantasy together with real human-social and mobile-gaming that emphasizes collaboration and competition between players in a wide outdoor physical area that allows natural wide-area human-physical movements. Pacmen and Ghosts are now real human players in the real world experiencing mixed computer graphics fantasy-reality provided by the wearable computers they carry. Virtual cookies and actual tangible physical objects are incorporated into the game play to provide novel experiences of seamless transitions between the real and virtual worlds. This is an example of a new form of gaming that anchors on physicality, mobility, social interaction, and ubiquitous computing.

  6. Proceedings of the GPU computing in high-energy physics conference 2014 GPUHEP2014

    Energy Technology Data Exchange (ETDEWEB)

    Bonati, Claudio; D'Elia, Massimo; Lamanna, Gianluca; Sozzi, Marco (eds.)

    2015-06-15

    The International Conference on GPUs in High-Energy Physics was held from September 10 to 12, 2014 at the University of Pisa, Italy. It represented a larger scale follow-up to a set of workshops which indicated the rising interest of the HEP community, experimentalists and theorists alike, towards the use of inexpensive and massively parallel computing devices, for very diverse purposes. The conference was organized in plenary sessions of invited and contributed talks, and poster presentations on the following topics: - GPUs in triggering applications - Low-level trigger systems based on GPUs - Use of GPUs in high-level trigger systems - GPUs in tracking and vertexing - Challenges for triggers in future HEP experiments - Reconstruction and Monte Carlo software on GPUs - Software frameworks and tools for GPU code integration - Hard real-time use of GPUs - Lattice QCD simulation - GPUs in phenomenology - GPUs for medical imaging purposes - GPUs in neutron and photon science - Massively parallel computations in HEP - Code parallelization. "GPU computing in High-Energy Physics" attracted 78 registrants to Pisa. The 38 oral presentations included talks on specific topics in experimental and theoretical applications of GPUs, as well as review talks on applications and technology. Five posters were also presented, and were introduced by a short plenary oral illustration. A company exhibition was hosted on site. The conference consisted of 12 plenary sessions, together with a social program which included a banquet and guided excursions around Pisa. It was overall an enjoyable experience, offering an opportunity to share ideas and opinions, and getting updated on other participants' work in this emerging field, as well as being a valuable introduction for newcomers interested in learning more about the use of GPUs as accelerators for scientific progress on the elementary constituents of matter and energy.

  7. Physics, Computer Science and Mathematics Division. Annual report, 1 January--31 December 1977. [LBL, 1977

    Energy Technology Data Exchange (ETDEWEB)

    Lepore, J.V. (ed.)

    1977-01-01

    This annual report of the Physics, Computer Science and Mathematics Division describes the scientific research and other work carried out within the Division during 1977. The Division is concerned with work in experimental and theoretical physics, with computer science and applied mathematics, and with the operation of a computer center. The major physics research activity is in high-energy physics, although there is a relatively small program of medium-energy research. The High Energy Physics research program in the Physics Division is concerned with fundamental research which will enable man to comprehend the nature of the physical world. The major effort is now directed toward experiments with positron-electron colliding beam at PEP. The Medium Energy Physics program is concerned with research using mesons and nucleons to probe the properties of matter. This research is concerned with the study of nuclear structure, nuclear reactions, and the interactions between nuclei and electromagnetic radiation and mesons. The Computer Science and Applied Mathematics Department engages in research in a variety of computer science and mathematics disciplines. Work in computer science and applied mathematics includes construction of data bases, computer graphics, computational physics and data analysis, mathematical modeling, and mathematical analysis of differential and integral equations resulting from physical problems. The Computer Center provides large-scale computational support to LBL's scientific programs. Descriptions of the various activities are quite short; references to published results are given. 24 figures. (RWR)

  8. Computer simulations for a deceleration and radio frequency quadrupole instrument for accelerator ion beams

    Energy Technology Data Exchange (ETDEWEB)

    Eliades, J.A., E-mail: j.eliades@alum.utoronto.ca; Kim, J.K.; Song, J.H.; Yu, B.Y.

    2015-10-15

    Radio-frequency quadrupole (RFQ) technology incorporated into the low energy ion beam line of an accelerator system can greatly broaden the range of applications and facilitate unique experimental capabilities. However, negative ion beams with tens of keV of kinetic energy and with large emittances and energy spreads must first be decelerated down to <100 eV for ion–gas interactions, placing special demands on the deceleration optics and RFQ design. A system with large analyte transmission in the presence of gas has so far proven challenging. Presented are computer simulations using SIMION 8.1 for an ion deceleration and RFQ ion guide instrument design. The code included user-defined gas pressure gradients and threshold energies for ion–gas collisional losses. Results suggest a 3 mm diameter, 35 keV ³⁶Cl⁻ ion beam with 8 eV full-width half maximum Gaussian energy spread and 35 mrad angular divergence can be efficiently decelerated and then cooled in He gas, with a maximum pressure of 7 mTorr, to 2 eV within 450 mm in the RFQs. Vacuum transmissions were 100%. Ion energy distributions at initial RFQ capture are shown to be much larger than the average value expected from the deceleration potential, and this appears to be a general result arising from kinetic energy gain in the RFQ field. In these simulations, a potential for deceleration to 25 eV resulted in a 30 eV average energy distribution with a small fraction of ions >70 eV.

  9. Accelerator physics and radiometric properties of superconducting wavelength shifters; Beschleunigerphysik und radiometrische Eigenschaften supraleitender Wellenlaengenschieber

    Energy Technology Data Exchange (ETDEWEB)

    Scheer, Michael

    2008-11-17

    The subject of this thesis is the operation of wavelength shifters at electron storage rings and their use in radiometry. The basic aspects of the radiometry, the technical requirements, the influence of wavelength shifters on the storage ring, and results of first measurements are presented for a device installed at BESSY. Most of the calculations are carried out by the program WAVE, which has been developed within this thesis. WAVE allows the synchrotron radiation spectra of wavelength shifters to be calculated within a relative uncertainty of 1/100,000. The properties of wavelength shifters in terms of accelerator physics, as well as a generating function for symplectic tracking calculations, can also be calculated by WAVE. The latter was implemented in the tracking code BETA to investigate the influence of insertion devices on the dynamic aperture and emittance of the storage ring. These studies led to the concept of alternating low- and high-beta sections at BESSY-II, which make it possible to operate superconducting insertion devices without a significant distortion of the magnetic optics. To investigate the experimental aspects of the radiometry at wavelength shifters, a program based on the Monte Carlo code GEANT4 has been developed. It allows simulation of the radiometrical measurements and of the absorption properties of detectors. With the developed codes, the first radiometrical measurements by the PTB have been analysed. A comparison of measurements and calculations shows a reasonable agreement, with deviations of about five percent in the spectral range of 40-60 keV behind a 1-mm-Cu filter. A better agreement was found between 20 keV and 80 keV without the Cu filter. In this case the measured data agreed within a systematic uncertainty of two percent with the results of the calculations. (orig.)

  10. FDTD Acceleration for Cylindrical Resonator Design Based on the Hybrid of Single and Double Precision Floating-Point Computation

    Directory of Open Access Journals (Sweden)

    Hasitha Muthumala Waidyasooriya

    2014-01-01

    Acceleration of FDTD (finite-difference time-domain) is very important for fields such as computational electromagnetic simulation. We consider an FDTD simulation model of cylindrical resonator design that requires double-precision floating-point and cannot be done using single precision. Conventional FDTD acceleration methods have a common problem of memory-bandwidth limitation due to the large amount of parallel data access. To overcome this problem, we propose a hybrid single- and double-precision floating-point computation method that reduces the data-transfer amount. We analyze the characteristics of the FDTD simulation to find out when we can use single precision instead of double precision. According to the experimental results, we achieved a speed-up of over 15 times compared to the single-core CPU implementation and over 1.52 times compared to the conventional GPU-based implementation.
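
    The hybrid idea (store the field arrays in single precision to reduce data transfer, but carry out the update arithmetic in double precision) can be sketched on a 1-D toy problem as below; this is an assumption-laden illustration, not the cylindrical-resonator kernel of the paper:

        import numpy as np

        def fdtd_1d_hybrid(nz=200, nt=500):
            """1-D FDTD toy: fields stored as float32 to save bandwidth, while each
            curl difference is evaluated in float64 before being written back."""
            ez = np.zeros(nz, dtype=np.float32)
            hy = np.zeros(nz, dtype=np.float32)
            c = 0.5  # Courant factor in normalized units
            for n in range(nt):
                d_ez = ez[1:].astype(np.float64) - ez[:-1].astype(np.float64)
                hy[:-1] += (c * d_ez).astype(np.float32)
                d_hy = hy[1:].astype(np.float64) - hy[:-1].astype(np.float64)
                ez[1:] += (c * d_hy).astype(np.float32)
                ez[nz // 2] += np.float32(np.exp(-((n - 30) / 10.0) ** 2))  # soft Gaussian source
            return ez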

  11. Effects of dimensionality on computer simulations of laser-ion acceleration: When are three-dimensional simulations needed?

    Science.gov (United States)

    Yin, L.; Stark, D. J.; Albright, B. J.

    2016-10-01

    Laser-ion acceleration via relativistic induced transparency provides an effective means to accelerate ions to tens of MeV/nucleon over distances of tens of μm. These ion sources may enable a host of applications, from fast ignition and x-ray sources to medical treatments. Understanding whether two-dimensional (2D) PIC simulations can capture the relevant 3D physics is important to the development of a predictive capability for short-pulse laser-ion acceleration and for economical design studies for applications of these accelerators. In this work, PIC simulations are performed in 3D and in 2D where the direction of the laser polarization is in the simulation plane (2D-P) and out-of-plane (2D-S). Our studies indicate modeling sensitivity to dimensionality and laser polarization. Differences arise in energy partition, electron heating, ion peak energy, and ion spectral shape. 2D-P simulations are found to over-predict electron heating and ion peak energy. The origin of these differences and the extent to which 2D simulations may capture the key acceleration dynamics will be discussed. Work performed under the auspices of the U.S. DOE by the LANS, LLC, Los Alamos National Laboratory under Contract No. DE-AC52-06NA25396. Funding provided by the Los Alamos National Laboratory Directed Research and Development Program.

  12. Petascale computation of multi-physics seismic simulations

    Science.gov (United States)

    Gabriel, Alice-Agnes; Madden, Elizabeth H.; Ulrich, Thomas; Wollherr, Stephanie; Duru, Kenneth C.

    2017-04-01

    Capturing the observed complexity of earthquake sources in concurrence with seismic wave propagation simulations is an inherently multi-scale, multi-physics problem. In this presentation, we show simulations of earthquake scenarios resolving high-detail dynamic rupture evolution and high frequency ground motion. The simulations combine a multitude of representations of model complexity, such as non-linear fault friction, thermal and fluid effects, heterogeneous fault stress and fault strength initial conditions, fault curvature and roughness, on- and off-fault non-elastic failure to capture dynamic rupture behavior at the source; and seismic wave attenuation, 3D subsurface structure and bathymetry impacting seismic wave propagation. Performing such scenarios at the necessary spatio-temporal resolution requires highly optimized and massively parallel simulation tools which can efficiently exploit HPC facilities. Our up to multi-PetaFLOP simulations are performed with SeisSol (www.seissol.org), an open-source software package based on an ADER-Discontinuous Galerkin (DG) scheme solving the seismic wave equations in velocity-stress formulation in elastic, viscoelastic, and viscoplastic media with high-order accuracy in time and space. Our flux-based implementation of frictional failure remains free of spurious oscillations. Tetrahedral unstructured meshes allow for complicated model geometry. SeisSol has been optimized on all software levels, including: assembler-level DG kernels which obtain 50% peak performance on some of the largest supercomputers worldwide; an overlapping MPI-OpenMP parallelization shadowing the multiphysics computations; usage of local time stepping; parallel input and output schemes and direct interfaces to community standard data formats. All these factors combine to minimise the time-to-solution. The results presented highlight the fact that modern numerical methods and hardware-aware optimization for modern supercomputers are essential

  13. Automated and Assistive Tools for Accelerated Code migration of Scientific Computing on to Heterogeneous MultiCore Systems

    Science.gov (United States)

    2017-04-13

    AFRL-AFOSR-UK-TR-2017-0029, Automated and Assistive Tools for Accelerated Code Migration of Scientific Computing on to Heterogeneous MultiCore Systems (contract FA8655-12-1-2021, grant 12-2021, program element 61102F). The work addressed the automated and assisted migration of scientific computing code to heterogeneous multicore systems. The approach was based on the OmpSs programming model and the performance tools that constitute two strategic

  14. Advances in computed radiography systems and their physical imaging characteristics.

    Science.gov (United States)

    Cowen, A R; Davies, A G; Kengyelics, S M

    2007-12-01

    Radiological imaging is progressing towards an all-digital future, across the spectrum of medical imaging techniques. Computed radiography (CR) has provided a ready pathway from screen film to digital radiography and a convenient entry point to PACS. This review briefly revisits the principles of modern CR systems and their physical imaging characteristics. Wide dynamic range and digital image enhancement are well-established benefits of CR, which lend themselves to improved image presentation and reduced rates of repeat exposures. However, in its original form CR offered limited scope for reducing the radiation dose per radiographic exposure, compared with screen film. Recent innovations in CR, including the use of dual-sided image readout and channelled storage phosphor have eased these concerns. For example, introduction of these technologies has improved detective quantum efficiency (DQE) by approximately 50 and 100%, respectively, compared with standard CR. As a result CR currently affords greater scope for reducing patient dose, and provides a more substantive challenge to the new solid-state, flat-panel, digital radiography detectors.

  15. Physics-Based Computational Algorithm for the Multi-Fluid Plasma Model

    Science.gov (United States)

    2014-06-30

    AFRL-OSR-VA-TR-2014-0310, Physics-Based Computational Algorithm for the Multi-Fluid Plasma Model, Uri Shumlak, University of Washington, Final Report. Cited references include a Riemann solver for the two-fluid plasma model (Journal of Computational Physics, 187(2):620-638, 2003) and P. L. Roe, Approximate Riemann solvers, parameter vectors and difference schemes (Journal of Computational Physics).

  16. Deep learning for teaching university physics to computers

    Science.gov (United States)

    Davis, Jackson P.; Price, Watt A.

    2017-04-01

    Attempts to improve physics instruction suggest that there is a fundamental barrier to the human learning of physics. We argue that the new capabilities of artificial intelligence justify a reconsideration not of how we teach physics but to whom we teach physics.

  17. FUNCTIONALITY OF STUDENTS WITH PHYSICAL DEFICIENCY IN WRITING AND COMPUTER USE ACTIVITIES

    National Research Council Canada - National Science Library

    Fernanda Matrigani Mercado Gutierres de Queiroz; Lígia Maria Presumido Braccialli

    2017-01-01

    ... in: Describe the functionality of students with physical disabilities, in the Multifunctional Resource Rooms, for activities of writing and computer use, according to the perception of the teachers...

  18. Report of the first three years of the initiative in accelerator R and D for particle physics

    CERN Document Server

    Norton, P

    2002-01-01

    The 'Particle Accelerators for Particle Physics' initiative was jointly funded by PPARC and CCLRC for three years, starting in 1999. The main objective was to re-create and re-establish within the UK the tradition of research and development into accelerator technology for particle physics applications, either at the high energy or the high luminosity frontier. This would enable the UK to evaluate for itself how it would wish to contribute to future international particle physics facilities, with the option of making the contribution to the machine 'in kind' (as is done with the detectors). The major international particle physics laboratories (CERN, DESY, FNAL, SLAC) with whom the UK particle physics community has worked over the past twenty years warmly welcomed this initiative. The scientific case for the next generation of facilities beyond the LHC, HERA, the B-factories and the Tevatron is now well developed. There is general agreement, underlined by the OECD Global Science Forum report by the Consultati...

  19. Conceptual designs of two petawatt-class pulsed-power accelerators for high-energy-density-physics experiments

    Science.gov (United States)

    Stygar, W. A.; Awe, T. J.; Bailey, J. E.; Bennett, N. L.; Breden, E. W.; Campbell, E. M.; Clark, R. E.; Cooper, R. A.; Cuneo, M. E.; Ennis, J. B.; Fehl, D. L.; Genoni, T. C.; Gomez, M. R.; Greiser, G. W.; Gruner, F. R.; Herrmann, M. C.; Hutsel, B. T.; Jennings, C. A.; Jobe, D. O.; Jones, B. M.; Jones, M. C.; Jones, P. A.; Knapp, P. F.; Lash, J. S.; LeChien, K. R.; Leckbee, J. J.; Leeper, R. J.; Lewis, S. A.; Long, F. W.; Lucero, D. J.; Madrid, E. A.; Martin, M. R.; Matzen, M. K.; Mazarakis, M. G.; McBride, R. D.; McKee, G. R.; Miller, C. L.; Moore, J. K.; Mostrom, C. B.; Mulville, T. D.; Peterson, K. J.; Porter, J. L.; Reisman, D. B.; Rochau, G. A.; Rochau, G. E.; Rose, D. V.; Rovang, D. C.; Savage, M. E.; Sceiford, M. E.; Schmit, P. F.; Schneider, R. F.; Schwarz, J.; Sefkow, A. B.; Sinars, D. B.; Slutz, S. A.; Spielman, R. B.; Stoltzfus, B. S.; Thoma, C.; Vesey, R. A.; Wakeland, P. E.; Welch, D. R.; Wisher, M. L.; Woodworth, J. R.

    2015-11-01

    We have developed conceptual designs of two petawatt-class pulsed-power accelerators: Z 300 and Z 800. The designs are based on an accelerator architecture that is founded on two concepts: single-stage electrical-pulse compression and impedance matching [Phys. Rev. ST Accel. Beams 10, 030401 (2007)]. The prime power source of each machine consists of 90 linear-transformer-driver (LTD) modules. Each module comprises LTD cavities connected electrically in series, each of which is powered by 5-GW LTD bricks connected electrically in parallel. (A brick comprises a single switch and two capacitors in series.) Six water-insulated radial-transmission-line impedance transformers transport the power generated by the modules to a six-level vacuum-insulator stack. The stack serves as the accelerator's water-vacuum interface. The stack is connected to six conical outer magnetically insulated vacuum transmission lines (MITLs), which are joined in parallel at a 10-cm radius by a triple-post-hole vacuum convolute. The convolute sums the electrical currents at the outputs of the six outer MITLs, and delivers the combined current to a single short inner MITL. The inner MITL transmits the combined current to the accelerator's physics-package load. Z 300 is 35 m in diameter and stores 48 MJ of electrical energy in its LTD capacitors. The accelerator generates 320 TW of electrical power at the output of the LTD system, and delivers 48 MA in 154 ns to a magnetized-liner inertial-fusion (MagLIF) target [Phys. Plasmas 17, 056303 (2010)]. The peak electrical power at the MagLIF target is 870 TW, which is the highest power throughout the accelerator. Power amplification is accomplished by the centrally located vacuum section, which serves as an intermediate inductive-energy-storage device. The principal goal of Z 300 is to achieve thermonuclear ignition; i.e., a fusion yield that exceeds the energy transmitted by the accelerator to the liner. 2D magnetohydrodynamic (MHD) simulations

  20. J-PAS: The Javalambre Physics of the Accelerated Universe Astrophysical Survey

    Science.gov (United States)

    Cepa, J.; Benítez, N.; Dupke, R.; Moles, M.; Sodré, L.; Cenarro, A. J.; Marín-Franch, A.; Taylor, K.; Cristóbal, D.; Fernández-Soto, A.; Mendes de Oliveira, C.; Abramo, L. R.; Alcaniz, J. S.; Overzier, R.; Hernández-Monteagudo, A.; Alfaro, E. J.; Kanaan, A.; Carvano, M.; Reis, R. R. R.; J-PAS Team

    2016-10-01

    The Javalambre Physics of the Accelerated Universe Astrophysical Survey (J-PAS) is a narrow band, very wide field Cosmological Survey to be carried out from the Javalambre Observatory in Spain with a purpose-built, dedicated 2.5 m telescope and a 4.7 sq.deg. camera with 1.2 Gpix. Starting in late 2016, J-PAS will observe 8500 sq.deg. of Northern Sky and measure Δz ~ 0.003(1+z) photo-z for 9×10^7 LRG and ELG galaxies plus several million QSOs, sampling an effective volume of ~14 Gpc^3 up to z=1.3 and becoming the first radial BAO experiment to reach Stage IV. J-PAS will detect 7×10^5 galaxy clusters and groups, setting constraints on Dark Energy which rival those obtained from its BAO measurements. Thanks to the superb characteristics of the site (seeing ~0.7 arcsec), J-PAS is expected to obtain a deep, sub-arcsec image of the Northern sky, which combined with its unique photo-z precision will produce one of the most powerful cosmological lensing surveys before the arrival of Euclid. J-PAS's unprecedented spectral time domain information will enable a self-contained SN survey that, without the need for external spectroscopic follow-up, will detect, classify and measure σ_z ~ 0.5 redshifts for ~4000 SNeIa and ~900 core-collapse SNe. The key to the J-PAS potential is its innovative approach: a contiguous system of 54 filters with 145 Å width, placed 100 Å apart over a multi-degree FoV is a powerful redshift machine, with the survey speed of a 4000 multiplexing low resolution spectrograph, but many times cheaper and much faster to build. The J-PAS camera is equivalent to a 4.7 sq.deg. IFU and it will produce a time-resolved, 3D image of the Northern Sky with a very wide range of Astrophysical applications in Galaxy Evolution, the nearby Universe and the study of resolved stellar populations.

  1. Performance analysis and acceleration of cross-correlation computation using FPGA implementation for digital signal processing

    Science.gov (United States)

    Selma, R.

    2016-09-01

    The paper describes a comparison of the cross-correlation computation speed of the most commonly used computation platforms (CPU, GPU) with an FPGA-based design. It also describes the structure of the cross-correlation unit implemented for testing purposes. A speedup of computations was achieved using the FPGA-based design, varying between 16 and 5400 times compared to CPU computations and between 3 and 175 times compared to GPU computations.
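
    For reference, the computation being accelerated is the ordinary cross-correlation; the sketch below shows a direct and an FFT-based CPU implementation in NumPy (signal length, delay, and noise level are hypothetical and unrelated to the paper's test cases).

        # Reference CPU computation of the cross-correlation that designs like the one in
        # the abstract accelerate. Sketch only; the signals are hypothetical.
        import numpy as np

        rng = np.random.default_rng(0)
        x = rng.standard_normal(4096)
        y = np.roll(x, 100) + 0.1 * rng.standard_normal(4096)   # delayed, noisy copy

        # direct definition: r[k] = sum_n y[n+k] * x[n]   (O(N^2))
        r_direct = np.correlate(y, x, mode="full")

        # FFT-based version (O(N log N)); negative lags appear wrapped at the end
        N = len(x) + len(y) - 1
        r_fft = np.fft.irfft(np.fft.rfft(y, N) * np.conj(np.fft.rfft(x, N)), N)

        lag = np.argmax(r_direct) - (len(x) - 1)
        print("estimated delay (samples):", lag)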

  2. Contribution to the algorithmic and efficient programming of new parallel architectures including accelerators for neutron physics and shielding computations; Contribution a l'algorithmique et a la programmation efficace des nouvelles architectures paralleles comportant des accelerateurs de calcul dans le domaine de la neutronique et de la radioprotection

    Energy Technology Data Exchange (ETDEWEB)

    Dubois, J.

    2011-10-13

    In science, simulation is a key process for research or validation. Modern computer technology allows faster numerical experiments, which are cheaper than real models. In the field of neutron simulation, the calculation of eigenvalues is one of the key challenges. The complexity of these problems is such that a lot of computing power may be necessary. The work of this thesis is first the evaluation of new computing hardware such as graphics cards or massively multi-core chips, and their application to eigenvalue problems for neutron simulation. Then, in order to address the massive parallelism of national supercomputers, we also study the use of asynchronous hybrid methods for solving eigenvalue problems with this very high level of parallelism. We then test the results of this research on several national supercomputers, such as the Titane hybrid machine of the Computing Center, Research and Technology (CCRT), the Curie machine of the Very Large Computing Centre (TGCC), currently being installed, and the Hopper machine at the Lawrence Berkeley National Laboratory (LBNL). We also run our experiments on local workstations to illustrate the relevance of this research for everyday use with local computing resources. (author)
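
    As a rough illustration of the kind of eigenvalue kernel discussed in the thesis (not the author's GPU or asynchronous hybrid code), the following power-iteration sketch estimates the dominant eigenvalue of a stand-in operator; the matrix-vector product inside the loop is the part such accelerators speed up.

        # Power iteration for a dominant-eigenvalue problem. The operator A is a random
        # stand-in for a discretized transport/fission operator; sizes are hypothetical.
        import numpy as np

        rng = np.random.default_rng(1)
        n = 2000
        A = rng.random((n, n)) / n + np.eye(n)   # hypothetical nonnegative operator

        x = np.ones(n)
        lam = 0.0
        for it in range(200):
            y = A @ x                  # the matrix-vector product dominates the cost
            lam_new = np.linalg.norm(y)
            x = y / lam_new
            if abs(lam_new - lam) < 1e-10:
                break
            lam = lam_new
        print("dominant eigenvalue estimate:", lam)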

  3. Physics design of a 100 keV acceleration grid system for the diagnostic neutral beam for international tokamak experimental reactor.

    Science.gov (United States)

    Singh, M J; De Esch, H P L

    2010-01-01

    This paper describes the physics design of a 100 keV, 60 A H(-) accelerator for the diagnostic neutral beam (DNB) for international tokamak experimental reactor (ITER). The accelerator is a three grid system comprising of 1280 apertures, grouped in 16 groups with 80 apertures per beam group. Several computer codes have been used to optimize the design which follows the same philosophy as the ITER Design Description Document (DDD) 5.3 and the 1 MeV heating and current drive beam line [R. Hemsworth, H. Decamps, J. Graceffa, B. Schunke, M. Tanaka, M. Dremel, A. Tanga, H. P. L. De Esch, F. Geli, J. Milnes, T. Inoue, D. Marcuzzi, P. Sonato, and P. Zaccaria, Nucl. Fusion 49, 045006 (2009)]. The aperture shapes, intergrid distances, and the extractor voltage have been optimized to minimize the beamlet divergence. To suppress the acceleration of coextracted electrons, permanent magnets have been incorporated in the extraction grid, downstream of the cooling water channels. The electron power loads on the extractor and the grounded grids have been calculated assuming 1 coextracted electron per ion. The beamlet divergence is calculated to be 4 mrad. At present the design for the filter field of the RF based ion sources for ITER is not fixed, therefore a few configurations of the same have been considered. Their effect on the transmission of the electrons and beams through the accelerator has been studied. The OPERA-3D code has been used to estimate the aperture offset steering constant of the grounded grid and the extraction grid, the space charge interaction between the beamlets and the kerb design required to compensate for this interaction. All beamlets in the DNB must be focused to a single point in the duct, 20.665 m from the grounded grid, and the required geometrical aimings and aperture offsets have been calculated.

  5. The Physics of Teams: Interdependence, Measurable Entropy, and Computational Emotion

    Directory of Open Access Journals (Sweden)

    William F. Lawless

    2017-08-01

    Full Text Available Most of the social sciences, including psychology, economics, and subjective social network theory, are modeled on the individual, leaving the field not only a-theoretical, but also inapplicable to a physics of hybrid teams, where hybrid refers to arbitrarily combining humans, machines, and robots into a team to perform a dedicated mission (e.g., military, business, entertainment) or to solve a targeted problem (e.g., with scientists, engineers, entrepreneurs). As a common social science practice, the ingredient at the heart of the social interaction, interdependence, is statistically removed prior to the replication of social experiments; but, as an analogy, statistically removing social interdependence to better study the individual is like statistically removing quantum effects as a complication to the study of the atom. Further, in applications of Shannon's information theory to teams, the effects of interdependence are minimized, but even there, interdependence is how classical information is transmitted. Consequently, numerous mistakes are made when applying non-interdependent models to policies, the law and regulations, impeding social welfare by failing to exploit the power of social interdependence. For example, adding redundancy to human teams is thought by subjective social network theorists to improve the efficiency of a network, easily contradicted by our finding that redundancy is strongly associated with corruption in non-free markets. Thus, built atop the individual, most of the social sciences, economics, and social network theory have little if anything to contribute to the engineering of hybrid teams. In defense of the social sciences, the mathematical physics of interdependence is elusive, non-intuitive and non-rational. However, by replacing determinism with bistable states, interdependence at the social level mirrors entanglement at the quantum level, suggesting the applicability of quantum tools for social science. We report

  6. 2015 Final Reports from the Los Alamos National Laboratory Computational Physics Student Summer Workshop

    Energy Technology Data Exchange (ETDEWEB)

    Runnels, Scott Robert [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Caldwell, Wendy [Arizona State Univ., Mesa, AZ (United States); Brown, Barton Jed [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Pederson, Clark [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Brown, Justin [Univ. of California, Santa Cruz, CA (United States); Burrill, Daniel [Univ. of Vermont, Burlington, VT (United States); Feinblum, David [Univ. of California, Irvine, CA (United States); Hyde, David [SLAC National Accelerator Lab., Menlo Park, CA (United States). Stanford Institute for Materials and Energy Science (SIMES); Levick, Nathan [Univ. of New Mexico, Albuquerque, NM (United States); Lyngaas, Isaac [Florida State Univ., Tallahassee, FL (United States); Maeng, Brad [Univ. of Michigan, Ann Arbor, MI (United States); Reed, Richard LeRoy [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Sarno-Smith, Lois [Univ. of Michigan, Ann Arbor, MI (United States); Shohet, Gil [Univ. of Illinois, Urbana-Champaign, IL (United States); Skarda, Jinhie [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Stevens, Josey [Missouri Univ. of Science and Technology, Rolla, MO (United States); Zeppetello, Lucas [Columbia Univ., New York, NY (United States); Grossman-Ponemon, Benjamin [Stanford Univ., CA (United States); Bottini, Joseph Larkin [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Loudon, Tyson Shane [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); VanGessel, Francis Gilbert [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Nagaraj, Sriram [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Price, Jacob [Univ. of Washington, Seattle, WA (United States)

    2015-10-15

    The two primary purposes of LANL’s Computational Physics Student Summer Workshop are (1) To educate graduate and exceptional undergraduate students in the challenges and applications of computational physics of interest to LANL, and (2) Entice their interest toward those challenges. Computational physics is emerging as a discipline in its own right, combining expertise in mathematics, physics, and computer science. The mathematical aspects focus on numerical methods for solving equations on the computer as well as developing test problems with analytical solutions. The physics aspects are very broad, ranging from low-temperature material modeling to extremely high temperature plasma physics, radiation transport and neutron transport. The computer science issues are concerned with matching numerical algorithms to emerging architectures and maintaining the quality of extremely large codes built to perform multi-physics calculations. Although graduate programs associated with computational physics are emerging, it is apparent that the pool of U.S. citizens in this multi-disciplinary field is relatively small and is typically not focused on the aspects that are of primary interest to LANL. Furthermore, more structured foundations for LANL interaction with universities in computational physics is needed; historically interactions rely heavily on individuals’ personalities and personal contacts. Thus a tertiary purpose of the Summer Workshop is to build an educational network of LANL researchers, university professors, and emerging students to advance the field and LANL’s involvement in it. This report includes both the background for the program and the reports from the students.

  7. 2016 Final Reports from the Los Alamos National Laboratory Computational Physics Student Summer Workshop

    Energy Technology Data Exchange (ETDEWEB)

    Runnels, Scott Robert [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Bachrach, Harrison Ian [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Carlson, Nils [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Collier, Angela [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Dumas, William [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Fankell, Douglas [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Ferris, Natalie [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Gonzalez, Francisco [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Griffith, Alec [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Guston, Brandon [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Kenyon, Connor [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Li, Benson [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Mookerjee, Adaleena [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Parkinson, Christian [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Peck, Hailee [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Peters, Evan [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Poondla, Yasvanth [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Rogers, Brandon [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Shaffer, Nathaniel [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Trettel, Andrew [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Valaitis, Sonata Mae [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Venzke, Joel Aaron [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Black, Mason [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Demircan, Samet [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Holladay, Robert Tyler [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-09-22

    The two primary purposes of LANL’s Computational Physics Student Summer Workshop are (1) To educate graduate and exceptional undergraduate students in the challenges and applications of computational physics of interest to LANL, and (2) Entice their interest toward those challenges. Computational physics is emerging as a discipline in its own right, combining expertise in mathematics, physics, and computer science. The mathematical aspects focus on numerical methods for solving equations on the computer as well as developing test problems with analytical solutions. The physics aspects are very broad, ranging from low-temperature material modeling to extremely high temperature plasma physics, radiation transport and neutron transport. The computer science issues are concerned with matching numerical algorithms to emerging architectures and maintaining the quality of extremely large codes built to perform multi-physics calculations. Although graduate programs associated with computational physics are emerging, it is apparent that the pool of U.S. citizens in this multi-disciplinary field is relatively small and is typically not focused on the aspects that are of primary interest to LANL. Furthermore, more structured foundations for LANL interaction with universities in computational physics is needed; historically interactions rely heavily on individuals’ personalities and personal contacts. Thus a tertiary purpose of the Summer Workshop is to build an educational network of LANL researchers, university professors, and emerging students to advance the field and LANL’s involvement in it.

  8. Towards a novel laser-driven method of exotic nuclei extraction-acceleration for fundamental physics and technology

    CERN Document Server

    Nishiuchi, Mamiko; Nishio, Katsuhisa; Orlandi, Riccard; Sako, Hiroyuki; Pikuz, Tatiana A; Faenov, Anatory Ya; Esirkepov, Timur Zh; Pirozhkov, Alexander S; Matsukawa, Kenya; Sagisaka, Akito; Ogura, Koichi; Kanasaki, Masato; Kiriyama, Hiromitsu; Fukuda, Yuji; Koura, Hiroyuki; Kando, Masaki; Yamauchi, Tomoya; Watanabe, Yukinobu; Bulanov, Sergei V; Kondo, Kiminori; Imai, Kenichi; Nagamiya, Shoji

    2014-01-01

    The measurement of properties of exotic nuclei, essential for fundamental nuclear physics, now confronts a formidable challenge for contemporary radiofrequency accelerator technology. A promising option can be found in the combination of state-of-the-art high-intensity short-pulse laser systems and nuclear measurement techniques. We propose a novel Laser-driven Exotic Nuclei extraction-acceleration method (LENex): a femtosecond petawatt laser, irradiating a target bombarded by an external ion beam, extracts highly charged nuclear reaction products from the target and accelerates them to a few GeV. Here a proof-of-principle experiment of LENex is presented: a few hundred-terawatt laser focused onto an aluminum foil, with a small amount of iron simulating nuclear reaction products, extracts almost fully stripped iron nuclei and accelerates them up to 0.9 GeV. Our experiments and numerical simulations show that short-lived, heavy exotic nuclei, with a much larger charge-to-mass ratio than in conventional technology, can ...

  9. Analog computation through high-dimensional physical chaotic neuro-dynamics

    Science.gov (United States)

    Horio, Yoshihiko; Aihara, Kazuyuki

    2008-07-01

    Conventional von Neumann computers have difficulty in solving complex and ill-posed real-world problems. However, living organisms often face such problems in real life, and must quickly obtain suitable solutions through physical, dynamical, and collective computations involving vast assemblies of neurons. These highly parallel computations through high-dimensional dynamics (computation through dynamics) are completely different from the numerical computations on von Neumann computers (computation through algorithms). In this paper, we explore a novel computational mechanism with high-dimensional physical chaotic neuro-dynamics. We physically constructed two hardware prototypes using analog chaotic-neuron integrated circuits. These systems combine analog computations with chaotic neuro-dynamics and digital computation through algorithms. We used quadratic assignment problems (QAPs) as benchmarks. The first prototype utilizes an analog chaotic neural network with 800-dimensional dynamics. An external algorithm constructs a solution for a QAP using the internal dynamics of the network. In the second system, 300-dimensional analog chaotic neuro-dynamics drive a tabu-search algorithm. We demonstrate experimentally that both systems efficiently solve QAPs through physical chaotic dynamics. We also qualitatively analyze the underlying mechanism of the highly parallel and collective analog computations by observing global and local dynamics. Furthermore, we introduce spatial and temporal mutual information to quantitatively evaluate the system dynamics. The experimental results confirm the validity and efficiency of the proposed computational paradigm with the physical analog chaotic neuro-dynamics.
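
    The benchmark mentioned above, the quadratic assignment problem, has a simple objective; as a point of comparison only (the chaotic analog hardware is of course not reproducible in a few lines), the sketch below evaluates the QAP cost for random flow and distance matrices and runs a plain random-search baseline.

        # QAP objective evaluation with hypothetical flow/distance matrices.
        import numpy as np

        rng = np.random.default_rng(4)
        n = 12
        F = rng.integers(0, 10, (n, n))    # flow between facilities (hypothetical)
        D = rng.integers(0, 10, (n, n))    # distance between locations (hypothetical)

        def qap_cost(perm):
            # QAP objective: sum_{i,j} F[i, j] * D[perm[i], perm[j]]
            return int(np.sum(F * D[np.ix_(perm, perm)]))

        # plain random-search baseline; the paper's systems replace this search
        # with analog chaotic neuro-dynamics
        best = min(qap_cost(rng.permutation(n)) for _ in range(10_000))
        print("best random-search QAP cost:", best)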

  10. Analyzing Log Files to Predict Students' Problem Solving Performance in a Computer-Based Physics Tutor

    Science.gov (United States)

    Lee, Young-Jin

    2015-01-01

    This study investigates whether information saved in the log files of a computer-based tutor can be used to predict the problem solving performance of students. The log files of a computer-based physics tutoring environment called Andes Physics Tutor were analyzed to build a logistic regression model that predicted success and failure of students'…
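
    The abstract does not list the exact log-file features used; the following sketch, with hypothetical features (hints requested, errors, time on task) and synthetic labels, only shows the general shape of such a logistic-regression model.

        # Log-file-based logistic regression sketch; feature names and data are hypothetical.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        n_students = 200
        hints  = rng.poisson(3, n_students)          # hints requested per problem
        errors = rng.poisson(2, n_students)          # incorrect entries per problem
        time_s = rng.normal(300, 60, n_students)     # seconds spent per problem

        X = np.column_stack([hints, errors, time_s])
        # hypothetical label: problem solved correctly (1) or not (0)
        y = (rng.random(n_students) < 1 / (1 + np.exp(0.4 * hints + 0.3 * errors - 2))).astype(int)

        model = LogisticRegression(max_iter=1000).fit(X, y)
        print("coefficients:", model.coef_, "intercept:", model.intercept_)
        print("predicted success probability for a new student:",
              model.predict_proba([[2, 1, 280.0]])[0, 1])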

  11. Physics Perspectives for a Future Circular Collider: FCC-hh - Accelerator & Detectors

    CERN Document Server

    CERN. Geneva

    2017-01-01

    The lectures will briefly discuss the parameters of a Future Circular Collider, before addressing in detail the physics perspectives and the challenges for the experiments and detector systems. The main focus will be on ee and pp collisions, but opportunities for e—p physics will also be covered. The FCC physics perspectives will be presented with reference to the ongoing LHC programme, including the physics potential from future upgrades to the LHC in luminosity and possibly energy.

  12. Physics of Phase Space Matching for Staging Plasma and Traditional Accelerator Components Using Longitudinally Tailored Plasma Profiles.

    Science.gov (United States)

    Xu, X L; Hua, J F; Wu, Y P; Zhang, C J; Li, F; Wan, Y; Pai, C-H; Lu, W; An, W; Yu, P; Hogan, M J; Joshi, C; Mori, W B

    2016-03-25

    Phase space matching between two plasma-based accelerator (PBA) stages and between a PBA and a traditional accelerator component is a critical issue for emittance preservation. The drastic differences of the transverse focusing strengths as the beam propagates between stages and components may lead to a catastrophic emittance growth even when there is a small energy spread. We propose using the linear focusing forces from nonlinear wakes in longitudinally tailored plasma density profiles to control phase space matching between sections with negligible emittance growth. Several profiles are considered and theoretical analysis and particle-in-cell simulations show how these structures may work in four different scenarios. Good agreement between theory and simulation is obtained, and it is found that the adiabatic approximation misses important physics even for long profiles.

  13. Physics design of a CW high-power proton Linac for accelerator-driven system

    Indian Academy of Sciences (India)

    Rajni Pande; Shweta Roy; S V L S Rao; P Singh; S Kailas

    2012-02-01

    Accelerator-driven systems (ADS) have evoked a lot of interest worldwide because of their capability to incinerate minor actinide (MA) and long-lived fission product (LLFP) radiotoxic waste and their ability to utilize thorium as an alternative nuclear fuel. One of the main subsystems of ADS is a high-energy (∼1 GeV), high-current (∼30 mA) CW proton Linac. The accelerator for ADS should have high efficiency and reliability and very low beam losses to allow hands-on maintenance. With these criteria, beam dynamics simulations for a 1 GeV, 30 mA proton Linac have been carried out. The Linac consists of normal-conducting radio-frequency quadrupole (RFQ), drift tube linac (DTL) and coupled cavity drift tube Linac (CCDTL) structures that accelerate the beam to about 100 MeV followed by superconducting (SC) elliptical cavities, which accelerate the beam from 100 MeV to 1 GeV. The details of the design are presented in this paper.

  14. The quantum measurement problem and physical reality: a computation theoretic perspective

    CERN Document Server

    Srikanth, R

    2006-01-01

    Is the universe computable? If yes, is it computationally a polynomial place? In standard quantum mechanics, which permits infinite parallelism and the infinitely precise specification of states, a negative answer to both questions is not ruled out. On the other hand, computational problems for which no efficient algorithm is known do not seem to be efficiently solvable by any physical means; likewise, problems known to be algorithmically uncomputable do not seem to be computable by any physical means. We suggest that this close correspondence between the efficiency and power of abstract algorithms on the one hand, and physical computers on the other, can be explained by assuming that the universe is algorithmic; that is, that physical reality is the product of discrete sub-physical information processing equivalent to the actions of probabilistic Turing machines. Support for this viewpoint comes from a recently proposed model of quantum measurement, according to which classicality arises from a finite upper ...

  15. Observed differences in upper extremity forces, muscle efforts, postures, velocities and accelerations across computer activities in a field study of office workers

    NARCIS (Netherlands)

    Garza, J.L.B.; Eijckelhof, B.H.W.; Johnson, P.W.; Raina, S.M.; Rynell, P.W.; Huysmans, M.A.; Dieën, J.H. van; Beek, A.J. van der; Blatter, B.M.; Dennerlein, J.T.

    2012-01-01

    This study, a part of the PRedicting Occupational biomechanics in OFfice workers (PROOF) study, investigated whether there are differences in field-measured forces, muscle efforts, postures, velocities and accelerations across computer activities. These parameters were measured continuously for 120

  17. A computational approach for identifying the chemical factors involved in the glycosaminoglycans-mediated acceleration of amyloid fibril formation.

    Directory of Open Access Journals (Sweden)

    Elodie Monsellier

    Full Text Available BACKGROUND: Amyloid fibril formation is the hallmark of many human diseases, including Alzheimer's disease, type II diabetes and amyloidosis. Amyloid fibrils deposit in the extracellular space and generally co-localize with the glycosaminoglycans (GAGs) of the basement membrane. GAGs have been shown to accelerate the formation of amyloid fibrils in vitro for a number of protein systems. The large amount of data accumulated so far has created the grounds for the construction of a database on the effects of a number of GAGs on different proteins. METHODOLOGY/PRINCIPAL FINDINGS: In this study, we have constructed such a database and have used a computational approach that uses a combination of single-parameter and multivariate analyses to identify the main chemical factors that determine the GAG-induced acceleration of amyloid formation. We show that the GAG accelerating effect is mainly governed by three parameters that account for three-fourths of the observed experimental variability: the GAG sulfation state, the solute molarity, and the ratio of protein and GAG molar concentrations. We then combined these three parameters into a single equation that predicts, with reasonable accuracy, the acceleration provided by a given GAG in a given condition. CONCLUSIONS/SIGNIFICANCE: In addition to shedding light on the chemical determinants of the protein:GAG interaction and to providing a novel mathematical predictive tool, our findings highlight the possibility that GAGs may not have such an accelerating effect on protein aggregation under the conditions existing in the basement membrane, given the values of salt molarity and protein:GAG molar ratio existing under such conditions.
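
    The paper's actual equation and fitted coefficients are not given in the abstract; the sketch below only illustrates the general form of such a model, fitting a linear combination of the three parameters (sulfation state, solute molarity, protein:GAG molar ratio) to synthetic acceleration data.

        # Linear multivariate fit combining three chemical parameters into one acceleration
        # estimate. Data and coefficients are hypothetical, not the paper's.
        import numpy as np

        rng = np.random.default_rng(2)
        n = 50
        sulfation = rng.integers(0, 4, n)        # sulfate groups per disaccharide (assumed scale)
        molarity  = rng.uniform(0.05, 0.5, n)    # solute molarity [M]
        ratio     = rng.uniform(0.1, 10.0, n)    # protein:GAG molar ratio
        accel     = 1.0 + 0.8*sulfation - 2.0*molarity + 0.05*ratio + rng.normal(0, 0.2, n)

        # least-squares fit: accel ~ b0 + b1*sulfation + b2*molarity + b3*ratio
        X = np.column_stack([np.ones(n), sulfation, molarity, ratio])
        coef, *_ = np.linalg.lstsq(X, accel, rcond=None)
        print("fitted coefficients b0..b3:", coef)
        print("predicted acceleration for (2 sulfates, 0.15 M, ratio 1):",
              np.array([1.0, 2, 0.15, 1.0]) @ coef)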

  18. A Model for Integrating Computation in Undergraduate Physics: An example from middle-division classical mechanics

    CERN Document Server

    Caballero, Marcos D

    2013-01-01

    Much of the research done by modern physicists would be impossible without the use of computation. And yet, while computation is a crucial tool of practicing physicists, physics curricula do not generally reflect its importance and utility. To more tightly connect undergraduate preparation with professional practice, we integrated computational instruction into middle-division classical mechanics at the University of Colorado Boulder. Our model for integration includes the construction of computational learning goals, the design of computational activities consistent with those goals, and the assessment of students' computational fluency. To assess students' computational fluency, we used open-ended computational projects in which students prepared reports describing a physical problem of their choosing. Many students chose projects from outside the domain of the course, and therefore, had to employ mathematical and computational techniques they had not yet been taught. After completing the project, most stud...

  19. Final Report for "Non-Accelerator Physics – Research in High Energy Physics: Dark Energy Research on DES"

    Energy Technology Data Exchange (ETDEWEB)

    Ritz, Steve [Univ. of California, Santa Cruz, CA (United States); Jeltema, Tesla [Univ. of California, Santa Cruz, CA (United States)

    2016-12-01

    One of the greatest mysteries in modern cosmology is the fact that the expansion of the universe is observed to be accelerating. This acceleration may stem from dark energy, an additional energy component of the universe, or may indicate that the theory of general relativity is incomplete on cosmological scales. The growth rate of large-scale structure in the universe and particularly the largest collapsed structures, clusters of galaxies, is highly sensitive to the underlying cosmology. Clusters will provide one of the single most precise methods of constraining dark energy with the ongoing Dark Energy Survey (DES). The accuracy of the cosmological constraints derived from DES clusters necessarily depends on having an optimized and well-calibrated algorithm for selecting clusters as well as an optical richness estimator whose mean relation and scatter compared to cluster mass are precisely known. Calibrating the galaxy cluster richness-mass relation and its scatter was the focus of the funded work. Specifically, we employ X-ray observations and optical spectroscopy with the Keck telescopes of optically-selected clusters to calibrate the relationship between optical richness (the number of galaxies in a cluster) and underlying mass. This work also probes aspects of cluster selection like the accuracy of cluster centering which are critical to weak lensing cluster studies.

  20. Graphics Processing Unit-Accelerated Code for Computing Second-Order Wiener Kernels and Spike-Triggered Covariance

    Science.gov (United States)

    Mano, Omer

    2017-01-01

    Sensory neuroscience seeks to understand and predict how sensory neurons respond to stimuli. Nonlinear components of neural responses are frequently characterized by the second-order Wiener kernel and the closely-related spike-triggered covariance (STC). Recent advances in data acquisition have made it increasingly common and computationally intensive to compute second-order Wiener kernels/STC matrices. In order to speed up this sort of analysis, we developed a graphics processing unit (GPU)-accelerated module that computes the second-order Wiener kernel of a system’s response to a stimulus. The generated kernel can be easily transformed for use in standard STC analyses. Our code speeds up such analyses by factors of over 100 relative to current methods that utilize central processing units (CPUs). It works on any modern GPU and may be integrated into many data analysis workflows. This module accelerates data analysis so that more time can be spent exploring parameter space and interpreting data. PMID:28068420
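
    For orientation, the quantity the GPU module computes is the standard spike-triggered covariance; a plain CPU/NumPy version (with a hypothetical white-noise stimulus and spike train) looks roughly like this.

        # Plain NumPy (CPU) sketch of the spike-triggered average/covariance computation that
        # the GPU module described above accelerates. Stimulus, spikes, and window length are
        # hypothetical.
        import numpy as np

        rng = np.random.default_rng(3)
        T, win = 100_000, 20                  # time samples, kernel window length
        stim   = rng.standard_normal(T)       # white-noise stimulus
        spikes = (rng.random(T) < 0.02)       # hypothetical spike train

        # collect the stimulus windows preceding each spike
        idx = np.flatnonzero(spikes)
        idx = idx[idx >= win]
        windows = np.stack([stim[i - win:i] for i in idx])   # shape (n_spikes, win)

        sta = windows.mean(axis=0)                            # spike-triggered average
        stc = np.cov(windows, rowvar=False)                   # spike-triggered covariance
        prior_cov = np.cov(
            np.stack([stim[i - win:i] for i in range(win, T, win)]), rowvar=False)

        # eigenvectors of (stc - prior_cov) indicate candidate nonlinear filters
        eigvals, eigvecs = np.linalg.eigh(stc - prior_cov)
        print("STA shape:", sta.shape, "largest |eigenvalue|:", np.abs(eigvals).max())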

  1. Computing requirements for high energy physics experiments at the LHC collider

    CERN Document Server

    Witek, Mariusz

    2002-01-01

    In this article the requirements for the future experiments of elementary particle physics are discussed. The nature of physics phenomena expected at the LHC collider at CERN leads to an unprecedented scale of the computing infrastructure for the data storage and analysis. The possible solution is based on the distributed computing model, and is presented within the context of the global unification of the computer resources as proposed by the GRID projects. (7 refs).

  2. Conceptual design of a 1013 -W pulsed-power accelerator for megajoule-class dynamic-material-physics experiments

    Science.gov (United States)

    Stygar, W. A.; Reisman, D. B.; Stoltzfus, B. S.; Austin, K. N.; Ao, T.; Benage, J. F.; Breden, E. W.; Cooper, R. A.; Cuneo, M. E.; Davis, J.-P.; Ennis, J. B.; Gard, P. D.; Greiser, G. W.; Gruner, F. R.; Haill, T. A.; Hutsel, B. T.; Jones, P. A.; LeChien, K. R.; Leckbee, J. J.; Lewis, S. A.; Lucero, D. J.; McKee, G. R.; Moore, J. K.; Mulville, T. D.; Muron, D. J.; Root, S.; Savage, M. E.; Sceiford, M. E.; Spielman, R. B.; Waisman, E. M.; Wisher, M. L.

    2016-07-01

    We have developed a conceptual design of a next-generation pulsed-power accelerator that is optimized for megajoule-class dynamic-material-physics experiments. Sufficient electrical energy is delivered by the accelerator to a physics load to achieve—within centimeter-scale samples—material pressures as high as 1 TPa. The accelerator design is based on an architecture that is founded on three concepts: single-stage electrical-pulse compression, impedance matching, and transit-time-isolated drive circuits. The prime power source of the accelerator consists of 600 independent impedance-matched Marx generators. Each Marx comprises eight 5.8-GW bricks connected electrically in series, and generates a 100-ns 46-GW electrical-power pulse. A 450-ns-long water-insulated coaxial-transmission-line impedance transformer transports the power generated by each Marx to a system of twelve 2.5-m-radius water-insulated conical transmission lines. The conical lines are connected electrically in parallel at a 66-cm radius by a water-insulated 45-post sextuple-post-hole convolute. The convolute sums the electrical currents at the outputs of the conical lines, and delivers the combined current to a single solid-dielectric-insulated radial transmission line. The radial line in turn transmits the combined current to the load. Since much of the accelerator is water insulated, we refer to it as Neptune. Neptune is 40 m in diameter, stores 4.8 MJ of electrical energy in its Marx capacitors, and generates 28 TW of peak electrical power. Since the Marxes are transit-time isolated from each other for 900 ns, they can be triggered at different times to construct, over an interval as long as 1 μs, the specific load-current time history required for a given experiment. Neptune delivers 1 MJ and 20 MA in a 380-ns current pulse to an 18-mΩ load; hence Neptune is a megajoule-class 20-MA arbitrary waveform generator. Neptune will allow the international scientific community to conduct dynamic
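
    The headline power figures quoted above hang together, as a quick back-of-the-envelope check shows (the numbers below are taken directly from the abstract).

        # Consistency check of the quoted power figures: 8 bricks of 5.8 GW per Marx,
        # 600 Marx generators in total. Illustrative arithmetic only.
        bricks_per_marx = 8
        brick_power_gw = 5.8
        marx_count = 600

        marx_power_gw = bricks_per_marx * brick_power_gw      # ~46 GW per Marx, as stated
        total_power_tw = marx_count * marx_power_gw / 1000.0  # ~28 TW peak, as stated
        print(f"{marx_power_gw:.1f} GW per Marx, {total_power_tw:.1f} TW total")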

  3. Verification, validation, and predictive capability in computational engineering and physics.

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, William Louis; Hirsch, Charles (Vrije Universiteit Brussel, Brussels, Belgium); Trucano, Timothy Guy

    2003-02-01

    Developers of computer codes, analysts who use the codes, and decision makers who rely on the results of the analyses face a critical question: How should confidence in modeling and simulation be critically assessed? Verification and validation (V&V) of computational simulations are the primary methods for building and quantifying this confidence. Briefly, verification is the assessment of the accuracy of the solution to a computational model. Validation is the assessment of the accuracy of a computational simulation by comparison with experimental data. In verification, the relationship of the simulation to the real world is not an issue. In validation, the relationship between computation and the real world, i.e., experimental data, is the issue.

  4. Physics, Computer Science and Mathematics Division annual report, January 1--December 31, 1976

    Energy Technology Data Exchange (ETDEWEB)

    Lepore, J.V. (ed.)

    1977-01-01

    This annual report of the Physics, Computer Science and Mathematics Division describes the scientific research and other work carried out within the Division during the calendar year 1976. The Division is concerned with work in experimental and theoretical physics, with computer science and applied mathematics, and with the operation of a computer center. The major physics research activity is in high-energy physics; a vigorous program is maintained in this pioneering field. The high-energy physics research program in the Division now focuses on experiments with e/sup +/e/sup -/ colliding beams using advanced techniques and developments initiated and perfected at the Laboratory. The Division continues its work in medium energy physics, with experimental work carried out at the Bevatron and at the Los Alamos Pi-Meson Facility. Work in computer science and applied mathematics includes construction of data bases, computer graphics, computational physics and data analysis, mathematical modeling, and mathematical analysis of differential and integral equations resulting from physical problems. The computer center serves the Laboratory by constantly upgrading its facility and by providing day-to-day service. This report is descriptive in nature; references to detailed publications are given. (RWR)

  5. Mapping University Students' Epistemic Framing of Computational Physics Using Network Analysis

    Science.gov (United States)

    Bodin, Madelen

    2012-01-01

    Solving physics problem in university physics education using a computational approach requires knowledge and skills in several domains, for example, physics, mathematics, programming, and modeling. These competences are in turn related to students' beliefs about the domains as well as about learning. These knowledge and beliefs components are…

  6. PREFACE: 21st International Conference on Computing in High Energy and Nuclear Physics (CHEP2015)

    Science.gov (United States)

    Sakamoto, H.; Bonacorsi, D.; Ueda, I.; Lyon, A.

    2015-12-01

    side of the intensity frontier, 2015 is also the start of Super-KEKB commissioning. Fixed-target experiments at CERN, Fermilab and J-PARC are growing bigger in size. In the field of nuclear physics, FAIR is under construction and RHIC is well engaged in its Phase-II research program, facing increased datasets and new challenges with precision physics. For the future, developments are progressing towards the construction of the ILC. In all these projects, computing and software will be even more important than before. Beyond those examples, non-accelerator experiments reported on their search for novel computing models as their apparatus and operation become larger and more distributed. The CHEP edition in Okinawa explored the synergy of HEP experimental physicists and computer scientists with data engineers and data scientists even further. Many areas of research are covered, and the techniques developed and adopted are presented with a richness and diversity never seen before. In numbers, CHEP 2015 attracted a very high number of oral and poster contributions, 535 in total, and hosted 450 participants from 28 countries. For the first time in the conference history, a system of 'keywords' at abstract submission time was set up and exploited to produce conference tracks depending on the topics covered in the proposed contributions. Authors were asked to select some 'application keywords' and/or 'technology keywords' to specify the content of their contribution. A bottom-up approach tried at CHEP 2015 in Okinawa for the first time in the history of this conference series, it met with broad satisfaction both in the International Advisory Committee and among the conference attendees. This process created 8 topical tracks, well balanced in content, manageable in terms of number of contributions, and able to create the adequate discussion space for trend topics (e.g. cloud computing and virtualization). CHEP 2015 hosted contributions on online computing; offline

  7. Plasma physics. Stochastic electron acceleration during spontaneous turbulent reconnection in a strong shock wave.

    Science.gov (United States)

    Matsumoto, Y; Amano, T; Kato, T N; Hoshino, M

    2015-02-27

    Explosive phenomena such as supernova remnant shocks and solar flares have demonstrated evidence for the production of relativistic particles. Interest has therefore been renewed in collisionless shock waves and magnetic reconnection as a means to achieve such energies. Although ions can be energized during such phenomena, the relativistic energy of the electrons remains a puzzle for theory. We present supercomputer simulations showing that efficient electron energization can occur during turbulent magnetic reconnection arising from a strong collisionless shock. Upstream electrons undergo first-order Fermi acceleration by colliding with reconnection jets and magnetic islands, giving rise to a nonthermal relativistic population downstream. These results shed new light on magnetic reconnection as an agent of energy dissipation and particle acceleration in strong shock waves.

  8. The LHC Magnet Programme From Accelerator Physics Requirements to Production in Industry

    CERN Document Server

    Wyss, C

    2000-01-01

    The LHC is designed to provide, at a beam energy of 7 TeV, a nominal peak luminosity of 10^34 cm^-2 s^-1 with simultaneous collisions at two high-luminosity insertions. This objective is being achieved by pushing the technology of superconducting accelerator magnets and cryogenics to its state-of-the-art limits, and by upgrading the existing CERN accelerators and infrastructures. In this paper, the parameters of the main dipole (1232 units) and quadrupole (392 units) magnets stemming from the LHC design considerations are presented and discussed. Subsequently, the R & D program undertaken at CERN and with industry, to experimentally validate magnet design assumptions, to assess the merits of design variants and to procure and commission the heavy tooling necessary for series manufacture, is described and its main difficulties and results highlighted. Finally a report is given about the procurement strategy, and the progress in manufacturing.

  9. Accelerator physics studies on the effects from an asynchronous beam dump onto the LHC experimental region collimators

    CERN Document Server

    Lari, L; Boccone, V; Bruce, R; Cerutti, F; Rossi, A; Vlachoudis, V; Mereghetti, A; Faus-Golfe, A

    2012-01-01

    Asynchronous beam aborts at the LHC are estimated to occur on average once per year. Accelerator physics studies of asynchronous dumps have been performed at different beam energies and beta-stars. The loss patterns are analyzed in order to identify the losses, in particular on the Phase 1 Tertiary Collimators (TCT), since their tungsten-based active jaw insert has a lower damage threshold than that of the other, carbon-based LHC collimators. Settings of the tilt angle of the TCTs are discussed with the aim of reducing the thermal loads on the TCTs themselves.

  10. Seeing the Nature of the Accelerating Physics: It's a SNAP

    Energy Technology Data Exchange (ETDEWEB)

    Albert, J.; Aldering, G.; Allam, S.; Althouse, W.; Amanullah, R.; Annis, J.; Astier, P.; Aumeunier, M.; Bailey, S.; Baltay, C.; Barrelet, E.; Basa, S.; Bebek, C.; Bergstom, L.; Bernstein, G.; Bester, M.; Besuner, B.; Bigelow, B.; Blandford, R.; Bohlin, R.; Bonissent, A.; /Caltech /LBL, Berkeley /Fermilab /SLAC /Stockholm U. /Paris, IN2P3

    2005-08-05

    For true insight into the nature of dark energy, measurements with the precision and accuracy of the Supernova/Acceleration Probe (SNAP) are required. Precursor or scaled-down experiments are unavoidably limited, even for distinguishing the cosmological constant. They can pave the way for, but should not delay, SNAP by developing calibration, refinement, and systematics control (and they will also provide important, exciting astrophysics).

  11. Accelerating statistical image reconstruction algorithms for fan-beam x-ray CT using cloud computing

    Science.gov (United States)

    Srivastava, Somesh; Rao, A. Ravishankar; Sheinin, Vadim

    2011-03-01

    Statistical image reconstruction algorithms potentially offer many advantages to x-ray computed tomography (CT), e.g. lower radiation dose. But, their adoption in practical CT scanners requires extra computation power, which is traditionally provided by incorporating additional computing hardware (e.g. CPU-clusters, GPUs, FPGAs etc.) into a scanner. An alternative solution is to access the required computation power over the internet from a cloud computing service, which is orders-of-magnitude more cost-effective. This is because users only pay a small pay-as-you-go fee for the computation resources used (i.e. CPU time, storage etc.), and completely avoid purchase, maintenance and upgrade costs. In this paper, we investigate the benefits and shortcomings of using cloud computing for statistical image reconstruction. We parallelized the most time-consuming parts of our application, the forward and back projectors, using MapReduce, the standard parallelization library on clouds. From preliminary investigations, we found that a large speedup is possible at a very low cost. But, communication overheads inside MapReduce can limit the maximum speedup, and a better MapReduce implementation might become necessary in the future. All the experiments for this paper, including development and testing, were completed on the Amazon Elastic Compute Cloud (EC2) for less than $20.
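
    The reported parallelization target is the pair of projection operators. As a rough illustration of the map/reduce pattern described, the sketch below splits a toy forward projection over worker processes with Python's multiprocessing; the dense system matrix, the worker count and the use of multiprocessing (rather than the Hadoop-style MapReduce on Amazon EC2 used in the paper) are assumptions for the example only.

```python
# Illustrative sketch only: a map/reduce-style split of the forward projector,
# the most expensive step in statistical CT reconstruction. The dense "system
# matrix" A is a stand-in; the paper's implementation runs MapReduce on EC2.
import numpy as np
from multiprocessing import Pool

def forward_chunk(args):
    A_rows, x = args            # "map": project the image x onto one block of detector rows
    return A_rows @ x

def forward_project(A, x, n_workers=4):
    chunks = np.array_split(A, n_workers, axis=0)
    with Pool(n_workers) as pool:
        partial = pool.map(forward_chunk, [(c, x) for c in chunks])
    return np.concatenate(partial)   # "reduce": stitch the sinogram back together

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.random((1200, 4096))     # toy system matrix (detector bins x voxels)
    x = rng.random(4096)             # toy image
    assert np.allclose(forward_project(A, x), A @ x)
```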

  12. Status and perspectives of atomic physics research at GSI : The new GSI accelerator project

    NARCIS (Netherlands)

    Stolker, T; Backe, H; Beyer, HF; Brauning-Demian, A; Hagmann, S; Ionescu, DC; Jungmann, K; Kluge, HJ; Kozhuharov, C; Kuhl, T; Liesen, D; Mann, R; Mokler, PH; Quint, W; Bosch, F.M.

    2003-01-01

    A short overview on the results of atomic physics research at the storage ring ESR is given followed by a presentation of the envisioned atomic physics program at the planned new GSI facility. The proposed new GSI facility will provide highest intensities of relativistic beams of both stable and uns

  13. Status and perspectives of atomic physics research at GSI : The new GSI accelerator project

    NARCIS (Netherlands)

    Stolker, T; Backe, H; Beyer, HF; Brauning-Demian, A; Hagmann, S; Ionescu, DC; Jungmann, K; Kluge, HJ; Kozhuharov, C; Kuhl, T; Liesen, D; Mann, R; Mokler, PH; Quint, W; Bosch, F.M.

    A short overview on the results of atomic physics research at the storage ring ESR is given followed by a presentation of the envisioned atomic physics program at the planned new GSI facility. The proposed new GSI facility will provide highest intensities of relativistic beams of both stable and

  14. Study of irradiation induced restructuring of high burnup fuel - Use of computer and accelerator for fuel science and engineering -

    Energy Technology Data Exchange (ETDEWEB)

    Sataka, M.; Ishikawa, N.; Chimn, Y.; Nakamura, J.; Amaya, M. [Japan Atomic Energy Agency, Naka Gun (Japan); Iwasawa, M.; Ohnuma, T.; Sonoda, T. [Central Research Institute of Electric Power Industry, Tokyo (Japan); Kinoshita, M.; Geng, H. Y.; Chen, Y.; Kaneta, Y. [The Univ. of Tokyo, Tokyo (Japan); Yasunaga, K.; Matsumura, S.; Yasuda, K. [Kyushu Univ., Motooka (Japan); Iwase [Osaka Prefecture Univ., Osaka (Japan); Ichinomiya, T.; Nishiuran, Y. [Hokkaido Univ., Kitaku (Japan); Matzke, HJ. [Academy of Ceramics, Karlsruhe (Germany)

    2008-10-15

    In order to develop advanced fuel for future LWR reactors, trials were made to simulate the high-burnup restructuring of ceramic fuel, using out-of-pile accelerator irradiation and computer simulation. The target is to reproduce the principal complex process as a whole. The reproduction of grain subdivision (sub-grain formation) was achieved in experiments with sequential combined irradiation. It occurred through recovery of the accumulated dislocations, forming cells and sub-boundaries at grain boundaries and pore surfaces. Details of the grain subdivision mechanism can now be studied outside the reactor. Extensive computational studies, both first-principles and molecular dynamics, gave the behavior of fission gas atoms and interstitial oxygen that assists the high-burnup restructuring.

  15. Computation of thermal properties via 3D homogenization of multiphase materials using FFT-based accelerated scheme

    CERN Document Server

    Lemaitre, Sophie; Choi, Daniel; Karamian, Philippe

    2015-01-01

    In this paper we study the effective thermal behaviour of a 3D multiphase composite material consisting of three isotropic phases: the matrix, the inclusions and the coating medium. For this purpose we use an accelerated FFT-based scheme initially proposed by Eyre and Milton (1999) to evaluate the thermal conductivity tensor. The matrix and spherical-inclusion media are polymers with similar properties, whereas the coating medium is metallic and hence better conducting; thus, the contrast between the coating and the other media is very large. For our study, we use RVEs (representative volume elements) generated by the RSA (random sequential adsorption) method developed in our previous works; then, we compute effective thermal properties using an FFT-based homogenization technique validated by comparison with the direct finite element method. We study the thermal behaviour of the 3D multiphase composite material and we show which features should be taken into account to make the computational approach efficient.
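
    As a rough illustration of the kind of FFT-based homogenization the abstract refers to, the sketch below implements the basic Lippmann-Schwinger fixed-point scheme for thermal conduction on a voxel grid. It is not the accelerated Eyre-Milton variant used in the paper, and the grid size, iteration count and homogeneous test case are assumptions for the example.

```python
# Basic FFT fixed-point homogenization step for thermal conduction (sketch).
# k is a voxel map of isotropic conductivities; E is the prescribed macroscopic
# temperature gradient; the Green operator of a reference medium k0 is applied
# in Fourier space at every iteration.
import numpy as np

def effective_conductivity(k, E=np.array([1.0, 0.0, 0.0]), k0=None, n_iter=100):
    n = k.shape                                            # (nx, ny, nz)
    if k0 is None:
        k0 = 0.5 * (k.min() + k.max())                     # reference conductivity
    freqs = [np.fft.fftfreq(ni) for ni in n]
    XI = np.stack(np.meshgrid(*freqs, indexing="ij"))      # (3, nx, ny, nz)
    xi2 = (XI ** 2).sum(axis=0)
    xi2[0, 0, 0] = 1.0                                     # avoid division by zero
    e = np.broadcast_to(E[:, None, None, None], (3,) + n).copy()   # gradient field
    for _ in range(n_iter):
        tau = (k - k0) * e                                 # polarization
        tau_h = np.fft.fftn(tau, axes=(1, 2, 3))
        # Green operator for conduction: Gamma0(xi) tau = xi (xi . tau) / (k0 |xi|^2)
        xi_dot_tau = (XI * tau_h).sum(axis=0)
        e_h = -XI * xi_dot_tau / (k0 * xi2)
        e_h[:, 0, 0, 0] = E * np.prod(n)                   # enforce the mean gradient
        e = np.real(np.fft.ifftn(e_h, axes=(1, 2, 3)))
    flux = k * e
    return flux.mean(axis=(1, 2, 3))      # one column of the effective tensor, <q> = k_eff E

# Toy check: a homogeneous medium returns its own conductivity, ~[2, 0, 0].
k = np.full((16, 16, 16), 2.0)
print(effective_conductivity(k))
```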

  16. Platform computing powers enterprise grid

    CERN Multimedia

    2002-01-01

    Platform Computing today announced that the Stanford Linear Accelerator Center (SLAC) is using Platform LSF 5 to carry out groundbreaking research into the origins of the universe. Platform LSF 5 will deliver the mammoth computing power that SLAC's linear accelerator needs to process the data associated with intense high-energy physics research (1 page).

  17. "Let's get physical": advantages of a physical model over 3D computer models and textbooks in learning imaging anatomy.

    Science.gov (United States)

    Preece, Daniel; Williams, Sarah B; Lam, Richard; Weller, Renate

    2013-01-01

    Three-dimensional (3D) information plays an important part in medical and veterinary education. Appreciating complex 3D spatial relationships requires a strong foundational understanding of anatomy and mental 3D visualization skills. Novel learning resources have been introduced to anatomy training to achieve this. Objective evaluation of their comparative efficacies remains scarce in the literature. This study developed and evaluated the use of a physical model in demonstrating the complex spatial relationships of the equine foot. It was hypothesized that the newly developed physical model would be more effective for students to learn magnetic resonance imaging (MRI) anatomy of the foot than textbooks or computer-based 3D models. Third-year veterinary medicine students were randomly assigned to one of three teaching aid groups (physical model; textbooks; 3D computer model). The comparative efficacies of the three teaching aids were assessed through students' abilities to identify anatomical structures on MR images. Overall mean MRI assessment scores were significantly higher in students utilizing the physical model (86.39%) compared with students using textbooks (62.61%) and the 3D computer model (63.68%), with no significant difference between the textbook and 3D computer model groups (P = 0.685). Student feedback was also more positive in the physical model group compared with both the textbook and 3D computer model groups. Our results suggest that physical models may hold a significant advantage over alternative learning resources in enhancing visuospatial and 3D understanding of complex anatomical architecture, and that 3D computer models have significant limitations with regard to 3D learning.

  18. Scholarly literature and the press: scientific impact and social perception of physics computing

    CERN Document Server

    Pia, Maria Grazia; Bell, Zane W; Dressendorfer, Paul V

    2014-01-01

    The broad coverage of the search for the Higgs boson in the mainstream media is a relative novelty for high energy physics (HEP) research, whose achievements have traditionally been limited to scholarly literature. This paper illustrates the results of a scientometric analysis of HEP computing in scientific literature, institutional media and the press, and a comparative overview of similar metrics concerning representative particle physics measurements. The picture emerging from these scientometric data documents the scientific impact and social perception of HEP computing. The results of this analysis suggest that improved communication of the scientific and social role of HEP computing would be beneficial to the high energy physics community.

  19. FACE DETECTION SYSTEM ON OPEN SOURCE PHYSICAL COMPUTING

    Directory of Open Access Journals (Sweden)

    Yupit Sudianto

    2014-01-01

    Full Text Available Face detection is an interesting research area. The majority of this research has been implemented on computers. Developing face detection on a computer requires a significant investment: in addition to the cost of procuring computers, there are operational costs such as electricity, because a computer consumes considerable power. This research proposes building a face detection system on an Arduino. The system is autonomous; in other words, the role of the computer is replaced by the Arduino. The Arduino used is an Arduino Mega 2560 with an ATmega2560 microcontroller, a 16 MHz clock, 256 KB of flash memory, 8 KB of SRAM and 4 KB of EEPROM, so not every face detection algorithm can be implemented on it. The memory limitations of the Arduino are addressed by applying template matching, using facial features in the form of a mask-shaped template. The detection rate achieved in this study is 80%-100%, where the Arduino's success in identifying a face is influenced by the distance between the camera and the face and by the person's movement.
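
    The detection method named in the abstract is template matching with a mask-shaped facial template. The sketch below illustrates that idea with a normalized cross-correlation search in NumPy on a desktop; it is not the authors' Arduino code, and the image size, template and "best match" criterion are assumptions for illustration.

```python
# Illustrative sketch of the template-matching idea: slide a small template
# over a grayscale image and score each position with normalized
# cross-correlation, keeping the best-scoring location.
import numpy as np

def match_template(image, template):
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    best_score, best_pos = -1.0, (0, 0)
    for i in range(image.shape[0] - th + 1):
        for j in range(image.shape[1] - tw + 1):
            w = image[i:i + th, j:j + tw]
            wz = w - w.mean()
            denom = np.sqrt((wz ** 2).sum()) * t_norm
            score = (wz * t).sum() / denom if denom > 0 else 0.0
            if score > best_score:
                best_score, best_pos = score, (i, j)
    return best_pos, best_score

rng = np.random.default_rng(1)
img = rng.random((60, 80))
tmpl = img[20:36, 30:46].copy()          # plant a known "face" patch
pos, score = match_template(img, tmpl)
print(pos, round(score, 3))              # expected: (20, 30) with score ~1.0
```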

  20. [Series: Medical Applications of the PHITS Code (2): Acceleration by Parallel Computing].

    Science.gov (United States)

    Furuta, Takuya; Sato, Tatsuhiko

    2015-01-01

    Time-consuming Monte Carlo dose calculations have become feasible owing to the development of computer technology. However, much of the recent development is due to the emergence of multi-core high-performance computers, so parallel computing has become a key to achieving good performance of software programs. The Monte Carlo simulation code PHITS contains two parallel computing functions: distributed-memory parallelization using the message passing interface (MPI) protocol and shared-memory parallelization using open multi-processing (OpenMP) directives. Users can choose between the two functions according to their needs. This paper explains the two functions and their advantages and disadvantages. Some test applications are also provided to show their performance on a typical multi-core high-performance workstation.
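
    The distributed-memory mode described for PHITS follows the usual Monte Carlo pattern of independent history batches combined by an MPI reduction. The sketch below shows that generic pattern with mpi4py; it is not PHITS itself, and run_histories() is a hypothetical stand-in for the transport kernel.

```python
# Minimal sketch of distributed-memory Monte Carlo: each MPI rank runs an
# independent batch of histories and the results are reduced onto rank 0.
# Run with, for example:  mpirun -n 4 python mc_mpi.py
import numpy as np
from mpi4py import MPI

def run_histories(n, seed):
    """Stand-in for a dose tally: score a toy quantity per history."""
    rng = np.random.default_rng(seed)
    return rng.exponential(scale=1.0, size=n).sum()

comm = MPI.COMM_WORLD
n_total = 1_000_000
n_local = n_total // comm.size                        # split histories across ranks
local_tally = run_histories(n_local, seed=comm.rank)  # independent random streams
total = comm.reduce(local_tally, op=MPI.SUM, root=0)  # combine partial tallies
if comm.rank == 0:
    print("mean per history:", total / (n_local * comm.size))
```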

  1. GPU-accelerated computing for Lagrangian coherent structures of multi-body gravitational regimes

    Science.gov (United States)

    Lin, Mingpei; Xu, Ming; Fu, Xiaoyu

    2017-04-01

    Based on a well-established theoretical foundation, Lagrangian Coherent Structures (LCSs) have elicited widespread research on the intrinsic structures of dynamical systems in many fields, including the field of astrodynamics. Although the application of LCSs in dynamical problems seems straightforward theoretically, its associated computational cost is prohibitive. We propose a block decomposition algorithm developed on Compute Unified Device Architecture (CUDA) platform for the computation of the LCSs of multi-body gravitational regimes. In order to take advantage of GPU's outstanding computing properties, such as Shared Memory, Constant Memory, and Zero-Copy, the algorithm utilizes a block decomposition strategy to facilitate computation of finite-time Lyapunov exponent (FTLE) fields of arbitrary size and timespan. Simulation results demonstrate that this GPU-based algorithm can satisfy double-precision accuracy requirements and greatly decrease the time needed to calculate final results, increasing speed by approximately 13 times. Additionally, this algorithm can be generalized to various large-scale computing problems, such as particle filters, constellation design, and Monte-Carlo simulation.
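
    The quantity being block-decomposed on the GPU is the FTLE field: integrate a grid of tracers, differentiate the flow map, and take the largest eigenvalue of the Cauchy-Green tensor. The CPU sketch below shows that pipeline on a standard double-gyre test flow rather than the multi-body gravitational field of the paper; the grid size, integration time and RK2 integrator are assumptions.

```python
# Finite-time Lyapunov exponent (FTLE) field of a 2D double-gyre flow (sketch).
import numpy as np

def velocity(x, y, t, A=0.1, eps=0.25, om=2 * np.pi / 10):
    a = eps * np.sin(om * t)
    b = 1 - 2 * eps * np.sin(om * t)
    f = a * x ** 2 + b * x
    u = -np.pi * A * np.sin(np.pi * f) * np.cos(np.pi * y)
    v = np.pi * A * np.cos(np.pi * f) * np.sin(np.pi * y) * (2 * a * x + b)
    return u, v

def flow_map(X, Y, t0=0.0, T=10.0, steps=200):
    dt = T / steps
    x, y, t = X.copy(), Y.copy(), t0
    for _ in range(steps):                 # midpoint (RK2) integration of the tracer grid
        u1, v1 = velocity(x, y, t)
        u2, v2 = velocity(x + 0.5 * dt * u1, y + 0.5 * dt * v1, t + 0.5 * dt)
        x, y, t = x + dt * u2, y + dt * v2, t + dt
    return x, y

nx, ny, T = 200, 100, 10.0
X, Y = np.meshgrid(np.linspace(0, 2, nx), np.linspace(0, 1, ny))
fx, fy = flow_map(X, Y, T=T)
dx, dy = 2.0 / (nx - 1), 1.0 / (ny - 1)
dfx_dy, dfx_dx = np.gradient(fx, dy, dx)   # gradient along axis 0 (y), then axis 1 (x)
dfy_dy, dfy_dx = np.gradient(fy, dy, dx)
# Cauchy-Green tensor C = F^T F of the 2x2 flow-map gradient F, per grid point.
c11 = dfx_dx ** 2 + dfy_dx ** 2
c22 = dfx_dy ** 2 + dfy_dy ** 2
c12 = dfx_dx * dfx_dy + dfy_dx * dfy_dy
lam_max = 0.5 * (c11 + c22 + np.sqrt((c11 - c22) ** 2 + 4 * c12 ** 2))
ftle = np.log(np.sqrt(lam_max)) / abs(T)   # finite-time Lyapunov exponent field
print(ftle.shape, float(ftle.max()))
```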

  2. John Adams Lecture | Accelerator-Based Neutrino Physics: Past, Present and Future by Kenneth Long | 8 December

    CERN Multimedia

    2014-01-01

    John Adams Lecture: Accelerator-Based Neutrino Physics: Past, Present and Future by Dr. Kenneth Long (Imperial College London & STFC).   Monday, 8 December 2014 from 2 p.m. to 4 p.m. at CERN ( 503-1-001 - Council Chamber ) Abstract: The study of the neutrino is the study of physics beyond the Standard Model. We now know that the neutrinos have mass and that neutrino mixing occurs causing neutrino flavour to oscillate as neutrinos propagate through space and time. Further, some measurements can be interpreted as hints for new particles known as sterile neutrinos. The measured values of the mixing parameters make it possible that the matter-antimatter (CP) symmetry may be violated through the mixing process. The consequences of observing CP-invariance violation in neutrinos would be profound. To discover CP-invariance violation will require measurements of exquisite precision. Accelerator-based neutrino sources are central to the future programme and advances in technique are required ...

  3. Application of Computational Physics: Blood Vessel Constrictions and Medical Infuses

    CERN Document Server

    Suprijadi; Subekti, Petrus; Viridi, Sparisoma

    2013-01-01

    Applications of computation in many fields have grown fast in the last two decades. Increases in computational performance help researchers to understand natural phenomena in many fields of science and technology, including the life sciences. Computational fluid dynamics is one of the numerical methods most commonly used to describe such phenomena. In this paper we propose moving particle semi-implicit (MPS) and molecular dynamics (MD) methods to describe different phenomena in blood vessels. The effect of increasing blood pressure on the vessel wall is calculated using the MD method, while the blending dynamics of two fluids is discussed using MPS. The result for the first phenomenon shows that around 80% constriction of a blood vessel leads to a pressure increase and the vessel wall starting to leak, while for the second phenomenon the result shows the visualization of a two-fluid mixture (drug and blood) influenced by the ratio of drug flow rate to blood flow rate. Keywords: molecular dynamic, blood vessel, fluid dynamic, movin...

  4. Accelerating science and innovation: societal benefits of European research in Particle Physics

    CERN Multimedia

    Radford, Tim; Jakobsson, Camilla; Marsollier, Arnaud; Mexner, Vanessa; O'Connor, Terry

    2013-01-01

    The story so far. Collaborative research in particle physics. The lesson for Europe: co-operation pays. Medicine and life sciences. The body of knowledge: particles harnessed for health. Energy and the environment. Think big: save energy and clean up the planet. Communication and new technologies. The powerhouse of invention. Society and skills. Power to the people. The European Strategy for Particle Physics. Update 2013.

  5. Accelerating Dust Storm Simulation by Balancing Task Allocation in Parallel Computing Environment

    Science.gov (United States)

    Gui, Z.; Yang, C.; Xia, J.; Huang, Q.; Yu, M.

    2013-12-01

    Dust storms have serious negative impacts on the environment, human health, and assets. The continuing global climate change has increased the frequency and intensity of dust storms in the past decades. To better understand and predict the distribution, intensity and structure of dust storms, a series of dust storm models have been developed, such as the Dust Regional Atmospheric Model (DREAM), the NMM meteorological module (NMM-dust) and the Chinese Unified Atmospheric Chemistry Environment for Dust (CUACE/Dust). The development and application of these models have contributed significantly to both scientific research and our daily life. However, dust storm simulation is a data- and computing-intensive process. Normally, a simulation of a single dust storm event may take hours or even days to run, which seriously impacts the timeliness of prediction and potential applications. To speed up the process, high-performance computing is widely adopted. By partitioning a large study area into small subdomains according to their geographic location and executing them on different computing nodes in a parallel fashion, the computing performance can be significantly improved. Since spatiotemporal correlations exist in the geophysical process of dust storm simulation, each subdomain allocated to a node needs to communicate with other geographically adjacent subdomains to exchange data. Inappropriate allocations may introduce imbalanced task loads and unnecessary communications among computing nodes. Therefore, the task allocation method is the key factor, which may impact the feasibility of the parallelization. The allocation algorithm needs to carefully balance the computing cost and communication cost for each computing node to minimize total execution time and reduce overall communication cost for the entire system. This presentation introduces two algorithms for such allocation and compares them with an evenly distributed allocation method. Specifically, 1) In order to get optimized solutions, a
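
    As a rough illustration of the load-balancing problem described here, the sketch below assigns subdomains to nodes with a simple longest-processing-time-first heuristic. It ignores the communication cost between adjacent subdomains that the presented algorithms also optimize, and the cost figures and node count are invented for the example.

```python
# Greedy load balancing: assign the most expensive subdomain to the currently
# least-loaded node, so the estimated computing load per node stays even.
import heapq

def allocate(subdomain_costs, n_nodes):
    """Return node -> list of subdomain ids, balancing total cost per node."""
    heap = [(0.0, node, []) for node in range(n_nodes)]   # (load, node, tasks)
    heapq.heapify(heap)
    order = sorted(enumerate(subdomain_costs), key=lambda kv: kv[1], reverse=True)
    for sub_id, cost in order:
        load, node, tasks = heapq.heappop(heap)           # least-loaded node
        tasks.append(sub_id)
        heapq.heappush(heap, (load + cost, node, tasks))
    return {node: tasks for _, node, tasks in heap}

costs = [9.0, 7.5, 6.0, 5.5, 4.0, 3.5, 2.0, 1.5]          # per-subdomain runtime estimates
print(allocate(costs, n_nodes=3))
```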

  6. Turkish students' computer self-efficacy from colleges of physical education and sports

    Directory of Open Access Journals (Sweden)

    Işıl Aktağ

    2016-03-01

    Full Text Available The purpose of this research was to determine the computer self-efficacy, performance outcome, personal outcome, affective outcome and anxiety level of physical education teacher candidates. The influence of computer usage and of taking a computer course on computer self-efficacy level was also determined. The subjects of this study were 452 physical education teacher candidates from 3 universities. Data were collected by a survey developed by Compeau and Higgins in 1995 and translated into Turkish by the researcher. The results of the study showed that there was no significant difference between male and female students in their computer self-efficacy, performance outcome, personal outcome and affective outcome, but a significant difference was found in their anxiety level, with female students having a lower anxiety level than male students. This study showed that as the duration and frequency of computer usage increase, students' computer self-efficacy increases too.

  7. Hadron Physics at the Charm and Bottom Thresholds and Other Novel QCD Physics Topics at the NICA Accelerator Facility

    Energy Technology Data Exchange (ETDEWEB)

    Brodsky, Stanley J.; /SLAC

    2012-06-20

    The NICA collider project at the Joint Institute for Nuclear Research in Dubna will have the capability of colliding protons, polarized deuterons, and nuclei at an effective nucleon-nucleon center-of-mass energy in the range √s_NN = 4 to 11 GeV. I briefly survey a number of novel hadron physics processes which can be investigated at the NICA collider. The topics include the formation of exotic heavy quark resonances near the charm and bottom thresholds; intrinsic strangeness, charm, and bottom phenomena; hidden-color degrees of freedom in nuclei; color transparency; single-spin asymmetries; the RHIC baryon anomaly; and non-universal antishadowing.

  8. Scratch as a Computational Modelling Tool for Teaching Physics

    Science.gov (United States)

    Lopez, Victor; Hernandez, Maria Isabel

    2015-01-01

    The Scratch online authoring tool, which features a simple programming language that has been adapted to primary and secondary students, is being used more and more in schools as it offers students and teachers the opportunity to use a tool to build scientific models and evaluate their behaviour, just as can be done with computational modelling…

  10. Three Computer Programs for Use in Introductory Level Physics Laboratories.

    Science.gov (United States)

    Kagan, David T.

    1984-01-01

    Describes three computer programs which operate on Apple II+ microcomputers: (1) a menu-driven graph drawing program; (2) a simulation of the Millikan oil drop experiment; and (3) a program used to study the half-life of silver. (Instructions for obtaining the programs from the author are included.) (JN)

  11. From Physical to Virtual Wireless Sensor Networks using Cloud Computing

    Directory of Open Access Journals (Sweden)

    Maki Matandiko Rutakemwa

    2013-01-01

    Full Text Available In the modern world, billions of physical sensors are used for various purposes: environment monitoring, healthcare, education, defense, manufacturing, smart homes, precision agriculture and others. Nonetheless, they are frequently used only by their own applications, thereby neglecting the significant possibilities of sharing the resources in order to ensure the availability and performance of physical sensors. This paper assumes that the immense power of the Cloud can only be fully exploited if it is seamlessly integrated into our physical lives. The principal merit of this work is a novel architecture in which users can easily share several types of physical sensors, so that many new services can be provided via a virtualized structure that allows allocation of sensor resources to different users and applications under flexible usage scenarios, within which users can easily collect, access, process, visualize, archive, share and search large amounts of sensor data from different applications. Moreover, an implementation has been achieved using the Arduino-ATmega328 as the hardware platform and Eucalyptus/OpenStack with Orchestra-Juju for the private sensor Cloud. This private Cloud has then been connected to some well-known public clouds such as Amazon EC2, ThingSpeak, SensorCloud and Pachube. The testing was 80% successful. The recommendation for future work would be to improve the effectiveness of virtual sensors by applying optimization techniques and other methods.

  12. THE EMPLOYMENT OF COMPUTER TECHNOLOGIES IN LABORATORY COURSE ON PHYSICS

    Directory of Open Access Journals (Sweden)

    Liudmyla M. Nakonechna

    2010-08-01

    Full Text Available This paper considers the development of a conceptually new virtual physics laboratory, whose introduction into secondary schools will allow teachers to check students' theoretical knowledge before laboratory work and will help students acquire modern experimental methods and skills.

  13. Third order TRANSPORT with MAD (Methodical Accelerator Design) input

    Energy Technology Data Exchange (ETDEWEB)

    Carey, D.C.

    1988-09-20

    This paper describes computer-aided design codes for particle accelerators. Among the topics discussed are: input beam description; parameters and algebraic expressions; the physical elements; beam lines; operations; and third-order transfer matrix. (LSP)
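
    The transfer-matrix formalism behind TRANSPORT can be illustrated at first order: each element maps phase-space coordinates linearly, and a beam line is the ordered matrix product of its elements. The sketch below uses only a 2x2 (x, x') description with made-up drift and thin-quadrupole parameters; TRANSPORT itself works with all six coordinates up to third order.

```python
# First-order transfer matrices for a toy beam line (sketch).
import numpy as np

def drift(length):
    return np.array([[1.0, length],
                     [0.0, 1.0]])

def thin_quad(focal_length):
    return np.array([[1.0, 0.0],
                     [-1.0 / focal_length, 1.0]])

# A simple FODO-like cell: drift, focusing quad, drift, defocusing quad.
elements = [drift(1.0), thin_quad(2.0), drift(1.0), thin_quad(-2.0)]
R = np.linalg.multi_dot(list(reversed(elements)))   # first element acts first

x0 = np.array([1e-3, 0.5e-3])                       # initial (x [m], x' [rad])
print("cell matrix:\n", R)
print("transported coordinates:", R @ x0)
```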

  14. Accelerating the discovery of space-time patterns of infectious diseases using parallel computing.

    Science.gov (United States)

    Hohl, Alexander; Delmelle, Eric; Tang, Wenwu; Casas, Irene

    2016-11-01

    Infectious diseases have complex transmission cycles, and effective public health responses require the ability to monitor outbreaks in a timely manner. Space-time statistics facilitate the discovery of disease dynamics including rate of spread and seasonal cyclic patterns, but are computationally demanding, especially for datasets of increasing size, diversity and availability. High-performance computing reduces the effort required to identify these patterns, however heterogeneity in the data must be accounted for. We develop an adaptive space-time domain decomposition approach for parallel computation of the space-time kernel density. We apply our methodology to individual reported dengue cases from 2010 to 2011 in the city of Cali, Colombia. The parallel implementation reaches significant speedup compared to sequential counterparts. Density values are visualized in an interactive 3D environment, which facilitates the identification and communication of uneven space-time distribution of disease events. Our framework has the potential to enhance the timely monitoring of infectious diseases.
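
    The computation being parallelized is a space-time kernel density estimate over a voxel grid. The sketch below shows a serial version of that core evaluation with separable Epanechnikov-type kernels in space and time; the bandwidths, synthetic cases and exact kernel form are assumptions, and the paper's adaptive domain decomposition is not reproduced.

```python
# Space-time kernel density (sketch): each (x, y, t) voxel accumulates
# contributions from cases within a spatial bandwidth hs and temporal
# bandwidth ht.
import numpy as np

def stkde(cases_xyt, grid_xyt, hs=500.0, ht=14.0):
    """cases_xyt: (n, 3) events; grid_xyt: (m, 3) voxel centres; returns (m,) density."""
    gx, cx = grid_xyt[:, None, :2], cases_xyt[None, :, :2]
    ds = np.linalg.norm(gx - cx, axis=2) / hs                 # scaled spatial distance
    dt = np.abs(grid_xyt[:, None, 2] - cases_xyt[None, :, 2]) / ht
    ks = np.where(ds < 1, 0.75 * (1 - ds ** 2), 0.0)          # Epanechnikov-type kernels
    kt = np.where(dt < 1, 0.75 * (1 - dt ** 2), 0.0)
    return (ks * kt).sum(axis=1) / (len(cases_xyt) * hs ** 2 * ht)

rng = np.random.default_rng(2)
cases = np.column_stack([rng.uniform(0, 5000, 300), rng.uniform(0, 5000, 300),
                         rng.uniform(0, 365, 300)])           # x [m], y [m], t [days]
grid = np.column_stack([rng.uniform(0, 5000, 1000), rng.uniform(0, 5000, 1000),
                        rng.uniform(0, 365, 1000)])
print(stkde(cases, grid).shape)
```

    In the parallel setting described by the abstract, the voxel grid would be decomposed into spatial-temporal blocks and each block evaluated by a separate worker; the serial kernel above is the per-block workload.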

  15. Accelerating selected columns of the density matrix computations via approximate column selection

    CERN Document Server

    Damle, Anil; Ying, Lexing

    2016-01-01

    Localized representation of the Kohn-Sham subspace plays an important role in quantum chemistry and materials science. The recently developed selected columns of the density matrix (SCDM) method [J. Chem. Theory Comput. 11, 1463, 2015] is a simple and robust procedure for finding a localized representation of a set of Kohn-Sham orbitals from an insulating system. The SCDM method allows the direct construction of a well conditioned (or even orthonormal) and localized basis for the Kohn-Sham subspace. The SCDM procedure avoids the use of an optimization procedure and does not depend on any adjustable parameters. The most computationally expensive step of the SCDM method is a column pivoted QR factorization that identifies the important columns for constructing the localized basis set. In this paper, we develop a two stage approximate column selection strategy to find the important columns at much lower computational cost. We demonstrate the effectiveness of this process using a dissociation process of a BH$_{3}...
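
    The step the paper accelerates is the column-pivoted QR used by SCDM to pick representative columns of the density matrix. The sketch below shows that selection and the resulting localized basis with SciPy, using a random orthonormal matrix as a stand-in for Kohn-Sham orbitals; the sizes and toy orbitals are assumptions.

```python
# SCDM column selection via column-pivoted QR (sketch).
import numpy as np
from scipy.linalg import qr

rng = np.random.default_rng(0)
n_grid, n_occ = 2000, 20
Psi, _ = np.linalg.qr(rng.standard_normal((n_grid, n_occ)))   # orthonormal "orbitals" (grid x bands)

# Column-pivoted QR of Psi^T: the first n_occ pivots are the selected grid points.
_, _, piv = qr(Psi.T, pivoting=True, mode="economic")
selected = piv[:n_occ]

# Localized (non-orthogonal) basis: the selected columns of P = Psi Psi^T.
C = Psi @ Psi[selected].T
print(selected[:5], C.shape)
```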

  16. Acceleration of FEM-based transfer matrix computation for forward and inverse problems of electrocardiography.

    Science.gov (United States)

    Farina, Dmytro; Jiang, Y; Dössel, O

    2009-12-01

    The distributions of transmembrane voltage (TMV) within the cardiac tissue are linearly connected with the patient's body surface potential maps (BSPMs) at every time instant. The matrix describing the relation between the respective distributions is referred to as the transfer matrix. This matrix can be employed to carry out forward calculations in order to find the BSPM for any given distribution of TMV inside the heart. Its inverse can be used to reconstruct the cardiac activity non-invasively, which can be an important diagnostic tool in clinical practice. The computation of this matrix using the finite element method can be quite time-consuming. In this work, a method is proposed that speeds up this process by computing an approximate transfer matrix instead of the precise one. The method is tested on three realistic anatomical models of real-world patients. It is shown that the computation time can be reduced by 50% without loss of accuracy.
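
    The forward and inverse problems discussed here are both built on one linear relation between cardiac sources and body-surface potentials. The sketch below illustrates it with a random stand-in transfer matrix and a Tikhonov-regularized inverse; the matrix dimensions, noise level and regularization weight are assumptions, and the paper's FEM-based (approximate) matrix construction is not reproduced.

```python
# Forward problem b = A x and a Tikhonov-regularized inverse (sketch).
import numpy as np

rng = np.random.default_rng(3)
n_leads, n_sources = 120, 500
A = rng.standard_normal((n_leads, n_sources))          # transfer matrix (stand-in)
x_true = rng.standard_normal(n_sources)                # transmembrane-voltage pattern
b = A @ x_true + 0.01 * rng.standard_normal(n_leads)   # forward problem + noise

lam = 1e-2                                             # Tikhonov regularization weight
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n_sources), A.T @ b)
print("relative residual:", np.linalg.norm(A @ x_hat - b) / np.linalg.norm(b))
```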

  17. Computer-assisted optics teaching at the Moscow Institute of Physics and Technology

    Science.gov (United States)

    Soboleva, Natalia N.; Kozel, Stanislav M.; Lockshin, Gennady R.; Entin, M. A.; Galichsky, K. V.; Lebedinsky, P. L.; Zhdanovich, P. M.

    1995-10-01

    Traditional methods used in optics teaching lack clarity and vividness when illustrating abstract notions such as polarization or interference. This is where computer models may help, but they usually show only a single phenomenon or process and don't let the student see the entire picture. For this reason, the courseware 'Wave Optics on the Computer', consisting of a number of related simulations, was developed at the Moscow Institute of Physics and Technology. It is intended for students studying optics at universities. Recently we have developed different simulations in optics for the secondary school level; they are included as part of the large computer courseware 'Physics by Pictures'. The courseware 'Wave Optics on the Computer' consists of nine large simulation programs and a textbook. The programs simulate the basic phenomena of wave optics, and parameters of the optical systems can be varied by the user. The textbook contains theoretical considerations on the studied optical phenomena, recommendations concerning work with the computer programs and, especially for those wishing to understand wave optics more deeply, original problems for individual solution. At the Moscow Institute of Physics and Technology the course 'Wave Optics on the Computer' is used for teaching optics within the course of general physics. The course provides both computer-assisted teaching for lecture support and computer-assisted learning for students during seminars in the computer classroom.

  18. PIC codes for plasma accelerators on emerging computer architectures (GPUs, multicore/manycore CPUs)

    Science.gov (United States)

    Vincenti, Henri

    2016-03-01

    The advent of exascale computers will enable 3D simulations of new laser-plasma interaction regimes that were previously out of reach of current petascale computers. However, the paradigm used to write current PIC codes will have to change in order to fully exploit the potential of these new computing architectures. Indeed, achieving exascale computing facilities in the next decade will be a great challenge in terms of energy consumption and will imply hardware developments directly impacting our way of implementing PIC codes. As data movement (from die to network) is by far the most energy-consuming part of an algorithm, future computers will tend to increase memory locality at the hardware level and reduce the energy consumption related to data movement by using more and more cores on each compute node ('fat nodes') with a reduced clock speed to allow for efficient cooling. To compensate for the frequency decrease, CPU vendors are making use of long SIMD instruction registers that are able to process multiple data with one arithmetic operator in one clock cycle; SIMD register length is expected to double every four years. GPUs also have a reduced clock speed per core and can process multiple instructions on multiple data (MIMD). At the software level, Particle-In-Cell (PIC) codes will thus have to achieve both good memory locality and vectorization (for multicore/manycore CPUs) to take full advantage of these upcoming architectures. In this talk, we present the portable solutions we implemented in our high-performance skeleton PIC code PICSAR to achieve both good memory locality and cache reuse as well as good vectorization on SIMD architectures. We also present the portable solutions used to parallelize the pseudo-spectral quasi-cylindrical code FBPIC on GPUs using the Numba Python compiler.

  19. System aspects of small computers in particle physics: a personal view

    CERN Document Server

    Zacharov, B

    1972-01-01

    A general review of those areas where small computers are used in the whole field of elementary particle physics is presented. Detailed considerations are made of some particular aspects, mainly interfacing of equipment and communications links. (7 refs).

  20. Physics, Computer Science and Mathematics Division. Annual report, 1 January-31 December 1979

    Energy Technology Data Exchange (ETDEWEB)

    Lepore, J.V. (ed.)

    1980-09-01

    This annual report describes the research work carried out by the Physics, Computer Science and Mathematics Division during 1979. The major research effort of the Division remained High Energy Particle Physics with emphasis on preparing for experiments to be carried out at PEP. The largest effort in this field was for development and construction of the Time Projection Chamber, a powerful new particle detector. This work took a large fraction of the effort of the physics staff of the Division together with the equivalent of more than a hundred staff members in the Engineering Departments and shops. Research in the Computer Science and Mathematics Department of the Division (CSAM) has been rapidly expanding during the last few years. Cross fertilization of ideas and talents resulting from the diversity of effort in the Physics, Computer Science and Mathematics Division contributed to the software design for the Time Projection Chamber, made by the Computer Science and Applied Mathematics Department.