WorldWideScience

Sample records for accelerated strategic computing

  1. Delivering Insight The History of the Accelerated Strategic Computing Initiative

    Larzelere II, A R

    2007-01-03

    The history of the Accelerated Strategic Computing Initiative (ASCI) tells of the development of computational simulation into a third fundamental piece of the scientific method, on a par with theory and experiment. ASCI did not invent the idea, nor was it alone in bringing it to fruition. But ASCI provided the wherewithal - hardware, software, environment, funding, and, most of all, the urgency - that made it happen. On October 1, 2005, the Initiative completed its tenth year of funding. The advances made by ASCI over its first decade are truly incredible. Lawrence Livermore, Los Alamos, and Sandia National Laboratories, along with leadership provided by the Department of Energy's Defense Programs Headquarters, fundamentally changed computational simulation and how it is used to enable scientific insight. To do this, astounding advances were made in simulation applications, computing platforms, and user environments. ASCI dramatically changed existing - and forged new - relationships, both among the Laboratories and with outside partners. By its tenth anniversary, despite daunting challenges, ASCI had accomplished all of the major goals set at its beginning. The history of ASCI is about the vision, leadership, endurance, and partnerships that made these advances possible.

  2. Accelerated Strategic Computing Initiative (ASCI) Program Plan [FY2000]

    None

    2000-01-01

    In August 1995, the United States took a significant step to reduce the nuclear danger. The decision to pursue a zero-yield Comprehensive Test Ban Treaty will allow greater control over the proliferation of nuclear weapons and will halt the growth of new nuclear systems. This step is only possible because of the Stockpile Stewardship Program, which provides an alternative means of ensuring the safety, performance, and reliability of the United States' enduring stockpile. At the heart of the Stockpile Stewardship Program is ASCI, which will create the high-confidence simulation capabilities needed to integrate fundamental science, experiments, and archival data into the stewardship of the actual weapons in the stockpile. ASCI will also serve to drive the development of simulation as a national resource by working closely with the computer industry and with universities.

  3. Accelerating Strategic Change Through Action Learning

    Younger, Jon; Sørensen, René; Cleemann, Christine;

    2013-01-01

    Purpose – The purpose of this paper is to describe how a leading global company used action-learning based leadership development to accelerate strategic culture change. Design/methodology/approach – It describes the need for change, and the methodology and approach by which the initiative, Impac...

  4. Computational Biology: A Strategic Initiative LDRD

    Barksy, D; Colvin, M

    2002-02-07

    The goal of this Strategic Initiative LDRD project was to establish at LLNL a new core capability in computational biology, combining laboratory strengths in high performance computing, molecular biology, and computational chemistry and physics. As described in this report, this project has been very successful in achieving this goal. This success is demonstrated by the large number of refereed publications, invited talks, and follow-on research grants that have resulted from this project. Additionally, this project has helped build connections to internal and external collaborators and funding agencies that will be critical to the long-term vitality of LLNL programs in computational biology. Most importantly, this project has helped establish on-going research groups in the Biology and Biotechnology Research Program, the Physics and Applied Technology Directorate, and the Computation Directorate. These groups include three laboratory staff members originally hired as post-doctoral researchers for this strategic initiative.

  5. Applications of the Strategic Defense Initiative's compact accelerators

    Montanarelli, Nick; Lynch, Ted

    1991-01-01

    The Strategic Defense Initiative's (SDI) investment in particle accelerator technology for its directed energy weapons program has produced breakthroughs in the size and power of new accelerators. These accelerators, in turn, have produced spinoffs in several areas: the radio frequency quadrupole linear accelerator (RFQ linac) was recently incorporated into the design of a cancer therapy unit at the Loma Linda University Medical Center, an SDI-sponsored compact induction linear accelerator may replace Cobalt-60 radiation and hazardous ethylene-oxide as a method for sterilizing medical products, and other SDIO-funded accelerators may be used to produce the radioactive isotopes oxygen-15, nitrogen-13, carbon-11, and fluorine-18 for positron emission tomography (PET). Other applications of these accelerators include bomb detection, non-destructive inspection, decomposing toxic substances in contaminated ground water, and eliminating nuclear waste.

  6. Applications of the Strategic Defense Initiative's compact accelerators

    Montanarelli, Nick; Lynch, Ted

    1991-12-01

    The Strategic Defense Initiative's (SDI) investment in particle accelerator technology for its directed energy weapons program has produced breakthroughs in the size and power of new accelerators. These accelerators, in turn, have produced spinoffs in several areas: the radio frequency quadrupole linear accelerator (RFQ linac) was recently incorporated into the design of a cancer therapy unit at the Loma Linda University Medical Center, an SDI-sponsored compact induction linear accelerator may replace Cobalt-60 radiation and hazardous ethylene-oxide as a method for sterilizing medical products, and other SDIO-funded accelerators may be used to produce the radioactive isotopes oxygen-15, nitrogen-13, carbon-11, and fluorine-18 for positron emission tomography (PET). Other applications of these accelerators include bomb detection, non-destructive inspection, decomposing toxic substances in contaminated ground water, and eliminating nuclear waste.

  7. Computer codes in accelerator domain

    In this report a list of computer codes for calculations in accelerator physics is presented. The codes concern the design of accelerator shieldings, beam dynamics of synchrotrons and storage rings, the simulation of radiation fields in accelerators, the design of RF cavities, beam dynamics of microtrons, the optics of charged-particle beams, the design of accelerator components, the calculation of magnetic fields, the computation of thermal and mechanical processes in accelerator structures, the design of magnets, and the optimization of beam lines. Most of the codes are written in FORTRAN. (HSI)

  8. Personal computers in accelerator control

    Anderssen, P. S.

    1988-07-01

    The advent of the personal computer has created a popular movement which has also made a strong impact on science and engineering. Flexible software environments combined with good computational performance and large storage capacities are becoming available at steadily decreasing costs. Of equal importance, however, is the quality of the user interface offered on many of these products. Graphics and screen interaction are available in ways that were only possible on specialized systems before. Accelerator engineers were quick to pick up the new technology. The first applications were probably for controllers and data gatherers for beam measurement equipment. Others followed, and today it is conceivable to make the personal computer a standard component of an accelerator control system. This paper reviews the experience gained at CERN so far and describes the approach taken in the design of the common control center for the SPS and the future LEP accelerators. The design goal has been to be able to integrate personal computers into the accelerator control system and to build the operator's workplace around them.

  9. Accelerating Clean Energy Commercialization. A Strategic Partnership Approach

    Adams, Richard [National Renewable Energy Lab. (NREL), Golden, CO (United States); Pless, Jacquelyn [Joint Institute for Strategic Energy Analysis, Golden, CO (United States); Arent, Douglas J. [Joint Institute for Strategic Energy Analysis, Golden, CO (United States); Locklin, Ken [Impax Asset Management Group (United Kingdom)

    2016-04-01

    Technology development in the clean energy and broader clean tech space has proven to be challenging. Long-standing methods for advancing clean energy technologies from science to commercialization are best known for relatively slow, linear progression through research and development, demonstration, and deployment (RDD&D), and are characterized by well-known valleys of death for financing. Investment returns expected by traditional venture capital investors have been difficult to achieve, particularly for hardware-centric innovations and for companies that are subject to project finance risks. Commercialization support from incubators and accelerators has helped address these challenges by offering more support services to start-ups; however, more effort is needed to achieve the desired clean energy future. The emergence of new strategic investors and partners in recent years has opened up innovative opportunities for clean tech entrepreneurs, and novel commercialization models are emerging that involve new alliances among clean energy companies, RDD&D, support systems, and strategic customers. For instance, Wells Fargo and Company (WFC) and the National Renewable Energy Laboratory (NREL) have launched a new technology incubator that supports faster commercialization through a focus on technology development. The incubator combines strategic financing, technology and technical assistance, strategic customer site validation, and ongoing financial support.

  10. Computer programs in accelerator physics

    Three areas of accelerator physics are discussed in which computer programs have been applied with much success: i) single-particle beam dynamics in circular machines, i.e. the design and matching of machine lattices; ii) computations of electromagnetic fields in RF cavities and similar objects, useful for the design of RF cavities and for the calculation of wake fields; iii) simulation of betatron and synchrotron oscillations in a machine with non-linear elements, e.g. sextupoles, and of bunch lengthening due to longitudinal wake fields. (orig.)

  11. Accelerating Scientific Computations using FPGAs

    Pell, O.; Atasu, K.; Mencer, O.

    Field Programmable Gate Arrays (FPGAs) are semiconductor devices that contain a grid of programmable cells, which the user configures to implement any digital circuit of up to a few million gates. Modern FPGAs allow the user to reconfigure these circuits many times each second, making FPGAs fully programmable and general purpose. Recent FPGA technology provides sufficient resources to tackle scientific applications on large-scale parallel systems. As a case study, we implement the Fast Fourier Transform [1] in a flexible floating point implementation. We utilize A Stream Compiler [2] (ASC) which combines C++ syntax with flexible floating point support by providing a 'HWfloat' data-type. The resulting FFT can be targeted to a variety of FPGA platforms in FFTW-style, though not yet completely automatically. The resulting FFT circuit can be adapted to the particular resources available on the system. The optimal implementation of an FFT accelerator depends on the length and dimensionality of the FFT, the available FPGA area, the available hard DSP blocks, the FPGA board architecture, and the precision and range of the application [3]. Software-style object-oriented abstractions allow us to pursue an accelerated pace of development by maximizing re-use of design patterns. ASC allows a few core hardware descriptions to generate hundreds of different circuit variants to meet particular speed, area and precision goals. The key to achieving maximum acceleration of FFT computation is to match memory and compute bandwidths so that maximum use is made of computational resources. Modern FPGAs contain up to hundreds of independent SRAM banks to store intermediate results, providing ample scope for optimizing memory parallelism. At 175 MHz, one of Maxeler's Radix-4 FFT cores computes 4x as many 1024pt FFTs per second as a dual Pentium-IV Xeon machine running FFTW. Eight such parallel cores fit onto the largest FPGA in the Xilinx Virtex-4 family, providing a 32x speed-up over the dual Pentium-IV Xeon machine.

  12. Cloud computing strategic framework (FY13 - FY15).

    Arellano, Lawrence R.; Arroyo, Steven C.; Giese, Gerald J.; Cox, Philip M.; Rogers, G. Kelly

    2012-11-01

    This document presents an architectural framework (plan) and roadmap for the implementation of a robust Cloud Computing capability at Sandia National Laboratories. It is intended to be a living document and serve as the basis for detailed implementation plans, project proposals and strategic investment requests.

  13. Accelerating Climate Simulations Through Hybrid Computing

    Zhou, Shujia; Sinno, Scott; Cruz, Carlos; Purcell, Mark

    2009-01-01

    Unconventional multi-core processors (e.g., IBM Cell B/E and NVIDIA GPU) have emerged as accelerators in climate simulation. However, climate models typically run on parallel computers with conventional processors (e.g., Intel and AMD) using MPI. Connecting accelerators to this architecture efficiently and easily becomes a critical issue. When using MPI for connection, we identified two challenges: (1) an identical MPI implementation is required in both systems; and (2) existing MPI code must be modified to accommodate the accelerators. In response, we have extended and deployed IBM Dynamic Application Virtualization (DAV) in a hybrid computing prototype system (one blade with two Intel quad-core processors, two IBM QS22 Cell blades, connected with Infiniband), allowing for seamlessly offloading compute-intensive functions to remote, heterogeneous accelerators in a scalable, load-balanced manner. Currently, a climate solar radiation model running with multiple MPI processes has been offloaded to multiple Cell blades with approximately 10% network overhead.
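
    The following is a minimal, generic sketch of the offloading pattern described above (plain Python standard library; it does not show IBM DAV's actual interface, and the function and parameter names are hypothetical): submit the compute-intensive routine asynchronously, keep the host model busy while it runs, then gather the offloaded results.

        # Generic sketch of offloading a compute-intensive function and overlapping it
        # with host-side work (illustrates the pattern only; IBM DAV's real interface differs).
        from concurrent.futures import ProcessPoolExecutor
        import math

        def solar_radiation_column(param):
            """Stand-in for the compute-intensive physics routine to be offloaded."""
            return sum(math.exp(-k * param) for k in range(1, 500_000))

        def main():
            columns = [0.001 * i for i in range(1, 9)]
            with ProcessPoolExecutor(max_workers=4) as pool:   # stand-in for remote accelerators
                futures = [pool.submit(solar_radiation_column, c) for c in columns]
                # The host model continues with other work while the offloaded calls run.
                host_side = sum(c * c for c in columns)
                results = [f.result() for f in futures]        # gather offloaded results
            print("offloaded results:", len(results), "host-side work:", host_side)

        if __name__ == "__main__":
            main()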

  14. Computing tools for accelerator design calculations

    Fischler, M.; Nash, T.

    1984-01-01

    This note is intended as a brief, summary guide for accelerator designers to the new generation of commercial and special processors that allow great increases in computing cost effectiveness. New thinking is required to take best advantage of these computing opportunities, in particular, when moving from analytical approaches to tracking simulations. In this paper, we outline the relevant considerations.

  15. FPGA-accelerated simulation of computer systems

    Angepat, Hari; Chung, Eric S; Hoe, James C

    2014-01-01

    To date, the most common form of computer-system simulator is software-based, running on standard computers. One promising approach to improve simulation performance is to apply hardware, specifically reconfigurable hardware in the form of field programmable gate arrays (FPGAs). This manuscript describes various approaches of using FPGAs to accelerate software-implemented simulation of computer systems and selected simulators that incorporate those techniques. More precisely, we describe a simulation architecture taxonomy that incorporates a simulation architecture specifically designed f

  16. Accelerating artificial intelligence with reconfigurable computing

    Cieszewski, Radoslaw

    Reconfigurable computing is emerging as an important area of research in computer architectures and software systems. Many algorithms can be greatly accelerated by placing the computationally intense portions of an algorithm into reconfigurable hardware. Reconfigurable computing combines many benefits of both software and ASIC implementations. Like software, the mapped circuit is flexible, and can be changed over the lifetime of the system. Similar to an ASIC, reconfigurable systems provide a method to map circuits into hardware. Reconfigurable systems therefore have the potential to achieve far greater performance than software as a result of bypassing the fetch-decode-execute operations of traditional processors, and possibly exploiting a greater level of parallelism. Artificial intelligence is one such field, with many different algorithms that can be accelerated. This paper presents example hardware implementations of Artificial Neural Networks, Genetic Algorithms and Expert Systems.

  17. Computer networks in future accelerator control systems

    Some findings of a study concerning a computer based control and monitoring system for the proposed ISABELLE Intersecting Storage Accelerator are presented. Requirements for development and implementation of such a system are discussed. An architecture is proposed where the system components are partitioned along functional lines. Implementation of some conceptually significant components is reviewed

  18. The strategic planning initiative for accelerated cleanup of Rocky Flats

    The difficulties associated with the congressional funding cycles, regulatory redirection, remediation schedule deadlines, and the lack of a mixed waste (MW) repository have adversely impacted the environmental restoration (ER) program across the entire U.S. Department of Energy (DOE) complex including Rocky Flats Plant (RFP). In an effort to counteract and reduce the impacts of these difficulties, RFP management saw the need for developing a revised ER Program. The objective of the revised ER approach is to identify an initiative that would accelerate the cleanup process and reduce costs without compromising either protection of human health or the environment. A special analysis with that assigned objective was initiated in June 1993 using a team that included DOE Headquarters and Rocky Flats Field Office (RFFO), EG&G personnel, and experts from nationally recognized ER firms. The analysis relied on recent regulatory and process innovations such as DOE's Streamlined Approach for Environmental Restoration (SAFER) and EPA's Superfund Accelerated Cleanup Model (SACM) and Corrective Action Management Units (CAMU). The analysis also incorporated other ongoing improvement efforts initiated by RFP, such as the Quality Action Team and the Integrated Planning Process

  19. Strategic Plan for a Scientific Cloud Computing infrastructure for Europe

    Lengert, Maryline

    2011-01-01

    Here we present the vision, concept and direction for forming a European Industrial Strategy for a Scientific Cloud Computing Infrastructure to be implemented by 2020. This will be the framework for decisions and for securing support and approval in establishing, initially, an R&D European Cloud Computing Infrastructure that serves the need of European Research Area (ERA) and Space Agencies. This Cloud Infrastructure will have the potential beyond this initial user base to evolve to provide similar services to a broad range of customers including government and SMEs. We explain how this plan aims to support the broader strategic goals of our organisations and identify the benefits to be realised by adopting an industrial Cloud Computing model. We also outline the prerequisites and commitment needed to achieve these objectives.

  20. Present SLAC accelerator computer control system features

    The current functional organization and state of software development of the computer control system of the Stanford Linear Accelerator are described. Included is a discussion of the distribution of functions throughout the system, the local controller features, and currently implemented features of the touch panel portion of the system. The functional use of our triplex of PDP11-34 computers sharing common memory is described. Also included is a description of the use of pseudopanel tables as data tables for closed loop control functions

  1. Detonation Type Ram Accelerator: A Computational Investigation

    Sunil Bhat

    2000-01-01

    An analytical model explaining the functional characteristics of a detonation-type ram accelerator is presented. Major flow processes, namely (i) supersonic flow over the cone of the projectile, (ii) initiation of a conical shock wave and its reflection from the tube wall, (iii) supersonic combustion, and (iv) the expansion wave and its reflection, are modelled. The Taylor-Maccoll approach is adopted for modelling the flow over the cone of the projectile. Shock reflection is treated in accordance with wave-angle theory for flows over a wedge. Prandtl-Meyer analysis is used to model the expansion wave and its reflection. Steady one-dimensional flow with heat transfer, along with the Rayleigh line equation for perfect gases, is used to model supersonic combustion. A computer code is developed to compute the thrust produced by combustion of the gases. Ballistic parameters such as the thrust-pressure ratio and ballistic efficiency of the accelerator are evaluated; their maximum values are 0.032 and 0.068, respectively. The code indicates the possibility of achieving a high velocity of 7 km/s using a gaseous mixture of 2H2 + O2. The velocity range suitable for operation of the accelerator lies between 3.8 and 7.0 km/s. The maximum thrust value is 33721 N, which corresponds to a projectile velocity of 5 km/s.
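
    As background on the Rayleigh-line treatment of the supersonic combustion stage (standard compressible-flow relations for a calorically perfect gas, not equations quoted from the paper), the heat added per unit mass raises the stagnation temperature, and along a Rayleigh line the stagnation-temperature ratio depends on the Mach number alone:

        $$ q = c_p\,(T_{02} - T_{01}), \qquad \frac{T_0}{T_0^{*}} = \frac{(\gamma + 1)\,M^{2}\,\bigl[\,2 + (\gamma - 1)M^{2}\,\bigr]}{\bigl(1 + \gamma M^{2}\bigr)^{2}} $$

    For a given heat release from the 2H2 + O2 mixture, the exit Mach number, the pressure on the projectile base, and hence the thrust then follow directly.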

  2. Strategic Cognitive Sequencing: A Computational Cognitive Neuroscience Approach

    Seth A. Herd

    2013-01-01

    We address strategic cognitive sequencing, the “outer loop” of human cognition: how the brain decides what cognitive process to apply at a given moment to solve complex, multistep cognitive tasks. We argue that this topic has been neglected relative to its importance for systematic reasons but that recent work on how individual brain systems accomplish their computations has set the stage for productively addressing how brain regions coordinate over time to accomplish our most impressive thinking. We present four preliminary neural network models. The first addresses how the prefrontal cortex (PFC) and basal ganglia (BG) cooperate to perform trial-and-error learning of short sequences; the next, how several areas of PFC learn to make predictions of likely reward, and how this contributes to the BG making decisions at the level of strategies. The third model addresses how PFC, BG, parietal cortex, and hippocampus can work together to memorize sequences of cognitive actions from instruction (or “self-instruction”). The last shows how a constraint satisfaction process can find useful plans. The PFC maintains current and goal states and associates from both of these to find a “bridging” state, an abstract plan. We discuss how these processes could work together to produce strategic cognitive sequencing and discuss future directions in this area.

  3. Symbolic mathematical computing: orbital dynamics and application to accelerators

    Computer-assisted symbolic mathematical computation has become increasingly useful in applied mathematics. A brief introduction to such capabilities and some examples related to orbital dynamics and accelerator physics are presented. (author)

  4. Quality Function Deployment (QFD) House of Quality for Strategic Planning of Computer Security of SMEs

    Jorge A. Ruiz-Vanoye

    2013-01-01

    This article proposes to implement the Quality Function Deployment (QFD) House of Quality for strategic planning of computer security for Small and Medium Enterprises (SMEs). The House of Quality (HoQ) applied to the computer security of SMEs is a framework to convert the security needs of corporate computing into a set of specifications to improve computer security.
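
    The following is a minimal sketch of the core House of Quality computation, with illustrative needs, controls, and weights that are not taken from the paper: technical priorities are the importance-weighted sums of relationship strengths between security needs and candidate controls.

        # Minimal House of Quality priority calculation (illustrative data, not from the paper).
        # Rows: security needs with importance weights; columns: candidate technical controls.
        needs = {
            "protect customer data": 5,
            "ensure service availability": 3,
            "meet audit requirements": 4,
        }
        controls = ["encryption at rest", "offsite backups", "access logging"]
        # Relationship matrix: strength of each control's contribution to each need (9/3/1/0 scale).
        relationships = {
            "protect customer data":       {"encryption at rest": 9, "offsite backups": 1, "access logging": 3},
            "ensure service availability": {"encryption at rest": 0, "offsite backups": 9, "access logging": 1},
            "meet audit requirements":     {"encryption at rest": 3, "offsite backups": 1, "access logging": 9},
        }
        # Technical priority of a control = sum over needs of (importance * relationship strength).
        priorities = {c: sum(needs[n] * relationships[n][c] for n in needs) for c in controls}
        for control, score in sorted(priorities.items(), key=lambda kv: -kv[1]):
            print(f"{control}: {score}")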

  5. Advanced Computing Tools and Models for Accelerator Physics

    Ryne, Robert D.

    2008-06-11

    This paper is based on a transcript of my EPAC'08 presentation on advanced computing tools for accelerator physics. Following an introduction I present several examples, provide a history of the development of beam dynamics capabilities, and conclude with thoughts on the future of large scale computing in accelerator physics.

  6. Strategic Planning for the Computer Security: A Practice Case of an Electrical Research Institute

    Jorge A. Ruiz-Vanoye; Ocotlan Diaz-Parra; Ana Canepa Saénz; Barrera-Cámara, Ricardo A.; Alejandro Fuentes-Penna; Beatriz Bernabe-Loranca

    2014-01-01

    We present a practice case of strategic planning for computer security based on the concepts of strategic management of enterprise policy. The practice case concerns an Electric Research Institute of the Mexican Government. The Electric Research Institute is a public enterprise dedicated to innovation, technological development and applied scientific research, in order to develop technologies applicable to the electrical and oil indus...

  7. Community Petascale Project for Accelerator Science and Simulation: Advancing Computational Science for Future Accelerators and Accelerator Technologies

    Spentzouris, P.; /Fermilab; Cary, J.; /Tech-X, Boulder; McInnes, L.C.; /Argonne; Mori, W.; /UCLA; Ng, C.; /SLAC; Ng, E.; Ryne, R.; /LBL, Berkeley

    2011-11-14

    The design and performance optimization of particle accelerators are essential for the success of the DOE scientific program in the next decade. Particle accelerators are very complex systems whose accurate description involves a large number of degrees of freedom and requires the inclusion of many physics processes. Building on the success of the SciDAC-1 Accelerator Science and Technology project, the SciDAC-2 Community Petascale Project for Accelerator Science and Simulation (ComPASS) is developing a comprehensive set of interoperable components for beam dynamics, electromagnetics, electron cooling, and laser/plasma acceleration modelling. ComPASS is providing accelerator scientists the tools required to enable the necessary accelerator simulation paradigm shift from high-fidelity single physics process modeling (covered under SciDAC1) to high-fidelity multiphysics modeling. Our computational frameworks have been used to model the behavior of a large number of accelerators and accelerator R&D experiments, assisting both their design and performance optimization. As parallel computational applications, the ComPASS codes have been shown to make effective use of thousands of processors. ComPASS is in the first year of executing its plan to develop the next-generation HPC accelerator modeling tools. ComPASS aims to develop an integrated simulation environment that will utilize existing and new accelerator physics modules with petascale capabilities, by employing modern computing and solver technologies. The ComPASS vision is to deliver to accelerator scientists a virtual accelerator and virtual prototyping modeling environment, with the necessary multiphysics, multiscale capabilities. The plan for this development includes delivering accelerator modeling applications appropriate for each stage of the ComPASS software evolution. Such applications are already being used to address challenging problems in accelerator design and optimization. The ComPASS organization

  8. Computational studies and optimization of wakefield accelerators

    Laser- and particle beam-driven plasma wakefield accelerators produce accelerating fields thousands of times higher than radio-frequency accelerators, offering compactness and ultrafast bunches to extend the frontiers of high energy physics and to enable laboratory-scale radiation sources. Large-scale kinetic simulations provide essential understanding of accelerator physics to advance beam performance and stability and show and predict the physics behind recent demonstration of narrow energy spread bunches. Benchmarking between codes is establishing validity of the models used and, by testing new reduced models, is extending the reach of simulations to cover upcoming meter-scale multi-GeV experiments. This includes new models that exploit Lorentz boosted simulation frames to speed calculations. Simulations of experiments showed that recently demonstrated plasma gradient injection of electrons can be used as an injector to increase beam quality by orders of magnitude. Simulations are now also modeling accelerator stages of tens of GeV, staging of modules, and new positron sources to design next-generation experiments and to use in applications in high energy physics and light sources
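
    A brief note on the Lorentz-boosted-frame technique mentioned above (a standard scaling argument, not a result quoted from this record): in a frame moving with the wake at Lorentz factor $\gamma$, the plasma column is length-contracted, $L' = L/\gamma$, while the co-propagating laser wavelength is Doppler-stretched, $\lambda' = (1+\beta)\,\gamma\,\lambda$, so the disparity of scales that a simulation must resolve shrinks as

        $$ \frac{L'/\lambda'}{L/\lambda} \;=\; \frac{1}{(1+\beta)\,\gamma^{2}}, $$

    reducing the required number of cells and time steps by a factor of order $\gamma^{2}$.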

  9. Berkeley Lab Computing Sciences: Accelerating Scientific Discovery

    Hules, John A

    2009-01-01

    Scientists today rely on advances in computer science, mathematics, and computational science, as well as large-scale computing and networking facilities, to increase our understanding of ourselves, our planet, and our universe. Berkeley Lab's Computing Sciences organization researches, develops, and deploys new tools and technologies to meet these needs and to advance research in such areas as global climate change, combustion, fusion energy, nanotechnology, biology, and astrophysics.

  10. Berkeley Lab Computing Sciences: Accelerating Scientific Discovery

    Scientists today rely on advances in computer science, mathematics, and computational science, as well as large-scale computing and networking facilities, to increase our understanding of ourselves, our planet, and our universe. Berkeley Lab's Computing Sciences organization researches, develops, and deploys new tools and technologies to meet these needs and to advance research in such areas as global climate change, combustion, fusion energy, nanotechnology, biology, and astrophysics

  11. Software Accelerates Computing Time for Complex Math

    2014-01-01

    Ames Research Center awarded Newark, Delaware-based EM Photonics Inc. SBIR funding to utilize graphics processing unit (GPU) technology, traditionally used for computer video games, to develop high-performance computing software called CULA. The software gives users the ability to run complex algorithms on personal computers with greater speed. As a result of the NASA collaboration, the number of employees at the company has increased 10 percent.

  12. Scientific computing with multicore and accelerators

    Kurzak, Jakub; Dongarra, Jack

    2010-01-01

    Dense Linear Algebra: Implementing Matrix Multiplication on the Cell B.E. (Wesley Alvaro, Jakub Kurzak, and Jack Dongarra); Implementing Matrix Factorizations on the Cell B.E. (Jakub Kurzak and Jack Dongarra); Dense Linear Algebra for Hybrid GPU-Based Systems (Stanimire Tomov and Jack Dongarra); BLAS for GPUs (Rajib Nath, Stanimire Tomov, and Jack Dongarra). Sparse Linear Algebra: Sparse Matrix-Vector Multiplication on Multicore and Accelerators (Samuel Williams, Nathan B

  13. Accelerating Iterative Big Data Computing Through MPI

    梁帆; 鲁小亿

    2015-01-01

    Current popular systems, Hadoop and Spark, cannot achieve satisfactory performance because of the inefficient overlapping of computation and communication when running iterative big data applications. The pipeline of computing, data movement, and data management plays a key role in current distributed data computing systems. In this paper, we first analyze the overhead of the shuffle operation in Hadoop and Spark when running the PageRank workload, and then propose an event-driven pipeline and in-memory shuffle design with better overlapping of computation and communication, DataMPI-Iteration, an MPI-based library for iterative big data computing. Our performance evaluation shows DataMPI-Iteration can achieve 9X∼21X speedup over Apache Hadoop, and 2X∼3X speedup over Apache Spark for PageRank and K-means.
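
    The following is a minimal mpi4py sketch of the underlying idea, overlapping local computation with communication using non-blocking sends and receives; it illustrates the general pattern only and is not DataMPI-Iteration's actual API.

        # Minimal sketch of overlapping computation with communication via non-blocking MPI.
        # Run with, e.g.: mpiexec -n 2 python overlap.py
        from mpi4py import MPI
        import numpy as np

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()
        peer = 1 - rank                     # assumes exactly two ranks for simplicity

        local = np.random.rand(1_000_000)
        incoming = np.empty_like(local)

        for iteration in range(5):
            # Start exchanging this iteration's data with the peer without blocking.
            send_req = comm.Isend(local, dest=peer, tag=iteration)
            recv_req = comm.Irecv(incoming, source=peer, tag=iteration)

            # Do useful local work while the transfer is in flight.
            partial = np.sqrt(local).sum()

            # Wait for the communication to finish, then fold the remote data into the next state.
            MPI.Request.Waitall([send_req, recv_req])
            local = 0.5 * (local + incoming)

        if rank == 0:
            print("finished", iteration + 1, "overlapped iterations; last partial =", partial)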

  14. Community petascale project for accelerator science and simulation: advancing computational science for future accelerators and accelerator technologies

    The design and performance optimization of particle accelerators are essential for the success of the DOE scientific program in the next decade. Particle accelerators are very complex systems whose accurate description involves a large number of degrees of freedom and requires the inclusion of many physics processes. Building on the success of the SciDAC-1 Accelerator Science and Technology project, the SciDAC-2 Community Petascale Project for Accelerator Science and Simulation (ComPASS) is developing a comprehensive set of interoperable components for beam dynamics, electromagnetics, electron cooling, and laser/plasma acceleration modelling. ComPASS is providing accelerator scientists the tools required to enable the necessary accelerator simulation paradigm shift from high-fidelity single physics process modeling (covered under SciDAC1) to high-fidelity multiphysics modeling. Our computational frameworks have been used to model the behavior of a large number of accelerators and accelerator R and D experiments, assisting both their design and performance optimization. As parallel computational applications, the ComPASS codes have been shown to make effective use of thousands of processors

  15. Community Petascale Project for Accelerator Science and Simulation: Advancing Computational Science for Future Accelerators and Accelerator Technologies

    Spentzouris, Panagiotis; /Fermilab; Cary, John; /Tech-X, Boulder; Mcinnes, Lois Curfman; /Argonne; Mori, Warren; /UCLA; Ng, Cho; /SLAC; Ng, Esmond; Ryne, Robert; /LBL, Berkeley

    2008-07-01

    The design and performance optimization of particle accelerators is essential for the success of the DOE scientific program in the next decade. Particle accelerators are very complex systems whose accurate description involves a large number of degrees of freedom and requires the inclusion of many physics processes. Building on the success of the SciDAC1 Accelerator Science and Technology project, the SciDAC2 Community Petascale Project for Accelerator Science and Simulation (ComPASS) is developing a comprehensive set of interoperable components for beam dynamics, electromagnetics, electron cooling, and laser/plasma acceleration modeling. ComPASS is providing accelerator scientists the tools required to enable the necessary accelerator simulation paradigm shift from high-fidelity single physics process modeling (covered under SciDAC1) to high-fidelity multi-physics modeling. Our computational frameworks have been used to model the behavior of a large number of accelerators and accelerator R&D experiments, assisting both their design and performance optimization. As parallel computational applications, the ComPASS codes have been shown to make effective use of thousands of processors.

  16. Community Petascale Project for Accelerator Science and Simulation: Advancing Computational Science for Future Accelerators and Accelerator Technologies

    Spentzouris, Panagiotis; /Fermilab; Cary, John; /Tech-X, Boulder; Mcinnes, Lois Curfman; /Argonne; Mori, Warren; /UCLA; Ng, Cho; /SLAC; Ng, Esmond; Ryne, Robert; /LBL, Berkeley

    2011-10-21

    The design and performance optimization of particle accelerators are essential for the success of the DOE scientific program in the next decade. Particle accelerators are very complex systems whose accurate description involves a large number of degrees of freedom and requires the inclusion of many physics processes. Building on the success of the SciDAC-1 Accelerator Science and Technology project, the SciDAC-2 Community Petascale Project for Accelerator Science and Simulation (ComPASS) is developing a comprehensive set of interoperable components for beam dynamics, electromagnetics, electron cooling, and laser/plasma acceleration modelling. ComPASS is providing accelerator scientists the tools required to enable the necessary accelerator simulation paradigm shift from high-fidelity single physics process modeling (covered under SciDAC1) to high-fidelity multiphysics modeling. Our computational frameworks have been used to model the behavior of a large number of accelerators and accelerator R&D experiments, assisting both their design and performance optimization. As parallel computational applications, the ComPASS codes have been shown to make effective use of thousands of processors.

  17. Computer codes used in particle accelerator design: First edition

    This paper contains a listing of more than 150 programs that have been used in the design and analysis of accelerators. Each citation gives the person to contact, the classification of the computer code, publications describing the code, the computer and language it runs on, and a short description of the code. Codes are indexed by subject, person to contact, and code acronym

  18. (U) Computation acceleration using dynamic memory

    Hakel, Peter [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2014-10-24

    Many computational applications require the repeated use of quantities, whose calculations can be expensive. In order to speed up the overall execution of the program, it is often advantageous to replace computation with extra memory usage. In this approach, computed values are stored and then, when they are needed again, they are quickly retrieved from memory rather than being calculated again at great cost. Sometimes, however, the precise amount of memory needed to store such a collection is not known in advance, and only emerges in the course of running the calculation. One problem accompanying such a situation is wasted memory space in overdimensioned (and possibly sparse) arrays. Another issue is the overhead of copying existing values to a new, larger memory space, if the original allocation turns out to be insufficient. In order to handle these runtime problems, the programmer therefore has the extra task of addressing them in the code.
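
    As a minimal illustration of the trade-off described above (not the report's actual code): a dynamically growing store lets the program trade memory for computation without over-dimensioning an array up front or copying existing values when the store grows.

        # Minimal sketch of replacing repeated computation with dynamically allocated memory.
        # A dict grows on demand, so no array has to be over-dimensioned up front and no
        # existing values need to be copied when the store grows (illustrative only).
        import math

        _cache = {}  # grows dynamically as new arguments are seen

        def expensive(x):
            """Stand-in for a costly calculation whose results are worth storing."""
            if x not in _cache:
                _cache[x] = sum(math.sin(x * k) for k in range(1, 200_001))
            return _cache[x]

        # Repeated queries with recurring arguments hit the cache instead of recomputing.
        queries = [0.1, 0.2, 0.1, 0.3, 0.2, 0.1]
        total = sum(expensive(q) for q in queries)
        print(f"total = {total:.6f}, distinct values computed = {len(_cache)}")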

  19. Collaborative strategic board games as a site for distributed computational thinking

    Berland, Matthew; Lee, Victor R.

    2011-01-01

    This paper examines the idea that contemporary strategic board games represent an informal, interactional context in which complex computational thinking takes place. When games are collaborative – that is, a game requires that players work in joint pursuit of a shared goal – the computational thinking is easily observed as distributed across several participants. This raises the possibility that a focus on such board games is profitable for those who wish to understand computational thinkin...

  20. GPU-accelerated micromagnetic simulations using cloud computing

    Jermain, C. L.; Rowlands, G. E.; Buhrman, R. A.; Ralph, D. C.

    2016-03-01

    Highly parallel graphics processing units (GPUs) can improve the speed of micromagnetic simulations significantly as compared to conventional computing using central processing units (CPUs). We present a strategy for performing GPU-accelerated micromagnetic simulations by utilizing cost-effective GPU access offered by cloud computing services with an open-source Python-based program for running the MuMax3 micromagnetics code remotely. We analyze the scaling and cost benefits of using cloud computing for micromagnetics.
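
    The general pattern can be sketched as follows (hypothetical host name and file names, generic SSH/SCP commands; this is not the authors' program): copy a MuMax3 input script to a cloud GPU instance, run the simulation there, and retrieve the output. It assumes MuMax3 is installed on the remote machine, is invoked as `mumax3 <script>.mx3`, and writes results to a matching `.out` directory.

        # Minimal sketch of running a GPU micromagnetics job on a rented cloud instance
        # (hypothetical host/paths; not the paper's actual program).
        import subprocess

        HOST = "ubuntu@gpu-instance.example.com"   # hypothetical cloud GPU instance
        SCRIPT = "standard_problem4.mx3"           # hypothetical MuMax3 input script

        def run(cmd):
            """Run a local command, raising if it fails."""
            subprocess.run(cmd, check=True)

        run(["scp", SCRIPT, f"{HOST}:~/{SCRIPT}"])                              # upload input
        run(["ssh", HOST, f"mumax3 ~/{SCRIPT}"])                                # run on the cloud GPU
        run(["scp", "-r", f"{HOST}:~/{SCRIPT.replace('.mx3', '.out')}", "."])   # fetch results
        print("simulation output retrieved")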

  1. From experiment to design -- Fault characterization and detection in parallel computer systems using computational accelerators

    Yim, Keun Soo

    This dissertation summarizes experimental validation and co-design studies conducted to optimize the fault detection capabilities and overheads in hybrid computer systems (e.g., using CPUs and Graphics Processing Units, or GPUs), and consequently to improve the scalability of parallel computer systems using computational accelerators. The experimental validation studies were conducted to help us understand the failure characteristics of CPU-GPU hybrid computer systems under various types of hardware faults. The main characterization targets were faults that are difficult to detect and/or recover from, e.g., faults that cause long latency failures (Ch. 3), faults in dynamically allocated resources (Ch. 4), faults in GPUs (Ch. 5), faults in MPI programs (Ch. 6), and microarchitecture-level faults with specific timing features (Ch. 7). The co-design studies were based on the characterization results. One of the co-designed systems has a set of source-to-source translators that customize and strategically place error detectors in the source code of target GPU programs (Ch. 5). Another co-designed system uses an extension card to learn the normal behavioral and semantic execution patterns of message-passing processes executing on CPUs, and to detect abnormal behaviors of those parallel processes (Ch. 6). The third co-designed system is a co-processor that has a set of new instructions in order to support software-implemented fault detection techniques (Ch. 7). The work described in this dissertation gains more importance because heterogeneous processors have become an essential component of state-of-the-art supercomputers. GPUs were used in three of the five fastest supercomputers that were operating in 2011. Our work included comprehensive fault characterization studies in CPU-GPU hybrid computers. In CPUs, we monitored the target systems for a long period of time after injecting faults (a temporally comprehensive experiment), and injected faults into various types of
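
    One common form of the source-level detectors mentioned above is an invariant or range check inserted after a critical computation; the sketch below is a generic illustration of that idea (hypothetical names and bounds), not output of the dissertation's source-to-source translators.

        # Generic illustration of a source-level error detector: a range/invariant check
        # placed after an offloaded computation (not the dissertation's generated code).
        import numpy as np

        def detector(name, value, lo, hi):
            """Flag values outside the expected physical/numerical range as likely faults."""
            arr = np.asarray(value)
            if not np.all(np.isfinite(arr)) or arr.min() < lo or arr.max() > hi:
                raise RuntimeError(f"fault detected in {name}: value outside [{lo}, {hi}]")

        def kernel(x):
            """Stand-in for an offloaded (e.g., GPU) computation."""
            return np.exp(-x) * np.cos(x)

        x = np.linspace(0.0, 10.0, 1000)
        y = kernel(x)
        detector("kernel output", y, -1.0, 1.0)  # inserted check: output must stay within [-1, 1]
        print("no fault detected")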

  2. Computer-based training for particle accelerator personnel

    A continuing problem at many laboratories is the training of new operators in the arcane technology of particle accelerators. Presently most of this training occurs on the job, under a mentor. Such training is expensive, and while it provides operational experience, it is frequently lax in providing the physics background needed to truly understand accelerator systems. Using computers in a self-paced, interactive environment can be more effective in meeting this training need. copyright 1999 American Institute of Physics

  3. Computer-based training for particle accelerator personnel

    A continuing problem at many laboratories is the training of new operators in the arcane technology of particle accelerators. Presently most of this training occurs "on the job," under a mentor. Such training is expensive, and while it provides operational experience, it is frequently lax in providing the physics background needed to truly understand accelerator systems. Using computers in a self-paced, interactive environment can be more effective in meeting this training need

  4. Distributed computer controls for accelerator systems

    A distributed control system has been designed and installed at the Lawrence Livermore National Laboratory Multi-user Tandem Facility using an extremely modular approach in hardware and software. The two-tiered, geographically organized design allowed total system implementation within four months with a computer and instrumentation cost of approximately $100K. Since the system structure is modular, application to a variety of facilities is possible. Such a system allows rethinking of the operational style of the facilities, making possible highly reproducible and unattended operation. The impact of industry standards, i.e., UNIX, CAMAC, and IEEE-802.3, and the use of a graphics-oriented controls software suite allowed the efficient implementation of the system. The definition, design, implementation, operation and total system performance will be discussed. 3 refs

  5. Strategic Analysis of Autodesk and the Move to Cloud Computing

    Kewley, Kathleen

    2012-01-01

    This paper provides an analysis of the opportunity for Autodesk to move its core technology to a cloud delivery model. Cloud computing offers clients a number of advantages, such as lower costs for computer hardware, increased access to technology and greater flexibility. With the IT industry embracing this transition, software companies need to plan for future change and lead with innovative solutions. Autodesk is in a unique position to capitalize on this market shift, as it is the leader i...

  6. The impact of new computer technology on accelerator control

    This paper describes some recent developments in computing and stresses their application in accelerator control systems. Among the advances that promise to have a significant impact are (1) low cost scientific workstations; (2) the use of "windows", pointing devices and menus in a multi-tasking operating system; (3) high resolution large-screen graphics monitors; (4) new kinds of high bandwidth local area networks. The relevant features are related to a general accelerator control system. For example, this paper examines the implications of a computing environment which permits and encourages graphical manipulation of system components, rather than traditional access through the writing of programs or "canned" access via touch panels

  7. A Quantitative Study of the Relationship between Leadership Practice and Strategic Intentions to Use Cloud Computing

    Castillo, Alan F.

    2014-01-01

    The purpose of this quantitative correlational cross-sectional research study was to examine a theoretical model consisting of leadership practice, attitudes of business process outsourcing, and strategic intentions of leaders to use cloud computing and to examine the relationships between each of the variables respectively. This study…

  8. Construction of Linux computer system for acceleration equipment

    In November 2007, we replaced the HP-UX computer servers with Linux PCs to obtain better CPU and graphics performance. At SPring-8, the accelerator is operated through many GUI applications running on eighteen computer servers. The computer servers require high graphics performance to monitor several GUI programs at the same time and to display the long, rectangular program forms related to the linac. In this paper, the replacement of the computer system and the attendant software and hardware problems are described. (author)

  9. Computer simulations of compact toroid formation and acceleration

    Experiments to form, accelerate, and focus compact toroid plasmas will be performed on the 9.4 MJ SHIVA STAR fast capacitor bank at the Air Force Weapons Laboratory during the 1990s. The MARAUDER (magnetically accelerated rings to achieve ultrahigh directed energy and radiation) program is a research effort to accelerate magnetized plasma rings with masses between 0.1 and 1.0 mg to velocities above 10^8 cm/sec and energies above 1 MJ. Research on these high-velocity compact toroids may lead to development of very fast opening switches, high-power microwave sources, and an alternative path to inertial confinement fusion. Design of a compact toroid accelerator experiment on the SHIVA STAR capacitor bank is underway, and computer simulations with the 2 1/2-dimensional magnetohydrodynamics code, MACH2, have been performed to guide this endeavor. The compact toroids are produced in a magnetized coaxial plasma gun, and the acceleration will occur in a configuration similar to a coaxial railgun. Detailed calculations of formation and equilibration of a low beta magnetic force-free configuration (curl B = kB) have been performed with MACH2. In this paper, the authors discuss computer simulations of the focusing and acceleration of the toroid

  10. Neural computation and particle accelerators research, technology and applications

    D'Arras, Horace

    2010-01-01

    This book discusses neural computation, a network or circuit of biological neurons and, relatedly, particle accelerators, a scientific instrument which accelerates charged particles such as protons, electrons and deuterons. Accelerators have a very broad range of applications in many industrial fields, from high energy physics to medical isotope production. Nuclear technology is one of the fields discussed in this book. The development that has been reached by particle accelerators in energy and particle intensity has opened the possibility to a wide number of new applications in nuclear technology. This book reviews the applications in the nuclear energy field and the design features of high power neutron sources are explained. Surface treatments of niobium flat samples and superconducting radio frequency cavities by a new technique called gas cluster ion beam are also studied in detail, as well as the process of electropolishing. Furthermore, magnetic devices such as solenoids, dipoles and undulators, which ...

  11. Computational Tools for Accelerating Carbon Capture Process Development

    Miller, David; Sahinidis, N V; Cozad, A; Lee, A; Kim, H; Morinelly, J; Eslick, J; Yuan, Z

    2013-06-04

    This presentation reports the development of advanced computational tools to accelerate next-generation technology development. These tools are used to develop an optimized process using rigorous models. They include: Process Models; Simulation-Based Optimization; Optimized Process; Uncertainty Quantification; Algebraic Surrogate Models; and Superstructure Optimization (Determine Configuration).
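
    The following is a minimal sketch of the algebraic-surrogate idea listed above, using toy data and a simple polynomial basis (illustrative only, not the presentation's toolset): fit a cheap algebraic model to a handful of expensive simulation runs, then evaluate the surrogate inside optimization.

        # Minimal sketch of building an algebraic surrogate model from a few expensive
        # simulation runs (toy data and a cubic polynomial basis; illustrative only).
        import numpy as np

        def expensive_simulation(x):
            """Stand-in for a rigorous process model that is costly to evaluate."""
            return np.exp(-0.5 * x) + 0.1 * x**2

        # Sample the expensive model at a few design points.
        x_train = np.linspace(0.0, 4.0, 8)
        y_train = expensive_simulation(x_train)

        # Fit an algebraic surrogate: least-squares coefficients for 1, x, x^2, x^3.
        basis = np.vander(x_train, N=4, increasing=True)
        coeffs, *_ = np.linalg.lstsq(basis, y_train, rcond=None)

        def surrogate(x):
            return sum(c * x**k for k, c in enumerate(coeffs))

        # The cheap surrogate can now be evaluated thousands of times inside an optimizer.
        x_grid = np.linspace(0.0, 4.0, 1001)
        x_best = x_grid[np.argmin(surrogate(x_grid))]
        print(f"surrogate minimum near x = {x_best:.3f}, "
              f"true value there = {expensive_simulation(x_best):.4f}")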

  12. Learning to play like a human: case injected genetic algorithms for strategic computer gaming

    Louis, Sushil J.; Miles, Chris

    2006-05-01

    We use case injected genetic algorithms to learn how to competently play computer strategy games that involve long range planning across complex dynamics. Imperfect knowledge presented to players requires them to adapt their strategies in order to anticipate opponent moves. We focus on the problem of acquiring knowledge learned from human players; in particular, we learn general routing information from a human player in the context of a strike force planning game. By incorporating case injection into a genetic algorithm, we show methods for incorporating general knowledge elicited from human players into future plans, in effect allowing the GA to take important strategic elements from human play and merge those elements into its own strategic thinking. Results show that with an appropriate representation, case injection is effective at biasing the genetic algorithm toward producing plans that contain important strategic elements used by human players.
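
    The following is a generic sketch of case injection in a genetic algorithm (toy bit-string genomes and a hypothetical case library, not the authors' strike-force planner): every few generations, stored "cases" recorded from human play replace the weakest individuals, biasing the search toward human-like strategic elements.

        # Minimal sketch of a case-injected genetic algorithm (illustrative only).
        import random

        GENOME_LEN = 16
        CASE_LIBRARY = [[1] * 8 + [0] * 8, [0, 1] * 8]   # hypothetical human-derived cases

        def fitness(g):                      # toy objective: prefer genomes matching the first case
            return sum(a == b for a, b in zip(g, CASE_LIBRARY[0]))

        def mutate(g, rate=0.05):
            return [1 - b if random.random() < rate else b for b in g]

        def crossover(a, b):
            cut = random.randrange(1, GENOME_LEN)
            return a[:cut] + b[cut:]

        population = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(30)]
        for gen in range(40):
            population.sort(key=fitness, reverse=True)
            parents = population[:10]
            population = [mutate(crossover(random.choice(parents), random.choice(parents)))
                          for _ in range(30)]
            if gen % 5 == 0:                 # periodic case injection replaces the worst individuals
                population[-len(CASE_LIBRARY):] = [list(c) for c in CASE_LIBRARY]

        print("best fitness:", fitness(max(population, key=fitness)))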

  13. Creating a strategic plan for configuration management using computer aided software engineering (CASE) tools

    This paper provides guidance in the definition, documentation, measurement, enhancement of processes, and validation of a strategic plan for configuration management (CM). The approach and methodology used in establishing a strategic plan is the same for any enterprise, including the Department of Energy (DOE), commercial nuclear plants, the Department of Defense (DOD), or large industrial complexes. The principles and techniques presented are used world wide by some of the largest corporations. The authors used industry knowledge and the areas of their current employment to illustrate and provide examples. Developing a strategic configuration and information management plan for DOE Idaho Field Office (DOE-ID) facilities is discussed in this paper. A good knowledge of CM principles is the key to successful strategic planning. This paper will describe and define CM elements, and discuss how CM integrates the facility's physical configuration, design basis, and documentation. The strategic plan does not need the support of a computer aided software engineering (CASE) tool. However, the use of the CASE tool provides a methodology for consistency in approach, graphics, and database capability combined to form an encyclopedia and a method of presentation that is easily understood and aids the process of reengineering. CASE tools have much more capability than those stated above. Some examples are supporting a joint application development group (JAD) to prepare a software functional specification document and, if necessary, provide the capability to automatically generate software application code. This paper briefly discusses characteristics and capabilities of two CASE tools that use different methodologies to generate similar deliverables

  14. Accelerating Climate and Weather Simulations through Hybrid Computing

    Zhou, Shujia; Cruz, Carlos; Duffy, Daniel; Tucker, Robert; Purcell, Mark

    2011-01-01

    Unconventional multi- and many-core processors (e.g. IBM (R) Cell B.E.(TM) and NVIDIA (R) GPU) have emerged as effective accelerators in trial climate and weather simulations. Yet these climate and weather models typically run on parallel computers with conventional processors (e.g. Intel, AMD, and IBM) using Message Passing Interface. To address challenges involved in efficiently and easily connecting accelerators to parallel computers, we investigated using IBM's Dynamic Application Virtualization (TM) (IBM DAV) software in a prototype hybrid computing system with representative climate and weather model components. The hybrid system comprises two Intel blades and two IBM QS22 Cell B.E. blades, connected with both InfiniBand(R) (IB) and 1-Gigabit Ethernet. The system significantly accelerates a solar radiation model component by offloading compute-intensive calculations to the Cell blades. Systematic tests show that IBM DAV can seamlessly offload compute-intensive calculations from Intel blades to Cell B.E. blades in a scalable, load-balanced manner. However, noticeable communication overhead was observed, mainly due to IP over the IB protocol. Full utilization of IB Sockets Direct Protocol and the lower latency production version of IBM DAV will reduce this overhead.

  15. Accelerating Neuroimage Registration through Parallel Computation of Similarity Metric.

    Yun-Gang Luo

    Neuroimage registration is crucial for brain morphometric analysis and treatment efficacy evaluation. However, existing advanced registration algorithms such as FLIRT and ANTs are not efficient enough for clinical use. In this paper, a GPU implementation of FLIRT with the correlation ratio (CR) as the similarity metric and a GPU accelerated correlation coefficient (CC) calculation for the symmetric diffeomorphic registration of ANTs have been developed. The comparison with their corresponding original tools shows that our accelerated algorithms can greatly outperform the original algorithms in terms of computational efficiency. This paper demonstrates the great potential of applying these registration tools in clinical applications.
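
    For reference, the correlation coefficient (CC) similarity metric that is parallelized in this work can be written down in a few lines; the sketch below uses plain NumPy (on a GPU, the element-wise products and sums become parallel reductions):

        # Correlation coefficient (CC) between a fixed and a moving image.
        # Plain NumPy sketch of the similarity metric; GPU versions parallelize the reductions.
        import numpy as np

        def correlation_coefficient(fixed, moving):
            f = fixed.ravel().astype(np.float64)
            m = moving.ravel().astype(np.float64)
            f -= f.mean()
            m -= m.mean()
            return float(f @ m / np.sqrt((f @ f) * (m @ m)))

        # Toy example: a shifted copy of an image correlates less than an identical copy.
        rng = np.random.default_rng(0)
        img = rng.random((64, 64))
        print(correlation_coefficient(img, img))                 # 1.0
        print(correlation_coefficient(img, np.roll(img, 3, 0)))  # < 1.0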

  16. Quantum computing accelerator I/O : LDRD 52750 final report

    In a superposition of quantum states, a bit can be in both the states '0' and '1' at the same time. This feature of the quantum bit or qubit has no parallel in classical systems. Currently, quantum computers consisting of 4 to 7 qubits in a 'quantum computing register' have been built. Innovative algorithms suited to quantum computing are now beginning to emerge, applicable to sorting and cryptanalysis, and other applications. A framework for overcoming slightly inaccurate quantum gate interactions and for causing quantum states to survive interactions with surrounding environment is emerging, called quantum error correction. Thus there is the potential for rapid advances in this field. Although quantum information processing can be applied to secure communication links (quantum cryptography) and to crack conventional cryptosystems, the first few computing applications will likely involve a 'quantum computing accelerator' similar to a 'floating point arithmetic accelerator' interfaced to a conventional Von Neumann computer architecture. This research is to develop a roadmap for applying Sandia's capabilities to the solution of some of the problems associated with maintaining quantum information, and with getting data into and out of such a 'quantum computing accelerator'. We propose to focus this work on 'quantum I/O technologies' by applying quantum optics on semiconductor nanostructures to leverage Sandia's expertise in semiconductor microelectronic/photonic fabrication techniques, as well as its expertise in information theory, processing, and algorithms. The work will be guided by understanding of practical requirements of computing and communication architectures. This effort will incorporate ongoing collaboration between 9000, 6000 and 1000 and between junior and senior personnel. Follow-on work to fabricate and evaluate appropriate experimental nano/microstructures will be proposed as a result of this work

  17. Quantum computing accelerator I/O : LDRD 52750 final report.

    Schroeppel, Richard Crabtree; Modine, Normand Arthur; Ganti, Anand; Pierson, Lyndon George; Tigges, Christopher P.

    2003-12-01

    In a superposition of quantum states, a bit can be in both the states '0' and '1' at the same time. This feature of the quantum bit or qubit has no parallel in classical systems. Currently, quantum computers consisting of 4 to 7 qubits in a 'quantum computing register' have been built. Innovative algorithms suited to quantum computing are now beginning to emerge, applicable to sorting and cryptanalysis, and other applications. A framework for overcoming slightly inaccurate quantum gate interactions and for causing quantum states to survive interactions with the surrounding environment is emerging, called quantum error correction. Thus there is the potential for rapid advances in this field. Although quantum information processing can be applied to secure communication links (quantum cryptography) and to crack conventional cryptosystems, the first few computing applications will likely involve a 'quantum computing accelerator' similar to a 'floating point arithmetic accelerator' interfaced to a conventional Von Neumann computer architecture. This research is to develop a roadmap for applying Sandia's capabilities to the solution of some of the problems associated with maintaining quantum information, and with getting data into and out of such a 'quantum computing accelerator'. We propose to focus this work on 'quantum I/O technologies' by applying quantum optics on semiconductor nanostructures to leverage Sandia's expertise in semiconductor microelectronic/photonic fabrication techniques, as well as its expertise in information theory, processing, and algorithms. The work will be guided by understanding of practical requirements of computing and communication architectures. This effort will incorporate ongoing collaboration between 9000, 6000 and 1000 and between junior and senior personnel. Follow-on work to fabricate and evaluate appropriate experimental nano/microstructures will be proposed as a result of this work.
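
    As a toy illustration of the superposition property described above (and nothing more: the record itself concerns quantum I/O hardware, not simulation), the following state-vector sketch shows a single qubit placed in an equal superposition of '0' and '1' and the resulting measurement probabilities.

```python
import numpy as np

# Basis states |0> and |1> as state vectors.
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# A Hadamard gate puts a qubit into an equal superposition of |0> and |1>.
H = np.array([[1.0, 1.0],
              [1.0, -1.0]]) / np.sqrt(2.0)

state = H @ ket0                    # (|0> + |1>) / sqrt(2)
probabilities = np.abs(state) ** 2  # Born rule: measurement statistics

print(state)          # [0.7071..., 0.7071...]
print(probabilities)  # [0.5, 0.5] -- '0' and '1' are equally likely
```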

  18. COMPASS, the COMmunity Petascale project for Accelerator Science and Simulation, a broad computational accelerator physics initiative

    Accelerators are the largest and most costly scientific instruments of the Department of Energy, with uses across a broad range of science, including colliders for particle physics and nuclear science and light sources and neutron sources for materials studies. COMPASS, the Community Petascale Project for Accelerator Science and Simulation, is a broad, four-office (HEP, NP, BES, ASCR) effort to develop computational tools for the prediction and performance enhancement of accelerators. The tools being developed can be used to predict the dynamics of beams in the presence of optical elements and space charge forces, the calculation of electromagnetic modes and wake fields of cavities, the cooling induced by comoving beams, and the acceleration of beams by intense fields in plasmas generated by beams or lasers. In SciDAC-1, the computational tools had multiple successes in predicting the dynamics of beams and beam generation. In SciDAC-2 these tools will be petascale enabled to allow the inclusion of an unprecedented level of physics for detailed prediction

  19. COMPASS, the COMmunity Petascale Project for Accelerator Science and Simulation, a broad computational accelerator physics initiative

    J.R. Cary; P. Spentzouris; J. Amundson; L. McInnes; M. Borland; B. Mustapha; B. Norris; P. Ostroumov; Y. Wang; W. Fischer; A. Fedotov; I. Ben-Zvi; R. Ryne; E. Esarey; C. Geddes; J. Qiang; E. Ng; S. Li; C. Ng; R. Lee; L. Merminga; H. Wang; D.L. Bruhwiler; D. Dechow; P. Mullowney; P. Messmer; C. Nieter; S. Ovtchinnikov; K. Paul; P. Stoltz; D. Wade-Stein; W.B. Mori; V. Decyk; C.K. Huang; W. Lu; M. Tzoufras; F. Tsung; M. Zhou; G.R. Werner; T. Antonsen; T. Katsouleas

    2007-06-01

    Accelerators are the largest and most costly scientific instruments of the Department of Energy, with uses across a broad range of science, including colliders for particle physics and nuclear science and light sources and neutron sources for materials studies. COMPASS, the Community Petascale Project for Accelerator Science and Simulation, is a broad, four-office (HEP, NP, BES, ASCR) effort to develop computational tools for the prediction and performance enhancement of accelerators. The tools being developed can be used to predict the dynamics of beams in the presence of optical elements and space charge forces, the calculation of electromagnetic modes and wake fields of cavities, the cooling induced by comoving beams, and the acceleration of beams by intense fields in plasmas generated by beams or lasers. In SciDAC-1, the computational tools had multiple successes in predicting the dynamics of beams and beam generation. In SciDAC-2 these tools will be petascale enabled to allow the inclusion of an unprecedented level of physics for detailed prediction.

  20. COMPASS, the COMmunity Petascale project for Accelerator Science and Simulation, a broad computational accelerator physics initiative

    Cary, J.R.; Spentzouris, P.; Amundson, J.; McInnes, L.; Borland, M.; Mustapha, B.; Ostroumov, P.; Wang, Y.; Fischer, W.; Fedotov, A.; Ben-Zvi, I.; Ryne, R.; Esarey, E.; Geddes, C.; Qiang, J.; Ng, E.; Li, S.; Ng, C.; Lee, R.; Merminga, L.; Wang, H.; Bruhwiler, D.L.; Dechow, D.; Mullowney, P.; Messmer, P.; Nieter, C.; Ovtchinnikov, S.; Paul, K.; Stoltz, P.; Wade-Stein, D.; Mori, W.B.; Decyk, V.; Huang, C.K.; Lu, W.; Tzoufras, M.; Tsung, F.; Zhou, M.; Werner, G.R.; Antonsen, T.; Katsouleas, T.; Morris, B.

    2007-07-16

    Accelerators are the largest and most costly scientific instruments of the Department of Energy, with uses across a broad range of science, including colliders for particle physics and nuclear science and light sources and neutron sources for materials studies. COMPASS, the Community Petascale Project for Accelerator Science and Simulation, is a broad, four-office (HEP, NP, BES, ASCR) effort to develop computational tools for the prediction and performance enhancement of accelerators. The tools being developed can be used to predict the dynamics of beams in the presence of optical elements and space charge forces, the calculation of electromagnetic modes and wake fields of cavities, the cooling induced by comoving beams, and the acceleration of beams by intense fields in plasmas generated by beams or lasers. In SciDAC-1, the computational tools had multiple successes in predicting the dynamics of beams and beam generation. In SciDAC-2 these tools will be petascale enabled to allow the inclusion of an unprecedented level of physics for detailed prediction.

  1. COMPASS, the COMmunity Petascale Project for Accelerator Science And Simulation, a Broad Computational Accelerator Physics Initiative

    Cary, J.R.; /Tech-X, Boulder /Colorado U.; Spentzouris, P.; Amundson, J.; /Fermilab; McInnes, L.; Borland, M.; Mustapha, B.; Norris, B.; Ostroumov, P.; Wang, Y.; /Argonne; Fischer, W.; Fedotov, A.; Ben-Zvi, I.; /Brookhaven; Ryne, R.; Esarey, E.; Geddes, C.; Qiang, J.; Ng, E.; Li, S.; /LBL, Berkeley; Ng, C.; Lee, R.; /SLAC; Merminga, L.; /Jefferson Lab /Tech-X, Boulder /UCLA /Colorado U. /Maryland U. /Southern California U.

    2007-11-09

    Accelerators are the largest and most costly scientific instruments of the Department of Energy, with uses across a broad range of science, including colliders for particle physics and nuclear science and light sources and neutron sources for materials studies. COMPASS, the Community Petascale Project for Accelerator Science and Simulation, is a broad, four-office (HEP, NP, BES, ASCR) effort to develop computational tools for the prediction and performance enhancement of accelerators. The tools being developed can be used to predict the dynamics of beams in the presence of optical elements and space charge forces, the calculation of electromagnetic modes and wake fields of cavities, the cooling induced by comoving beams, and the acceleration of beams by intense fields in plasmas generated by beams or lasers. In SciDAC-1, the computational tools had multiple successes in predicting the dynamics of beams and beam generation. In SciDAC-2 these tools will be petascale enabled to allow the inclusion of an unprecedented level of physics for detailed prediction.
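
    The COMPASS tools model beam dynamics with space charge, wake fields, and plasma acceleration at high fidelity; as a much simpler, hedged sketch of the kind of single-particle dynamics such codes start from, the following example tracks a particle through a hypothetical cell built from textbook drift and thin-lens quadrupole transfer matrices.

```python
import numpy as np

def drift(L):
    """2x2 transfer matrix of a field-free drift of length L (metres)."""
    return np.array([[1.0, L],
                     [0.0, 1.0]])

def thin_quad(f):
    """2x2 transfer matrix of a thin-lens quadrupole of focal length f (metres)."""
    return np.array([[1.0, 0.0],
                     [-1.0 / f, 1.0]])

# A hypothetical FODO-like cell: drift, focusing quad, drift, defocusing quad.
# Matrices act right-to-left, so the rightmost element is traversed first.
cell = thin_quad(-2.0) @ drift(1.0) @ thin_quad(2.0) @ drift(1.0)

# Track a particle with 1 mm offset and 0.1 mrad slope through 10 cells.
x = np.array([1.0e-3, 1.0e-4])   # [position (m), angle (rad)]
for _ in range(10):
    x = cell @ x
print(x)   # remains bounded: the hypothetical cell is optically stable
```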

  2. Personal computer control system for small size tandem accelerator

    Because an analysis apparatus based on a tandem accelerator has many control parameters, the control panel carries so many control parts that it becomes complex and difficult to operate. To remedy these faults, a control system using a personal computer was designed and developed for a control panel previously built mainly from conventional hardware parts. Its main characteristics are as follows: 1) The control panel becomes simpler and more compact, because using a personal computer for the man-machine interface reduces the hardware on the panel surface to the minimum required. 2) Control becomes faster, because the accelerator system is divided into blocks, a local station of the sequencer network is installed at each block, and sequence control is closed within each block. 3) Expandability is greater, because a new beamline can be added with little change to the present hardware, simply by inserting a sequencer local station into the network and updating the computer's display image. And 4) the control system is cheaper, because a personal computer requires a smaller investment and is easier to program. (G.K.)

  3. Sharing of computer codes and data for accelerator shield modelling

    The Radiation Shielding Information Center (RSIC) and the NEA Data Bank (DB) acquire, verify and distribute computer programs and data sets which are needed by the communities working in nuclear research and applications. Programs and data are shared through cooperative arrangements at the international level in order to avoid uneconomical duplication of effort. These activities respond to needs emerging from national programmes and expressed by the users. This paper addresses explicitly the field of accelerator shield modelling and the available cross section data and computer programs required for the purpose. It suggests that international cooperation between the centres and participants in this field should be strengthened. Relevant computer programs are being benchmarked against experiments, and the Centers are promoting an activity for collecting them in a computerized data base for easy access. (authors). 1 ref

  4. Towards full automation of accelerators through computer control

    Gamble, J; Kemp, D; Keyser, R; Koutchouk, Jean-Pierre; Martucci, P P; Tausch, Lothar A; Vos, L

    1980-01-01

    The computer control system of the Intersecting Storage Rings (ISR) at CERN has always laid emphasis on two particular operational aspects, the first being the reproducibility of machine conditions and the second that of giving the operators the possibility to work in terms of machine parameters such as the tune. Already certain phases of the operation are optimized by the control system, whilst others are automated with a minimum of manual intervention. The authors describe this present control system with emphasis on the existing automated facilities and the features of the control system which make it possible. It then discusses the steps needed to completely automate the operational procedure of accelerators. (7 refs).

  5. Accelerating MATLAB with GPU computing a primer with examples

    Suh, Jung W

    2013-01-01

    Beyond simulation and algorithm development, many developers increasingly use MATLAB even for product deployment in computationally heavy fields. This often demands that MATLAB codes run faster by leveraging the distributed parallelism of Graphics Processing Units (GPUs). While MATLAB successfully provides high-level functions as a simulation tool for rapid prototyping, the underlying details and knowledge needed for utilizing GPUs make MATLAB users hesitate to step into it. Accelerating MATLAB with GPUs offers a primer on bridging this gap. Starting with the basics, setting up MATLAB for

  6. Distance Computation Between Non-Holonomic Motions with Constant Accelerations

    Enrique J. Bernabeu

    2013-09-01

    Full Text Available A method for computing the distance between two moving robots or between a mobile robot and a dynamic obstacle with linear or arc‐like motions and with constant accelerations is presented in this paper. This distance is obtained without stepping or discretizing the motions of the robots or obstacles. The robots and obstacles are modelled by convex hulls. This technique obtains the future instant in time when two moving objects will be at their minimum translational distance, i.e., at their minimum separation or maximum penetration (if they will collide). This distance and the future instant in time are computed in parallel. This method is intended to be run each time new information from the world is received and, consequently, it can be used for generating collision‐free trajectories for non‐holonomic mobile robots.
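
    The paper's method handles convex hulls; as a greatly simplified, hedged sketch of the same idea (no time stepping), the following example treats each object as a point with constant acceleration, so the squared separation is a quartic in time and the closest approach is found from the roots of a cubic. The motions and interval below are hypothetical.

```python
import numpy as np

def closest_approach(p0, v0, a0, p1, v1, a1, t_max):
    """Time and distance of closest approach of two point objects moving
    with constant accelerations over the interval [0, t_max].

    r(t) = dp + dv*t + 0.5*da*t**2 makes |r(t)|^2 a quartic in t, so its
    minimum lies at a root of a cubic (or at an interval endpoint).
    """
    dp, dv, da = p1 - p0, v1 - v0, a1 - a0

    # |r(t)|^2 = c4 t^4 + c3 t^3 + c2 t^2 + c1 t + c0
    c4 = 0.25 * da @ da
    c3 = dv @ da
    c2 = dv @ dv + dp @ da
    c1 = 2.0 * dp @ dv
    c0 = dp @ dp

    # Stationary points: 4 c4 t^3 + 3 c3 t^2 + 2 c2 t + c1 = 0
    roots = np.roots([4.0 * c4, 3.0 * c3, 2.0 * c2, c1])
    candidates = [0.0, t_max] + [r.real for r in roots
                                 if abs(r.imag) < 1e-12 and 0.0 <= r.real <= t_max]
    dist2 = lambda t: ((c4 * t + c3) * t + c2) * t * t + c1 * t + c0
    t_best = min(candidates, key=dist2)
    return t_best, float(np.sqrt(max(dist2(t_best), 0.0)))

# Two hypothetical objects approaching head-on with a 1 m lateral offset.
t, d = closest_approach(np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 0.0]),
                        np.array([5.0, 1.0]), np.array([-1.0, 0.0]), np.array([0.0, 0.0]),
                        t_max=10.0)
print(t, d)   # closest at t = 2.5 s, minimum separation 1.0 m
```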

  7. Accelerators and Beams, multimedia computer-based training in accelerator physics

    We are developing a set of computer-based tutorials on accelerators and charged-particle beams under an SBIR grant from the DOE. These self-paced, interactive tutorials, available for Macintosh and Windows platforms, use multimedia techniques to enhance the user's rate of learning and length of retention of the material. They integrate interactive On-Screen Laboratories, hypertext, line drawings, photographs, two- and three-dimensional animations, video, and sound. They target a broad audience, from undergraduates or technicians to professionals. Presently, three modules have been published (Vectors, Forces, and Motion), a fourth (Dipole Magnets) has been submitted for review, and three more exist in prototype form (Quadrupoles, Matrix Transport, and Properties of Charged-Particle Beams). Participants in the poster session will have the opportunity to try out these modules on a laptop computer. copyright 1999 American Institute of Physics

  8. ''Accelerators and Beams,'' multimedia computer-based training in accelerator physics

    We are developing a set of computer-based tutorials on accelerators and charged-particle beams under an SBIR grant from the DOE. These self-paced, interactive tutorials, available for Macintosh and Windows platforms, use multimedia techniques to enhance the user's rate of learning and length of retention of the material. They integrate interactive ''On-Screen Laboratories,'' hypertext, line drawings, photographs, two- and three-dimensional animations, video, and sound. They target a broad audience, from undergraduates or technicians to professionals. Presently, three modules have been published (Vectors, Forces, and Motion), a fourth (Dipole Magnets) has been submitted for review, and three more exist in prototype form (Quadrupoles, Matrix Transport, and Properties of Charged-Particle Beams). Participants in the poster session will have the opportunity to try out these modules on a laptop computer

  9. Computer network for on-line control system of the IHEP ring accelerator

    A block diagram of the computer network for the IHEP ring accelerator control system is presented and substantiated. The interface card for the ES-1010 computer, which operates on four channels simultaneously, is described. The system software for the computer network is considered.

  10. Computer codes and methods for simulating accelerator driven systems

    A large set of computer codes and associated data libraries have been developed by nuclear research and industry over the past half century. A large number of them are in the public domain and can be obtained under agreed conditions from different Information Centres. The areas covered comprise: basic nuclear data and models, reactor spectra and cell calculations, static and dynamic reactor analysis, criticality, radiation shielding, dosimetry and material damage, fuel behaviour, safety and hazard analysis, heat conduction and fluid flow in reactor systems, spent fuel and waste management (handling, transportation, and storage), economics of fuel cycles, impact on the environment of nuclear activities, etc. These codes and models have been developed mostly for critical systems used for research or power generation and other technological applications. Many of them have not been designed for accelerator driven systems (ADS), but with competent use, they can be used for studying such systems or can form the basis for adapting existing methods to the specific needs of ADS's. The present paper describes the types of methods, codes and associated data available and their role in the applications. It provides Web addresses for facilitating searches for such tools. Some indications are given on the effects of inappropriate or 'blind' use of existing tools on ADS studies. Reference is made to available experimental data that can be used for validating the use of these methods. Finally, some international activities linked to the different computational aspects are described briefly. (author)

  11. Accelerating Battery Design Using Computer-Aided Engineering Tools: Preprint

    Pesaran, A.; Heon, G. H.; Smith, K.

    2011-01-01

    Computer-aided engineering (CAE) is a proven pathway, especially in the automotive industry, to improve performance by resolving the relevant physics in complex systems, shortening the product development design cycle, thus reducing cost, and providing an efficient way to evaluate parameters for robust designs. Academic models include the relevant physics details, but neglect engineering complexities. Industry models include the relevant macroscopic geometry and system conditions, but simplify the fundamental physics too much. Most of the CAE battery tools for in-house use are custom model codes and require expert users. There is a need to make these battery modeling and design tools more accessible to end users such as battery developers, pack integrators, and vehicle makers. Developing integrated and physics-based CAE battery tools can reduce the design, build, test, break, re-design, re-build, and re-test cycle and help lower costs. NREL has been involved in developing various models to predict the thermal and electrochemical performance of large-format cells and has used commercial three-dimensional finite-element analysis and computational fluid dynamics to study battery pack thermal issues. These NREL cell and pack design tools can be integrated to help support the automotive industry and to accelerate battery design.

  12. How can endemic countries accelerate lymphatic filariasis elimination? An analytical review to identify strategic and programmatic interventions

    Chandrakant Lahariya & Shailendra S. Tomar

    2011-03-01

    Full Text Available Lymphatic filariasis (LF) is endemic in 81 countries in the world, and a number of these countries have targeted LF for elimination. This review of literature and analysis was conducted to identify additional and sustainable strategies to accelerate LF elimination from endemic countries. This review noted that adverse events due to mass drug administration (MDA) of diethyl carbamazine (DEC) tablets, poor knowledge and information about LF amongst health workers & community members, and limited focus on information, education & communication (IEC) activities and interpersonal communication are the major barriers to LF elimination. New approaches to increase compliance with DEC tablets (including exploring the possibility of DEC fortification of salt), targeted education programmes for physicians and health workers, and IEC material and interpersonal communication to improve the knowledge of the community are immediately required. There is a renewed and pressing need to conduct operational research, evolve sustainable and institutional mechanisms for the education of physicians and health workers, ensure the quality of training on MDA, strengthen IEC delivery mechanisms, implement internal and external monitoring of MDA activities, provide sufficient funding in a timely manner, and improve political and programmatic leadership. It is also time that lessons from other elimination programmes are utilized to accelerate targeted LF elimination from the endemic countries.

  13. Modern computer networks and distributed intelligence in accelerator controls

    Appropriate hardware and software network protocols are surveyed for accelerator control environments. Accelerator controls network topologies are discussed with respect to the following criteria: vertical versus horizontal and distributed versus centralized. Decision-making considerations are provided for accelerator network architecture specification. Current trends and implementations at Fermilab are discussed

  14. Computation of Normal Conducting and Superconducting Linear Accelerator (LINAC) Availabilities

    A brief study was conducted to roughly estimate the availability of a superconducting (SC) linear accelerator (LINAC) as compared to a normal conducting (NC) one. Potentially, SC radio frequency cavities have substantial reserve capability, which allows them to compensate for failed cavities, thus increasing the availability of the overall LINAC. In the initial SC design, there is a klystron and associated equipment (e.g., power supply) for every cavity of an SC LINAC. On the other hand, a single klystron may service eight cavities in the NC LINAC. This study modeled that portion of the Spallation Neutron Source LINAC (between 200 and 1,000 MeV) that is initially proposed for conversion from NC to SC technology. Equipment common to both designs was not evaluated. Tabular fault-tree calculations and computer event-driven simulation (EDS) calculations were performed. The estimated gain in availability when using the SC option ranges from 3 to 13%, depending on the equipment configuration and spatial separation requirements. The availability of an NC LINAC is estimated to be 83%. Tabular fault-tree calculations and computer EDS modeling gave the same 83% answer to within one-tenth of a percent for the NC case. Tabular fault-tree calculations of the availability of the SC LINAC (where a klystron and associated equipment drive a single cavity) give 97%, whereas EDS computer calculations give 96%, a disagreement of only 1%. This result may be somewhat fortuitous because of limitations of tabular fault-tree calculations. For example, tabular fault-tree calculations cannot handle spatial effects (separation distance between failures), equipment network configurations, and some failure combinations. EDS computer modeling of various equipment configurations was examined. When there is a klystron and associated equipment for every cavity and adjacent-cavity failure can be tolerated, the SC availability was estimated to be 96%. SC availability decreased as
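
    The study itself relies on fault trees and event-driven simulation; purely as a back-of-the-envelope sketch of why per-cavity RF stations with reserve capability raise availability, the following example compares a chain with no redundancy against a k-out-of-n arrangement. The unit availability and station count are hypothetical, not taken from the record.

```python
from math import comb

def series_availability(a_unit, n_units):
    """All n units must be up (no redundancy): availability of a chain."""
    return a_unit ** n_units

def k_out_of_n_availability(a_unit, n_units, k_required):
    """At least k_required of n identical, independent units must be up."""
    return sum(comb(n_units, i) * a_unit**i * (1 - a_unit)**(n_units - i)
               for i in range(k_required, n_units + 1))

a_klystron = 0.995   # hypothetical availability of one klystron + power supply
n = 80               # hypothetical number of RF stations

print(series_availability(a_klystron, n))            # ~0.67: any failure stops the beam
print(k_out_of_n_availability(a_klystron, n, n - 2)) # ~0.99: up to two failures tolerated
```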

  15. Recent Improvements to CHEF, a Framework for Accelerator Computations

    CHEF is a body of software dedicated to accelerator related computations. It consists of a hierarchical set of libraries and a stand-alone application based on the latter. The implementation language is C++; the code makes extensive use of templates and modern idioms such as iterators, smart pointers and generalized function objects. CHEF has been described in a few contributions at previous conferences. In this paper, we provide an overview and discuss recent improvements. Formally, CHEF refers to two distinct but related things: (1) a set of class libraries; and (2) a stand-alone application based on these libraries. The application makes use of and exposes a subset of the capabilities provided by the libraries. CHEF has its ancestry in efforts started in the early nineties. At that time, A. Dragt, E. Forest [2] and others showed that ring dynamics can be formulated in a way that puts maps, rather than Hamiltonians, into a central role. Automatic differentiation (AD) techniques, which were just coming of age, were a natural fit in a context where maps are represented by their Taylor approximations. The initial vision, which CHEF carried over, was to develop a code that (1) concurrently supports conventional tracking, linear and non-linear map-based techniques, (2) avoids 'hardwired' approximations that are not under user control, and (3) provides building blocks for applications. C++ was adopted as the implementation language because of its comprehensive support for operator overloading and the equal status it confers to built-in and user-defined data types. It should be mentioned that acceptance of AD techniques in accelerator science owes much to the pioneering work of Berz [1], who implemented--in fortran--the first production quality AD engine (the foundation for the code COSY). Nowadays other engines are available, but few are native C++ implementations. Although AD engines and map based techniques are making their way into more traditional codes e.g. [5], it is also
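
    CHEF's AD machinery is a full C++ template library; as a hedged, minimal illustration of the underlying idea only (forward-mode automatic differentiation via dual numbers, not CHEF's actual classes), the following Python sketch propagates a value and its derivative through ordinary arithmetic.

```python
class Dual:
    """Minimal forward-mode automatic differentiation via dual numbers.

    A Dual carries a value and the derivative of that value with respect
    to one chosen input; arithmetic propagates both simultaneously.
    """
    def __init__(self, value, deriv=0.0):
        self.value, self.deriv = value, deriv

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value + other.value, self.deriv + other.deriv)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value * other.value,
                    self.deriv * other.value + self.value * other.deriv)

    __rmul__ = __mul__

def f(x):
    return 3 * x * x + 2 * x + 1   # f'(x) = 6x + 2

x = Dual(2.0, 1.0)   # seed derivative d(x)/d(x) = 1
y = f(x)
print(y.value, y.deriv)   # 17.0 14.0  (f(2) = 17, f'(2) = 6*2 + 2 = 14)
```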

  16. The computer based patient record: a strategic issue in process innovation.

    Sicotte, C; Denis, J L; Lehoux, P

    1998-12-01

    Reengineering of the workplace through Information Technology is an important strategic issue for today's hospitals. The computer-based patient record (CPR) is one technology that has the potential to profoundly modify the work routines of the care unit. This study investigates a CPR project aimed at allowing physicians and nurses to work in a completely electronic environment. The focus of our analysis was the patient nursing care process. The rationale behind the introduction of this technology was based on its alleged capability to both enhance quality of care and control costs. This is done by better managing the flow of information within the organization and by introducing mechanisms such as the timeless and spaceless organization of the work place, de-localization, and automation of work processes. The present case study analyzed the implementation of a large CPR project ($45 million U.S.) conducted in four hospitals in joint venture with two computer firms. The computerized system had to be withdrawn because of boycotts from both the medical and nursing personnel. User-resistance was not the problem. Despite its failure, this project was a good opportunity to understand better the intricate complexity of introducing technology in professional work where the usefulness of information is short lived and where it is difficult to predetermine the relevancy of information. Profound misconceptions in achieving a tighter fit (synchronization) between care processes and information processes were the main problems. PMID:9871877

  17. Computational means of the new control system for the U-70 accelerating complex

    Computational means of the new control system (CS) of the U-70 accelerating complex are described. The latter includes the LU-30 linear accelerator, the U-15 booster ring injector, the U-70 main accelerator, and systems for fast and slow beam extraction. The new integrated CS is based on the standard three-level architecture. Control of the CS network is realized with a special computer, which also fulfils the security functions.

  18. Reactor and/or accelerator: general remarks on strategic considerations in sourcing/producing radiopharmaceuticals and radiotracers for the Philippines

    The most important sources of radionuclides in the world are particle accelerators and nuclear reactors. Since the late 1940's many radiotracers and radiopharmaceuticals have been innovated and conceived, designed, produced and applied in important industrial and clinical/ biomedical settings. For example in the health area, reactor-produced radionuclides have become indispensable for diagnostic imaging involving, in its most recent and advanced development, radioimmunoscintigraphy, which exploits the exquisite ligand-specificity of monoclonal antibodies, reagents which in turn are the products of advances in biotechnology. Thus far, one of the most indispensable radiopharmaceuticals has been 99mTc, which is usually obtained as a daughter decay product of 99Mo. In January 1991, some questions about the stability of the worldwide commercial supply of 99Mo became highlighted when the major commercial world producer of 99Mo, Nordion International, shut down its facilities temporarily in Canada due to contamination in its main reactor building (see for instance relevant newsbrief in J. Nuclear Medicine (1991): 'Industry agrees to join DOE study of domestic moly-99 production'). With the above background, my remarks will attempt to open discussions on strategic considerations relevant to questions of 'self reliance' in radiotracers/radiopharmaceutical production in the Philippines. For instance, the relevant question of sourcing local radionuclide needs from a fully functioning multipurpose cyclotron facility within the country that will then supply the needs of the local industrial, biomedical (including research) and health sectors; and possibly, eventually acquiring the capability to export to nearby countries longer-lived radiotracers and radiopharmaceuticals

  19. Computer codes for particle accelerator design and analysis: A compendium. Second edition

    The design of the next generation of high-energy accelerators will probably be done as an international collaborative efforts and it would make sense to establish, either formally or informally, an international center for accelerator codes with branches for maintenance, distribution, and consultation at strategically located accelerator centers around the world. This arrangement could have at least three beneficial effects. It would cut down duplication of effort, provide long-term support for the best codes, and provide a stimulating atmosphere for the evolution of new codes. It does not take much foresight to see that the natural evolution of accelerator design codes is toward the development of so-called Expert Systems, systems capable of taking design specifications of future accelerators and producing specifications for optimized magnetic transport and acceleration components, making a layout, and giving a fairly impartial cost estimate. Such an expert program would use present-day programs such as TRANSPORT, POISSON, and SUPERFISH as tools in the optimization process. Such a program would also serve to codify the experience of two generations of accelerator designers before it is lost as these designers reach retirement age. This document describes 203 codes that originate from 10 countries and are currently in use. The authors feel that this compendium will contribute to the dialogue supporting the international collaborative effort that is taking place in the field of accelerator physics today

  20. Computer Based Dose Control System on Linear Accelerator

    Accelerator technology has been used for radiotherapy. Dokter Karyadi Hospital in Semarang uses an electron or X-ray linear accelerator (Linac) for cancer therapy. One of the control parameters of the linear accelerator is the dose rate, i.e. the particle current or photon rate delivered to the target. The dose rate in the Linac has been controlled by adjusting the repetition rate of the anode pulse train of the electron source. At present the control is still a proportional control. To enhance the quality of the control result (minimal steady-state error, speed, and stability), a dose control system has been designed using the PID (Proportional Integral Differential) control algorithm and the derived transfer function of the controlled object. The PID control algorithm is implemented by giving as input the dose error (the difference between the output dose and the dose rate set point). The output of the control system is used to correct the repetition rate set point of the pulse train of the electron source anode. (author)
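
    As a generic, hedged sketch of the PID control loop described above (hypothetical gains and a toy first-order process standing in for the Linac dose-rate dynamics, not the hospital's implementation), consider the following discrete PID update driving a simulated dose rate toward a set point.

```python
def pid_step(error, state, kp, ki, kd, dt):
    """One update of a discrete PID controller.

    state = (integral, previous_error); returns (output, new_state).
    """
    integral, prev_error = state
    integral += error * dt
    derivative = (error - prev_error) / dt
    output = kp * error + ki * integral + kd * derivative
    return output, (integral, error)

# Toy closed loop: the controller drives a first-order "dose rate" process
# toward the set point by adjusting the repetition-rate command.
setpoint, dose_rate = 1.0, 0.0
state, dt = (0.0, 0.0), 0.1
for step in range(50):
    error = setpoint - dose_rate
    command, state = pid_step(error, state, kp=2.0, ki=1.0, kd=0.05, dt=dt)
    dose_rate += dt * (command - dose_rate)   # hypothetical plant dynamics
print(round(dose_rate, 3))   # approaches the set point of 1.0
```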

  1. Cloud Computing and Validated Learning for Accelerating Innovation in IoT

    Suciu, George; Todoran, Gyorgy; Vulpe, Alexandru; Suciu, Victor; Bulca, Cristina; Cheveresan, Romulus

    2015-01-01

    Innovation in Internet of Things (IoT) requires more than just creation of technology and use of cloud computing or big data platforms. It requires accelerated commercialization or aptly called go-to-market processes. To successfully accelerate, companies need a new type of product development, the so-called validated learning process.…

  2. Modeling Strategic Use of Human Computer Interfaces with Novel Hidden Markov Models

    Laura Jane Mariano

    2015-07-01

    Full Text Available Immersive software tools are virtual environments designed to give their users an augmented view of real-world data and ways of manipulating that data. As virtual environments, every action users make while interacting with these tools can be carefully logged, as can the state of the software and the information it presents to the user, giving these actions context. This data provides a high-resolution lens through which dynamic cognitive and behavioral processes can be viewed. In this report, we describe new methods for the analysis and interpretation of such data, utilizing a novel implementation of the Beta Process Hidden Markov Model (BP-HMM for analysis of software activity logs. We further report the results of a preliminary study designed to establish the validity of our modeling approach. A group of 20 participants were asked to play a simple computer game, instrumented to log every interaction with the interface. Participants had no previous experience with the game’s functionality or rules, so the activity logs collected during their naïve interactions capture patterns of exploratory behavior and skill acquisition as they attempted to learn the rules of the game. Pre- and post-task questionnaires probed for self-reported styles of problem solving, as well as task engagement, difficulty, and workload. We jointly modeled the activity log sequences collected from all participants using the BP-HMM approach, identifying a global library of activity patterns representative of the collective behavior of all the participants. Analyses show systematic relationships between both pre- and post-task questionnaires, self-reported approaches to analytic problem solving, and metrics extracted from the BP-HMM decomposition. Overall, we find that this novel approach to decomposing unstructured behavioral data within software environments provides a sensible means for understanding how users learn to integrate software functionality for strategic
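
    The BP-HMM used in the study is a nonparametric extension that shares activity patterns across users; as a hedged illustration of the basic hidden Markov machinery underneath, the following sketch evaluates the likelihood of an observed action sequence under an ordinary discrete HMM with hypothetical "interaction mode" parameters.

```python
import numpy as np

def forward_log_likelihood(obs, start, trans, emit):
    """Log-likelihood of an observation sequence under a discrete HMM
    (standard forward algorithm with per-step normalization)."""
    alpha = start * emit[:, obs[0]]
    log_like = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ trans) * emit[:, o]
        log_like += np.log(alpha.sum())
        alpha /= alpha.sum()
    return log_like

# Two hypothetical latent "interaction modes" emitting three UI action types.
start = np.array([0.6, 0.4])
trans = np.array([[0.9, 0.1],
                  [0.2, 0.8]])
emit  = np.array([[0.7, 0.2, 0.1],    # mode 0: mostly action type 0
                  [0.1, 0.3, 0.6]])   # mode 1: mostly action type 2
obs = [0, 0, 1, 2, 2, 2, 0]
print(forward_log_likelihood(obs, start, trans, emit))
```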

  3. Electromagnetic field computation and optimization in accelerator dipole magnets. Doctoral thesis

    Ikaeheimo, J.

    1996-03-01

    Contents: Introduction; Dipole magnetic in particle accelerators; Field computation; Optimization of the straight section of the coil; Optimization of the end section of the coil; Adaptive mesh generation; Conclusions.

  4. Accelerating Spaceborne SAR Imaging Using Multiple CPU/GPU Deep Collaborative Computing

    Fan Zhang; Guojun Li; Wei Li; Wei Hu; Yuxin Hu

    2016-01-01

    With the development of synthetic aperture radar (SAR) technologies in recent years, the huge amount of remote sensing data brings challenges for real-time imaging processing. Therefore, high performance computing (HPC) methods have been presented to accelerate SAR imaging, especially the GPU based methods. In the classical GPU based imaging algorithm, GPU is employed to accelerate image processing by massive parallel computing, and CPU is only used to perform the auxiliary work such as data ...

  5. Combined Compute and Storage: Configurable Memristor Arrays to Accelerate Search

    Liu, Yang; Dwyer, Chris; Lebeck, Alvin R.

    2016-01-01

    Emerging technologies present opportunities for system designers to meet the challenges presented by competing trends of big data analytics and limitations on CMOS scaling. Specifically, memristors are an emerging high-density technology where the individual memristors can be used as storage or to perform computation. The voltage applied across a memristor determines its behavior (storage vs. compute), which enables a configurable memristor substrate that can embed computation with storage. T...

  6. Acceleration of matrix element computations for precision measurements

    Brandt, Oleg; Wang, Michael H L S; Ye, Zhenyu

    2014-01-01

    The matrix element technique provides a superior statistical sensitivity for precision measurements of important parameters at hadron colliders, such as the mass of the top quark or the cross section for the production of Higgs bosons. The main practical limitation of the technique is its high computational demand. Using the concrete example of the top quark mass, we present two approaches to reduce the computation time of the technique by two orders of magnitude. First, we utilize low-discrepancy sequences for numerical Monte Carlo integration in conjunction with a dedicated estimator of numerical uncertainty, a novelty in the context of the matrix element technique. Second, we utilize a new approach that factorizes the overall jet energy scale from the matrix element computation, a novelty in the context of top quark mass measurements. The utilization of low-discrepancy sequences is of particular general interest, as it is universally applicable to Monte Carlo integration, and independent of the computing e...

  7. Acceleration of matrix element computations for precision measurements

    The matrix element technique provides a superior statistical sensitivity for precision measurements of important parameters at hadron colliders, such as the mass of the top quark or the cross-section for the production of Higgs bosons. The main practical limitation of the technique is its high computational demand. Using the concrete example of the top quark mass, we present two approaches to reduce the computation time of the technique by a factor of 90. First, we utilize low-discrepancy sequences for numerical Monte Carlo integration in conjunction with a dedicated estimator of numerical uncertainty, a novelty in the context of the matrix element technique. Second, we utilize a new approach that factorizes the overall jet energy scale from the matrix element computation, a novelty in the context of top quark mass measurements. The utilization of low-discrepancy sequences is of particular general interest, as it is universally applicable to Monte Carlo integration, and independent of the computing environment

  8. Acceleration of matrix element computations for precision measurements

    Brandt, O., E-mail: obrandt@fnal.gov [II. Physikalisches Institut, Georg-August-Universität Göttingen, Göttingen (Germany); Gutierrez, G.; Wang, M.H.L.S. [Fermi National Accelerator Laboratory, Batavia, IL 60510 (United States); Ye, Z. [University of Illinois at Chicago, Chicago, IL 60607 (United States)

    2015-03-01

    The matrix element technique provides a superior statistical sensitivity for precision measurements of important parameters at hadron colliders, such as the mass of the top quark or the cross-section for the production of Higgs bosons. The main practical limitation of the technique is its high computational demand. Using the concrete example of the top quark mass, we present two approaches to reduce the computation time of the technique by a factor of 90. First, we utilize low-discrepancy sequences for numerical Monte Carlo integration in conjunction with a dedicated estimator of numerical uncertainty, a novelty in the context of the matrix element technique. Second, we utilize a new approach that factorizes the overall jet energy scale from the matrix element computation, a novelty in the context of top quark mass measurements. The utilization of low-discrepancy sequences is of particular general interest, as it is universally applicable to Monte Carlo integration, and independent of the computing environment.
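
    The first speed-up described above replaces pseudo-random integration points with low-discrepancy sequences. As a hedged sketch of that idea only (a hand-rolled 2-D Halton sequence and a trivial test integrand, not the matrix element integrand), the following example shows the typically smaller quadrature error of the quasi-random points.

```python
import numpy as np

def halton(index, base):
    """The index-th element (1-based) of the van der Corput sequence in `base`."""
    result, f = 0.0, 1.0
    while index > 0:
        f /= base
        result += f * (index % base)
        index //= base
    return result

def estimate(points):
    """Monte Carlo estimate of the integral of x*y over the unit square (exact: 0.25)."""
    return np.mean(points[:, 0] * points[:, 1])

n = 4096
pseudo = np.random.default_rng(1).random((n, 2))
quasi = np.array([[halton(i, 2), halton(i, 3)] for i in range(1, n + 1)])

print(abs(estimate(pseudo) - 0.25))   # pseudo-random error, O(1/sqrt(n))
print(abs(estimate(quasi) - 0.25))    # Halton error, typically much smaller
```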

  9. Lua(Jit) for computing accelerator beam physics

    CERN. Geneva

    2016-01-01

    As mentioned in the 2nd developers meeting, I would like to open the debate with a special presentation on another language - Lua, and a tremendous technology - LuaJit. Lua is much less known at CERN, but it is very simple, much smaller than Python and its JIT is extremely performant. The language is a dynamic scripting language easy to learn and easy to embedded in applications. I will show how we use it in HPC for accelerator beam physics as a replacement for C, C++, Fortran and Python, with some benchmarks versus Python, PyPy4 and C/C++.

  10. Computer Simulation in Mass Emergency and Disaster Response: An Evaluation of Its Effectiveness as a Tool for Demonstrating Strategic Competency in Emergency Department Medical Responders

    O'Reilly, Daniel J.

    2011-01-01

    This study examined the capability of computer simulation as a tool for assessing the strategic competency of emergency department nurses as they responded to authentically computer simulated biohazard-exposed patient case studies. Thirty registered nurses from a large, urban hospital completed a series of computer-simulated case studies of…

  11. Convergence acceleration and shock fitting for transonic aerodynamics computations

    Hafez, M. M.; Cheng, H. K.

    1975-01-01

    Two problems in computational fluid dynamics are studied in the context of transonic small-disturbance theory - namely, (1) how to speed up the convergence for currently available iterative procedures, and (2) how a shock-fitting method may be adapted to existing relaxation procedures with minimal alterations in computer programming and storage requirements. The paper contributes to a clarification of error analyses for sequence transformations based on the power method (including also the nonlinear transforms of Aitken, Shanks, and Wilkinson), and to developing a cyclic iterative procedure applying the transformations. Examples testing the procedure for a model Dirichlet problem and for a transonic airfoil problem show that savings in computer time by a factor of three to five are generally possible, depending on accuracy requirements and the particular iterative procedure used.
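
    As a hedged, self-contained illustration of the sequence-transformation idea mentioned above (Aitken's delta-squared transform applied to a slowly converging fixed-point iteration, not the paper's relaxation solver), consider the following sketch.

```python
import math

def aitken(seq):
    """Aitken's delta-squared transform of a convergent sequence.

    For a linearly converging sequence x_n -> L, the transformed sequence
    typically converges to L considerably faster.
    """
    return [x0 - (x1 - x0) ** 2 / (x2 - 2.0 * x1 + x0)
            for x0, x1, x2 in zip(seq, seq[1:], seq[2:])]

# Slowly converging fixed-point iteration x_{n+1} = cos(x_n), limit ~0.739085.
x, seq = 1.0, []
for _ in range(10):
    seq.append(x)
    x = math.cos(x)

limit = 0.7390851332151607
print(abs(seq[-1] - limit))            # plain iteration error
print(abs(aitken(seq)[-1] - limit))    # accelerated error, orders of magnitude smaller
```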

  12. Computational algorithms for multiphase magnetohydrodynamics and applications to accelerator targets

    R.V. Samulyak

    2010-01-01

    Full Text Available An interface-tracking numerical algorithm for the simulation of magnetohydrodynamic multiphase/free surface flows in the low-magnetic-Reynolds-number approximation of (Samulyak R., Du J., Glimm J., Xu Z., J. Comp. Phys., 2007, 226, 1532) is described. The algorithm has been implemented in the multi-physics code FronTier and used for the simulation of MHD processes in liquids and weakly ionized plasmas. In this paper, numerical simulations of a liquid mercury jet entering a strong and nonuniform magnetic field and interacting with a powerful proton pulse have been performed and compared with experiments. Such a mercury jet is a prototype target for the proposed Muon Collider/Neutrino Factory, a future particle accelerator. Simulations demonstrate the elliptic distortion of the mercury jet as it enters the magnetic solenoid at a small angle to the magnetic axis, jet-surface instabilities (filamentation) induced by the interaction with proton pulses, and the stabilizing effect of the magnetic field.

  13. Computational Science Guides and Accelerates Hydrogen Research (Fact Sheet)

    2010-12-01

    This fact sheet describes NREL's accomplishments in using computational science to enhance hydrogen-related research and development in areas such as storage and photobiology. Work was performed by NREL's Chemical and Materials Science Center and Biosciences Center.

  14. Computer and network applications in beam measurement system of accelerator

    The applications of computers and their networks in the beam measurement system for the Beijing Electron Positron Collider (BEPC) are described. These include the instrumentation interfaces and the hardware and software implementations for the network connection between microcomputers and VAX series minicomputers. The communication program using Windows sockets, a network programming interface for Microsoft Windows, is also described.

  15. Computation of Eigenmodes in Long and Complex Accelerating Structures by Means of Concatenation Strategies

    Fligsen, T; Van Rienen, U

    2014-01-01

    The computation of eigenmodes for complex accelerating structures is a challenging and important task for the design and operation of particle accelerators. Discretizing long and complex structures to determine their eigenmodes leads to demanding computations typically performed on supercomputers. This contribution presents an application example of a method to compute eigenmodes, and other parameters derived from these eigenmodes, for long and complex structures using standard workstation computers. This is accomplished by decomposing the complex structure into several single segments. In the next step, the electromagnetic properties of the segments are described in terms of a compact state-space model. Subsequently, the state-space models of the single segments are concatenated to form the full structure. The results of direct calculations are compared with results obtained by the concatenation scheme in terms of computational time and accuracy.

  16. Computer Architecture with Associative Processor Replacing Last Level Cache and SIMD Accelerator

    Yavits, Leonid; Morad, Amir; Ginosar, Ran

    2013-01-01

    This study presents a novel computer architecture where a last level cache and a SIMD accelerator are replaced by an Associative Processor. Associative Processor combines data storage and data processing and provides parallel computational capabilities and data memory at the same time. An analytic performance model of the new computer architecture is introduced. Comparative analysis supported by simulation shows that this novel architecture may outperform a conventional architecture comprisin...

  17. Accelerator

    The invention claims equipment for stabilizing the position of the front covers of the accelerator chamber in cyclic accelerators which significantly increases accelerator reliability. For stabilizing, it uses hydraulic cushions placed between the electromagnet pole pieces and the front chamber covers. The top and the bottom cushions are hydraulically connected. The cushions are disconnected and removed from the hydraulic line using valves. (J.P.)

  18. Computer control of large accelerators design concepts and methods

    Unlike most of the specialities treated in this volume, control system design is still an art, not a science. These lectures are an attempt to produce a primer for prospective practitioners of this art. A large modern accelerator requires a comprehensive control system for commissioning, machine studies and day-to-day operation. Faced with the requirement to design a control system for such a machine, the control system architect has a bewildering array of technical devices and techniques at his disposal, and it is our aim in the following chapters to lead him through the characteristics of the problems he will have to face and the practical alternatives available for solving them. We emphasize good system architecture using commercially available hardware and software components, but in addition we discuss the actual control strategies which are to be implemented since it is at the point of deciding what facilities shall be available that the complexity of the control system and its cost are implicitly decided. 19 references

  19. Advanced Computational Models for Accelerator-Driven Systems

    In the nuclear engineering scientific community, Accelerator Driven Systems (ADSs) have been proposed and investigated for the transmutation of nuclear waste, especially plutonium and minor actinides. These fuels have a quite low effective delayed neutron fraction relative to uranium fuel, therefore the subcriticality of the core offers a unique safety feature with respect to critical reactors. The intrinsic safety of ADS allows the elimination of the operational control rods, hence the reactivity excess during burnup can be managed by the intensity of the proton beam, fuel shuffling, and eventually by burnable poisons. However, the intrinsic safety of a subcritical system does not guarantee that ADSs are immune from severe accidents (core melting), since the decay heat of an ADS is very similar to the one of a critical system. Normally, ADSs operate with an effective multiplication factor between 0.98 and 0.92, which means that the spallation neutron source contributes little to the neutron population. In addition, for 1 GeV incident protons and lead-bismuth target, about 50% of the spallation neutrons has energy below 1 MeV and only 15% of spallation neutrons has energies above 3 MeV. In the light of these remarks, the transmutation performances of ADS are very close to those of critical reactors.

  20. Computer control of large accelerators design concepts and methods

    Beck, F.; Gormley, M.

    1984-05-01

    Unlike most of the specialities treated in this volume, control system design is still an art, not a science. These lectures are an attempt to produce a primer for prospective practitioners of this art. A large modern accelerator requires a comprehensive control system for commissioning, machine studies and day-to-day operation. Faced with the requirement to design a control system for such a machine, the control system architect has a bewildering array of technical devices and techniques at his disposal, and it is our aim in the following chapters to lead him through the characteristics of the problems he will have to face and the practical alternatives available for solving them. We emphasize good system architecture using commercially available hardware and software components, but in addition we discuss the actual control strategies which are to be implemented since it is at the point of deciding what facilities shall be available that the complexity of the control system and its cost are implicitly decided. 19 references.

  1. Examination of the relationship between Sustainable Competitive Advantage and Strategic Leadership in the Computer Industry: based on the evaluation and analysis of Dell and HP

    Guan, Yueyao

    2008-01-01

    Abstract Based on the evaluation and analysis of the performances of two predominant companies in the computer industry, Hewlett-Packard and Dell Inc., this paper has found that a right strategic move at the right time is crucial but makes only half of the story. In order to achieve long-term success, strategic leaders must make sure that compatibility between the corporate strategy and the competitive advantage exists before a major strategic change is made.

  2. Modern hardware architectures accelerate porous media flow computations

    Kulczewski, Michal; Kurowski, Krzysztof; Kierzynka, Michal; Dohnalik, Marek; Kaczmarczyk, Jan; Borujeni, Ali Takbiri

    2012-05-01

    Investigation of rock properties, particularly porosity and permeability, which determine the transport characteristics of the medium, is crucial to reservoir engineering. Nowadays, micro-tomography (micro-CT) methods allow a wealth of petro-physical properties to be obtained. The micro-CT method facilitates visualization of pore structures and acquisition of the total porosity factor, determined by sticking together 2D slices of the scanned rock and applying a proper absorption cut-off point. Proper segmentation of the pore representation in 3D is important for solving the permeability of porous media. This factor is nowadays determined by means of Computational Fluid Dynamics (CFD), a popular method to analyze problems related to fluid flows, taking advantage of numerical methods and constantly growing computing power. The recent advent of novel multi-, many-core and graphics processing unit (GPU) hardware architectures allows scientists to benefit even more from parallel processing and built-in new features. The high level of parallel scalability offers both a decrease in time-to-solution and greater accuracy - top factors in reservoir engineering. This paper aims to present research results related to fluid flow simulations, particularly solving the total porosity and permeability of porous media, taking advantage of modern hardware architectures. In our approach total porosity is calculated by means of general-purpose computing on multiple GPUs. This application sticks together 2D slices of scanned rock and, by means of a marching tetrahedra algorithm, creates a 3D representation of the pores and calculates the total porosity. Experimental results are compared with data obtained via other popular methods, including Nuclear Magnetic Resonance (NMR), helium porosity and nitrogen permeability tests. Then CFD simulations are performed on a large-scale high performance hardware architecture to solve the flow and permeability of porous media. In our experiments we used Lattice Boltzmann
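
    The total-porosity step described above reduces, at its simplest, to segmenting the grey-scale volume with an absorption cut-off and counting pore voxels. The following NumPy sketch shows only that simplified step on a synthetic volume (hypothetical intensities and cut-off); the actual workflow in the record uses multiple GPUs and a marching tetrahedra surface reconstruction.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for a stack of grey-scale micro-CT slices (z, y, x).
volume = rng.normal(loc=120.0, scale=30.0, size=(128, 128, 128))

# Segmentation: voxels below the absorption cut-off are treated as pore space.
cutoff = 80.0
pores = volume < cutoff

# Total porosity = pore-voxel fraction of the sample volume.
total_porosity = pores.sum() / pores.size
print(f"total porosity: {total_porosity:.3f}")
```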

  3. On the computation of electromagnetic fields excited by relativistic bunches of charged particles in accelerating structures

    A numerical method is described for the calculation of electromagnetic fields excited by arbitrarily shaped bunches of charged particles travelling through accelerating structures with cylindrical symmetry. The fields are computed by numerical integration of Maxwell's equations in the time domain. The computer program based on this method enables the user to calculate transient electromagnetic fields as well as the energy gain of particles inside the bunch. Some results are given for the LEP cavity and a pillbox. (orig.)

  4. Accelerating unstructured finite volume computations on field-programmable gate arrays

    Nagy, Zoltan; Nemes, Csaba; Hiba, Antal; Csik, Arpad; Kiss, Andras; Ruszinko, Miklos; Szolgay, Peter

    2014-01-01

    Accurate simulation of various physical processes on digital computers requires huge computing performance, therefore accelerating these scientific and engineering applications has great importance. The density of programmable logic devices doubles every 18 months according to Moore's Law. On recent devices around one hundred double precision floating-point adders and multipliers can be implemented. In the paper an FPGA based framework is described to efficiently utilize this huge compu...

  5. Enterprise-process: computer-based application for obtaining a process-organisation matrix during strategic information system planning

    José Alirio Rondón

    2010-04-01

    Full Text Available A lot of material has been published about strategic information system planning (SISP) methodologies. These methods are designed to help information system planners to integrate their strategies with organisational strategies. Classic business system planning for strategical alignment (BSP/SA) theory stands out because it provides information systems with a reactive role regarding an organisation's objectives and strategy. BSP/SA has been described in terms of phases and the specific tasks within them. This work was aimed at presenting a computer-based application automating one of the most important tasks in BSP/SA methodology (the process-organisation matrix). This matrix allows storing information about the levels of present responsibilities in positions and processes. Automating this task has facilitated students' analysis of the process-organisation matrix during SISP workshops forming part of the Systems Management course (Systems Engineering, Universidad Nacional de Colombia). Improved results have thus arisen from such workshops. The present work aims to motivate software development for supporting SISP tasks.

  6. Proposing a Strategic Framework for Distributed Manufacturing Execution System Using Cloud Computing

    Shiva Khalili Gheidari

    2013-07-01

    Full Text Available This paper introduces a strategic framework that uses service-oriented architecture to design a distributed MES over the cloud. In this study, the main structure of the framework is defined in terms of a series of modules that communicate with each other by using a design pattern called mediator. The focus of the framework is on the main module, which handles distributed orders together with the other modules, and the paper finally suggests the benefits of using the cloud in comparison with previous architectures. The main structure of the framework (the mediator) and the benefit of focusing on the main module by using the cloud should be stressed more; also, the aim and the results of comparing this method with previous architectures, whether in quality or quantity, are not described.
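
    As a hedged, minimal sketch of the mediator design pattern the framework relies on for inter-module communication (module names and messages below are hypothetical, not the framework's actual API), consider the following example.

```python
class Mediator:
    """Routes messages between registered modules so that modules never
    reference each other directly (the mediator design pattern)."""
    def __init__(self):
        self._modules = {}

    def register(self, name, module):
        self._modules[name] = module
        module.mediator = self

    def send(self, sender, target, message):
        self._modules[target].receive(sender, message)

class Module:
    def __init__(self, name):
        self.name, self.mediator = name, None

    def receive(self, sender, message):
        print(f"{self.name} received from {sender}: {message}")

# Hypothetical MES modules: order handling and a remote production site.
mediator = Mediator()
mediator.register("orders", Module("orders"))
mediator.register("site_A", Module("site_A"))
mediator.send("orders", "site_A", "dispatch work order #42")
```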

  7. Acceleration of the matrix multiplication of Radiance three phase daylighting simulations with parallel computing on heterogeneous hardware of personal computer

    Zuo, Wangda [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); McNeil, Andrew [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Wetter, Michael [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Lee, Eleanor S. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2013-05-23

    Building designers are increasingly relying on complex fenestration systems to reduce energy consumed for lighting and HVAC in low energy buildings. Radiance, a lighting simulation program, has been used to conduct daylighting simulations for complex fenestration systems. Depending on the configuration, the simulation can take hours or even days using a personal computer. This paper describes how to accelerate the matrix multiplication portion of a Radiance three-phase daylight simulation by conducting parallel computing on the heterogeneous hardware of a personal computer. The algorithm was optimized and the computational part was implemented in parallel using OpenCL. The speed of the new approach was evaluated using various daylighting simulation cases on a multicore central processing unit and a graphics processing unit. Based on the measurements and analysis of the time usage for the Radiance daylighting simulation, further speedups can be achieved by using fast I/O devices and storing the data in a binary format.
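    The matrix-multiplication step being accelerated is the three-phase chain i = V·T·D·s (view, transmission and daylight matrices applied to a sky vector). The sketch below, with hypothetical matrix sizes, evaluates the chain with NumPy and shows why the grouping of the products matters; the OpenCL kernels of the paper are not reproduced.

```python
import numpy as np

# Hypothetical sizes: sensor points, window patches, BSDF patches, sky patches, hours.
n_sensor, n_win, n_patch, n_sky, n_hours = 1000, 145, 145, 2306, 744
V = np.random.rand(n_sensor, n_win)      # view matrix
T = np.random.rand(n_win, n_patch)       # transmission (BSDF) matrix
D = np.random.rand(n_patch, n_sky)       # daylight matrix
s = np.random.rand(n_sky, n_hours)       # one sky vector per time step

# Associativity matters: reducing the small matrices first keeps the
# largest product for last and minimises floating-point work.
illuminance = V @ (T @ (D @ s))
print(illuminance.shape)                 # (n_sensor, n_hours)
```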

  8. Accelerate!

    Kotter, John P

    2012-11-01

    The old ways of setting and implementing strategy are failing us, writes the author of Leading Change, in part because we can no longer keep up with the pace of change. Organizational leaders are torn between trying to stay ahead of increasingly fierce competition and needing to deliver this year's results. Although traditional hierarchies and managerial processes--the components of a company's "operating system"--can meet the daily demands of running an enterprise, they are rarely equipped to identify important hazards quickly, formulate creative strategic initiatives nimbly, and implement them speedily. The solution Kotter offers is a second system--an agile, network-like structure--that operates in concert with the first to create a dual operating system. In such a system the hierarchy can hand off the pursuit of big strategic initiatives to the strategy network, freeing itself to focus on incremental changes to improve efficiency. The network is populated by employees from all levels of the organization, giving it organizational knowledge, relationships, credibility, and influence. It can liberate information from silos with ease. It has a dynamic structure free of bureaucratic layers, permitting a level of individualism, creativity, and innovation beyond the reach of any hierarchy. The network's core is a guiding coalition that represents each level and department in the hierarchy, with a broad range of skills. Its drivers are members of a "volunteer army" who are energized by and committed to the coalition's vividly formulated, high-stakes vision and strategy. Kotter has helped eight organizations, public and private, build dual operating systems over the past three years. He predicts that such systems will lead to long-term success in the 21st century--for shareholders, customers, employees, and companies themselves. PMID:23155997

  9. Accelerating Astronomy & Astrophysics in the New Era of Parallel Computing: GPUs, Phi and Cloud Computing

    Ford, Eric B.; Dindar, Saleh; Peters, Jorg

    2015-08-01

    The realism of astrophysical simulations and statistical analyses of astronomical data are set by the available computational resources. Thus, astronomers and astrophysicists are constantly pushing the limits of computational capabilities. For decades, astronomers benefited from massive improvements in computational power that were driven primarily by increasing clock speeds and required relatively little attention to details of the computational hardware. For nearly a decade, increases in computational capabilities have come primarily from increasing the degree of parallelism, rather than increasing clock speeds. Further increases in computational capabilities will likely be led by many-core architectures such as Graphical Processing Units (GPUs) and Intel Xeon Phi. Successfully harnessing these new architectures requires significantly more understanding of the hardware architecture, cache hierarchy, compiler capabilities and network characteristics. I will provide an astronomer's overview of the opportunities and challenges provided by modern many-core architectures and elastic cloud computing. The primary goal is to help an astronomical audience understand what types of problems are likely to yield more than order-of-magnitude speed-ups and which problems are unlikely to parallelize sufficiently efficiently to be worth the development time and/or costs. I will draw on my experience leading a team in developing the Swarm-NG library for parallel integration of large ensembles of small n-body systems on GPUs, as well as several smaller software projects. I will share lessons learned from collaborating with computer scientists, including both technical and soft skills. Finally, I will discuss the challenges of training the next generation of astronomers to be proficient in this new era of high-performance computing, drawing on experience teaching a graduate class on High-Performance Scientific Computing for Astrophysics and organizing a 2014 advanced summer

  10. FINAL REPORT DE-FG02-04ER41317 Advanced Computation and Chaotic Dynamics for Beams and Accelerators

    Cary, John R [U. Colorado

    2014-09-08

    During the year ending in August 2013, we continued to investigate the potential of photonic crystal (PhC) materials for acceleration purposes. We worked to characterize the acceleration ability of simple PhC accelerator structures, as well as to characterize PhC materials to determine whether current fabrication techniques can meet the needs of future accelerating structures. We have also continued to design and optimize PhC accelerator structures, with the ultimate goal of finding a new kind of accelerator structure that could offer significant advantages over current RF acceleration technology. The design and optimization of these structures requires high-performance computation, and we continue to work on methods to make such computation faster and more efficient.

  11. Proceedings of the conference on computer codes and the linear accelerator community

    Cooper, R.K. (comp.)

    1990-07-01

    The conference whose proceedings you are reading was envisioned as the second in a series, the first having been held in San Diego in January 1988. The intended participants were those people who are actively involved in writing and applying computer codes for the solution of problems related to the design and construction of linear accelerators. The first conference reviewed many of the codes both extant and under development. This second conference provided an opportunity to update the status of those codes, and to provide a forum in which emerging new 3D codes could be described and discussed. The afternoon poster session on the second day of the conference provided an opportunity for extended discussion. All in all, this conference was felt to be quite a useful interchange of ideas and developments in the field of 3D calculations, parallel computation, higher-order optics calculations, and code documentation and maintenance for the linear accelerator community. A third conference is planned.

  12. Proceedings of the conference on computer codes and the linear accelerator community

    The conference whose proceedings you are reading was envisioned as the second in a series, the first having been held in San Diego in January 1988. The intended participants were those people who are actively involved in writing and applying computer codes for the solution of problems related to the design and construction of linear accelerators. The first conference reviewed many of the codes both extant and under development. This second conference provided an opportunity to update the status of those codes, and to provide a forum in which emerging new 3D codes could be described and discussed. The afternoon poster session on the second day of the conference provided an opportunity for extended discussion. All in all, this conference was felt to be quite a useful interchange of ideas and developments in the field of 3D calculations, parallel computation, higher-order optics calculations, and code documentation and maintenance for the linear accelerator community. A third conference is planned

  13. A Low-Power Scalable Stream Compute Accelerator for General Matrix Multiply (GEMM)

    Antony Savich

    2014-01-01

    Many applications ranging from machine learning, image processing, and machine vision to optimization utilize matrix multiplication as a fundamental block. Matrix operations play an important role in determining the performance of such applications. This paper proposes a novel, efficient, highly scalable hardware accelerator that is of equivalent performance to a 2 GHz quad core PC but can be used in low-power applications targeting embedded systems requiring high performance computation. Power, performance, and resource consumption are demonstrated on a fully-functional prototype. The proposed hardware accelerator is 36× more energy efficient per unit of computation compared to a state-of-the-art Xeon processor of equal vintage and is 14× more efficient as a stand-alone platform with equivalent performance. An important comparison between simulated system estimates and real system performance is carried out.

  14. ACE3P Computations of Wakefield Coupling in the CLIC Two-Beam Accelerator

    Candel, Arno; Li, Z.; Ng, C.; Rawat, V.; Schussman, G.; Ko, K.; /SLAC; Syratchev, I.; Grudiev, A.; Wuensch, W.; /CERN

    2010-10-27

    The Compact Linear Collider (CLIC) provides a path to a multi-TeV accelerator to explore the energy frontier of High Energy Physics. Its novel two-beam accelerator concept envisions rf power transfer to the accelerating structures from a separate high-current decelerator beam line consisting of power extraction and transfer structures (PETS). It is critical to numerically verify the fundamental and higher-order mode properties in and between the two beam lines with high accuracy and confidence. To solve these large-scale problems, SLAC's parallel finite element electromagnetic code suite ACE3P is employed. Using curvilinear conformal meshes and higher-order finite element vector basis functions, unprecedented accuracy and computational efficiency are achieved, enabling high-fidelity modeling of complex detuned structures such as the CLIC TD24 accelerating structure. In this paper, time-domain simulations of wakefield coupling effects in the combined system of PETS and the TD24 structures are presented. The results will help to identify potential issues and provide new insights on the design, leading to further improvements on the novel CLIC two-beam accelerator scheme.

  15. A Low-Power Scalable Stream Compute Accelerator for General Matrix Multiply (GEMM)

    Antony Savich; Shawki Areibi

    2014-01-01

    Many applications ranging from machine learning, image processing, and machine vision to optimization utilize matrix multiplication as a fundamental block. Matrix operations play an important role in determining the performance of such applications. This paper proposes a novel, efficient, highly scalable hardware accelerator that is of equivalent performance to a 2 GHz quad core PC but can be used in low-power applications targeting embedded systems requiring high performance computation. P...
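    A minimal sketch of the tiling idea behind GEMM accelerators follows, assuming made-up tile and matrix sizes; it only mimics the blocking, not the streaming datapath or arithmetic of the FPGA design.

```python
import numpy as np

def blocked_gemm(A, B, tile=64):
    """Naive tiled GEMM: C = A @ B computed tile by tile.

    Hardware GEMM accelerators stream tiles of A and B through a fixed
    array of multiply-accumulate units; this sketch only mimics the tiling."""
    m, k = A.shape
    k2, n = B.shape
    assert k == k2
    C = np.zeros((m, n), dtype=A.dtype)
    for i in range(0, m, tile):
        for j in range(0, n, tile):
            for p in range(0, k, tile):
                C[i:i+tile, j:j+tile] += A[i:i+tile, p:p+tile] @ B[p:p+tile, j:j+tile]
    return C

A = np.random.rand(256, 192)
B = np.random.rand(192, 320)
assert np.allclose(blocked_gemm(A, B), A @ B)
```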

  16. RACETRACK - a computer code for the simulation of nonlinear particle motion in accelerators

    RACETRACK is a computer code to simulate transverse nonlinear particle motion in accelerators. Transverse magnetic fields of higher order are treated in the thin magnet approximation; multipoles up to 20 poles are included. Energy oscillations due to the nonlinear synchrotron motion are taken into account. Several additional features, such as linear optics calculations, chromaticity adjustment, tune variation and orbit adjustment, are available to guarantee fast treatment of nonlinear dynamical problems. (orig.)
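    To make the thin-magnet approximation concrete, the following sketch tracks a small bunch through a toy drift/thin-sextupole sequence; the element lengths, strengths and sign conventions are illustrative assumptions and do not reproduce RACETRACK itself.

```python
import numpy as np

def drift(x, xp, y, yp, L):
    """Field-free drift of length L (paraxial approximation)."""
    return x + L * xp, xp, y + L * yp, yp

def thin_sextupole(x, xp, y, yp, k2l):
    """Thin sextupole kick of integrated strength k2l.

    Sign conventions differ between tracking codes; this follows the common
    convention dx' = -k2l/2 (x^2 - y^2), dy' = k2l x y."""
    return x, xp - 0.5 * k2l * (x**2 - y**2), y, yp + k2l * x * y

# Track a small Gaussian bunch through a few repetitions of drift + kick.
rng = np.random.default_rng(0)
x, xp, y, yp = (rng.normal(0.0, 1e-4, 1000) for _ in range(4))
for _turn in range(10):
    x, xp, y, yp = drift(x, xp, y, yp, 2.0)
    x, xp, y, yp = thin_sextupole(x, xp, y, yp, 0.5)

print("rms x after tracking:", x.std())
```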

  17. Multi-GPU Jacobian accelerated computing for soft-field tomography

    Image reconstruction in soft-field tomography is based on an inverse problem formulation, where a forward model is fitted to the data. In medical applications, where the anatomy presents complex shapes, it is common to use finite element models (FEMs) to represent the volume of interest and solve a partial differential equation that models the physics of the system. Over the last decade, there has been a shift of interest from 2D modeling to 3D modeling, as the underlying physics of most problems is 3D. Although the increased computational power of modern computers allows working with much larger FEM models, the computational time required to reconstruct 3D images on a fine 3D FEM model can be significant, on the order of hours. For example, in electrical impedance tomography (EIT) applications using a dense 3D FEM mesh with half a million elements, a single reconstruction iteration takes approximately 15–20 min with optimized routines running on a modern multi-core PC. It is desirable to accelerate image reconstruction to enable researchers to more easily and rapidly explore data and reconstruction parameters. Furthermore, providing high-speed reconstructions is essential for some promising clinical applications of EIT. For 3D problems, 70% of the computing time is spent building the Jacobian matrix, and 25% of the time in forward solving. In this work, we focus on accelerating the Jacobian computation by using single and multiple GPUs. First, we discuss an optimized implementation on a modern multi-core PC architecture and show how computing time is bounded by the CPU-to-memory bandwidth; this factor limits the rate at which data can be fetched by the CPU. Gains associated with the use of multiple CPU cores are minimal, since data operands cannot be fetched fast enough to saturate the processing power of even a single CPU core. GPUs have much faster memory bandwidths compared to CPUs and better parallelism. We are able to obtain acceleration factors of 20 times

  18. Strategic Entrepreneurship

    Peter G. Klein; Jay B. Barney; Nicolai J. Foss

    2012-01-01

    Strategic entrepreneurship is a newly recognized field that draws, not surprisingly, from the fields of strategic management and entrepreneurship. The field emerged officially with the 2001 special issue of the Strategic Management Journal on “strategic entrepreneurship”; the first dedicated periodical, the Strategic Entrepreneurship Journal, appeared in 2007. Strategic entrepreneurship is built around two core ideas. (1) Strategy formulation and execution involves attributes that are fundame...

  19. Strategic Adaptation

    Andersen, Torben Juul

    2015-01-01

    This article provides an overview of theoretical contributions that have influenced the discourse around strategic adaptation, including contingency perspectives, strategic fit reasoning, decision structure, information processing, corporate entrepreneurship, and strategy process. The related concepts of strategic renewal, dynamic managerial capabilities, dynamic capabilities, and strategic response capabilities are discussed and contextualized against strategic responsiveness. The insights derived from this article are used to outline the contours of a dynamic process of strategic adaptation...

  20. Gpu Accelerated Intensities: a New Method of Computing Einstein-A Coefficients

    Al-Refaie, Ahmed Faris; Yurchenko, Sergei N.; Tennyson, Jonathan

    2015-06-01

    The use of variational nuclear motion calculations to produce comprehensive molecular line lists is now becoming common. Producing high-quality, complete line lists, in particular ones applicable to high temperatures, requires large amounts of computational resources; the more accuracy required, the larger the problem and the more computational resources needed. The two main bottlenecks in the production of these line lists are solving the eigenvalue problem and the computation of the Einstein-A coefficients. In the project's recently released line lists, the number of transitions can reach up to 10 billion, evaluated from combinations of millions of eigenvalues and eigenvectors corresponding to individual energy states. For line lists of this size, the evaluation of Einstein-A coefficients takes up the vast majority of computational time compared to solving the eigenvalue problem. Recently, as part of the ExoMol [1] project, we have developed a new program called GPU Accelerated INtensities (GAIN) that utilises highly parallel Graphics Processing Units (GPUs) to accelerate the evaluation of the Einstein-A coefficients. Speed-ups of up to 70x can be achieved on a single GPU and can be further improved by utilising multiple GPUs. The GPU hardware, its limitations and how the problem was implemented to exploit parallelism will be discussed. [1] J. Tennyson and S. N. Yurchenko. ExoMol: molecular line lists for exoplanet and other atmospheres. MNRAS, 425:21-33, 2012.
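    The computational core being offloaded to the GPU is, in essence, a dense contraction of eigenvectors with the dipole operator for every transition. The sketch below uses toy sizes and an intentionally simplified prefactor (physical constants omitted); it is only meant to show why the workload maps onto GEMM-like operations that GPUs handle well.

```python
import numpy as np

# Toy problem sizes; real line lists involve millions of states.
n_basis, n_lower, n_upper = 500, 200, 300
mu = np.random.rand(n_basis, n_basis)          # dipole operator in the basis
V_low = np.random.rand(n_basis, n_lower)       # lower-state eigenvectors (columns)
V_up = np.random.rand(n_basis, n_upper)        # upper-state eigenvectors (columns)
nu = np.random.rand(n_upper, n_lower) + 1.0    # transition frequencies (arbitrary units)

# The expensive kernel is the contraction <f|mu|i> for every pair of states,
# which maps onto two dense matrix products.
line_strength = (V_up.T @ mu @ V_low) ** 2     # |<f|mu|i>|^2, shape (n_upper, n_lower)

# Hypothetical prefactor: A scales as nu^3 times the line strength divided by
# the upper-state degeneracy; all physical constants are omitted here.
g_upper = 3.0
A = nu ** 3 * line_strength / g_upper
print(A.shape)
```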

  1. Accelerating Spaceborne SAR Imaging Using Multiple CPU/GPU Deep Collaborative Computing.

    Zhang, Fan; Li, Guojun; Li, Wei; Hu, Wei; Hu, Yuxin

    2016-01-01

    With the development of synthetic aperture radar (SAR) technologies in recent years, the huge amount of remote sensing data brings challenges for real-time imaging processing. Therefore, high performance computing (HPC) methods have been presented to accelerate SAR imaging, especially GPU based methods. In the classical GPU based imaging algorithm, the GPU is employed to accelerate image processing by massively parallel computing, and the CPU is only used to perform auxiliary work such as data input/output (IO). However, the computing capability of the CPU is ignored and underestimated. In this work, a new deep collaborative SAR imaging method based on multiple CPUs/GPUs is proposed to achieve real-time SAR imaging. Through the proposed task partitioning and scheduling strategy, the whole image can be generated with deep collaborative multiple CPU/GPU computing. In the CPU parallel imaging part, the advanced vector extension (AVX) method is introduced into the multi-core CPU parallel method for higher efficiency. As for GPU parallel imaging, not only are the bottlenecks of memory limitation and frequent data transfer overcome, but several optimization strategies are also applied, such as streaming and parallel pipelining. Experimental results demonstrate that the deep CPU/GPU collaborative imaging method improves the efficiency of SAR imaging on a single-core CPU by 270 times and achieves real-time imaging, in that the imaging rate outperforms the raw data generation rate. PMID:27070606

  2. Strategic Implications for E-Business Organizations in the Ubiquitous Computing Economy

    YUM Jihwan; KIM Hyoungdo

    2004-01-01

    The ubiquitous economy brings both pros and cons for organizations. The third space that has emerged from the development of ubiquitous computing generates a new concept of community, one tightly coupled with people, products, and systems. Organizational strategies need to be reshaped for the changing environment in the third space and community, and organizational structures also need to change toward community-serving organizations. A community-serving concept supported by standardized technology will be essential. Among the key technologies, RFID services will play a key role in acknowledging identification and the services required. As the need for sensing the environment increases, technologies such as the ubiquitous sensor network (USN) will be critically needed.

  3. LEADS: a graphically displayed computer program for linear and electrostatic accelerator beam dynamics simulation

    The computer program LEADS, written in about 6600 statements of MS FORTRAN 5.1, runs on IBM PC and other compatible computers. LEADS can simulate beam optical systems consisting of three-tube einzel lenses, three-aperture einzel lenses, two-tube lenses, uniform field DC accelerating tubes, magnetic and electrostatic quadrupoles, bending magnets, single-gap rf resonators, two-gap rf resonators (QWR) and three-gap rf resonators (SLR). Multi-particle tracking and matrix multiplication are used to calculate the beam transport. Monte Carlo techniques are adopted to generate random initial particle coordinates in the phase spaces, and Powell nonlinear optimization routines are incorporated in the code to search for the given optical conditions. The calculated results can be displayed graphically on the computer monitor.
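    A minimal sketch of the matrix-multiplication / Monte-Carlo approach follows: a first-order transfer-matrix line (a drift plus thin quadrupoles, with made-up parameters) applied to randomly generated initial coordinates in one transverse plane. It is not the LEADS code, only an illustration of the method it uses.

```python
import numpy as np

def drift(L):
    """2x2 transfer matrix of a field-free drift of length L for (x, x')."""
    return np.array([[1.0, L], [0.0, 1.0]])

def thin_quad(f):
    """2x2 transfer matrix of a thin quadrupole with focal length f."""
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

# Toy beamline: the one-pass matrix is the ordered product of element matrices.
line = thin_quad(0.8) @ drift(0.5) @ thin_quad(-0.8) @ drift(0.5)

# Monte-Carlo initial coordinates (x, x') drawn from a Gaussian phase-space.
rng = np.random.default_rng(4)
bunch = rng.multivariate_normal([0, 0], [[1e-6, 0], [0, 1e-6]], size=10_000).T
bunch_out = line @ bunch
print("rms x in/out:", bunch[0].std(), bunch_out[0].std())
```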

  4. Accelerating Development of EV Batteries Through Computer-Aided Engineering (Presentation)

    Pesaran, A.; Kim, G. H.; Smith, K.; Santhanagopalan, S.

    2012-12-01

    The Department of Energy's Vehicle Technology Program has launched the Computer-Aided Engineering for Automotive Batteries (CAEBAT) project to work with national labs, industry and software vendors to develop sophisticated software. As coordinator, NREL has teamed with a number of companies to help improve and accelerate battery design and production. This presentation provides an overview of CAEBAT, including its predictive computer simulation of Li-ion batteries known as the Multi-Scale Multi-Dimensional (MSMD) model framework. MSMD's modular, flexible architecture connects the physics of battery charge/discharge processes, thermal control, safety and reliability in a computationally efficient manner. This allows independent development of submodels at the cell and pack levels.

  5. Concepts and techniques: Active electronics and computers in safety-critical accelerator operation

    Frankel, R.S.

    1995-12-31

    The Relativistic Heavy Ion Collider (RHIC) under construction at Brookhaven National Laboratory requires an extensive Access Control System to protect personnel from radiation, oxygen deficiency and electrical hazards. In addition, the complicated nature of operating the Collider as part of a complex of other accelerators necessitates the use of active electronic measurement circuitry to ensure compliance with established Operational Safety Limits. Solutions were devised which permit the use of modern computer and interconnection technology for safety-critical applications, while preserving and enhancing tried and proven protection methods. In addition, a set of guidelines regarding required performance for accelerator safety systems and a handbook of design criteria and rules were developed to assist future system designers and to provide a framework for internal review and regulation.

  6. Computational Materials Science and Chemistry: Accelerating Discovery and Innovation through Simulation-Based Engineering and Science

    Crabtree, George [Argonne National Lab. (ANL), Argonne, IL (United States); Glotzer, Sharon [University of Michigan; McCurdy, Bill [University of California Davis; Roberto, Jim [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2010-07-26

    This report is based on a SC Workshop on Computational Materials Science and Chemistry for Innovation on July 26-27, 2010, to assess the potential of state-of-the-art computer simulations to accelerate understanding and discovery in materials science and chemistry, with a focus on potential impacts in energy technologies and innovation. The urgent demand for new energy technologies has greatly exceeded the capabilities of today's materials and chemical processes. To convert sunlight to fuel, efficiently store energy, or enable a new generation of energy production and utilization technologies requires the development of new materials and processes of unprecedented functionality and performance. New materials and processes are critical pacing elements for progress in advanced energy systems and virtually all industrial technologies. Over the past two decades, the United States has developed and deployed the world's most powerful collection of tools for the synthesis, processing, characterization, and simulation and modeling of materials and chemical systems at the nanoscale, dimensions of a few atoms to a few hundred atoms across. These tools, which include world-leading x-ray and neutron sources, nanoscale science facilities, and high-performance computers, provide an unprecedented view of the atomic-scale structure and dynamics of materials and the molecular-scale basis of chemical processes. For the first time in history, we are able to synthesize, characterize, and model materials and chemical behavior at the length scale where this behavior is controlled. This ability is transformational for the discovery process and, as a result, confers a significant competitive advantage. Perhaps the most spectacular increase in capability has been demonstrated in high performance computing. Over the past decade, computational power has increased by a factor of a million due to advances in hardware and software. This rate of improvement, which shows no sign of

  7. Computer simulation of 2-D and 3-D ion beam extraction and acceleration

    Ido, Shunji; Nakajima, Yuji [Saitama Univ., Urawa (Japan). Faculty of Engineering

    1997-03-01

    The two-dimensional code and the three-dimensional code have been developed to study the physical features of the ion beams in the extraction and acceleration stages. By using the two-dimensional code, the design of first electrode(plasma grid) is examined in regard to the beam divergence. In the computational studies by using the three-dimensional code, the axis-off model of ion beam is investigated. It is found that the deflection angle of ion beam is proportional to the gap displacement of the electrodes. (author)

  8. A new 3-D integral code for computation of accelerator magnets

    For computing accelerator magnets, integral codes have several advantages over finite element codes; far-field boundaries are treated automatically, and computed fields in the bore region satisfy Maxwell's equations exactly. A new integral code employing edge elements rather than nodal elements has overcome the difficulties associated with earlier integral codes. By the use of field integrals (potential differences) as solution variables, the number of unknowns is reduced to one less than the number of nodes. Two examples, a hollow iron sphere and the dipole magnet of the Advanced Photon Source injector synchrotron, show the capability of the code. The CPU time requirements are comparable to those of three-dimensional (3-D) finite-element codes. Experiments show that in practice it can realize much of the potential CPU time saving that parallel processing makes possible

  9. A computational study of dielectric photonic-crystal-based accelerator cavities

    Bauer, C. A.

    Future particle accelerator cavities may use dielectric photonic crystals to reduce harmful wakefields and increase the accelerating electric field (or gradient). Reduced wakefields are predicted based on the bandgap property of some photonic crystals (i.e. frequency-selective reflection/transmission). Larger accelerating gradients are predicted based on certain dielectrics' strong resistance to electrical breakdown. Using computation, this thesis investigated a hybrid design of a 2D sapphire photonic crystal and traditional copper conducting cavity. The goals were to test the claim of reduced wakefields and, in general, judge the effectiveness of such structures as practical accelerating cavities. In the process, we discovered the following: (1) resonant cavities in truncated photonic crystals may confine radiation weakly compared to conducting cavities (depending on the level of truncation); however, confinement can be dramatically increased through optimizations that break lattice symmetry (but retain certain rotational symmetries); (2) photonic crystal cavities do not ideally reduce wakefields; using band structure calculations, we found that wakefields are increased by flat portions of the frequency dispersion (where the waves have vanishing group velocities). A complete comparison was drawn between the proposed photonic crystal cavities and the copper cavities for the Compact Linear Collider (CLIC); CLIC is one of the candidates for a future high-energy electron-positron collider that will study in greater detail the physics learned at the Large Hadron Collider. We found that the photonic crystal cavity, when compared to the CLIC cavity: (1) can lower maximum surface magnetic fields on conductors (growing evidence suggests this limits accelerating gradients by inducing electrical breakdown); (2) shows increased transverse dipole wakefields but decreased longitudinal monopole wakefields; and (3) exhibits lower accelerating efficiencies (unless a large photonic

  10. GeauxDock: Accelerating Structure-Based Virtual Screening with Heterogeneous Computing.

    Ye Fang

    Computational modeling of drug binding to proteins is an integral component of direct drug design. Particularly, structure-based virtual screening is often used to perform large-scale modeling of putative associations between small organic molecules and their pharmacologically relevant protein targets. Because of the large number of drug candidates to be evaluated, an accurate and fast docking engine is a critical element of virtual screening. Consequently, highly optimized docking codes are of paramount importance for the effectiveness of virtual screening methods. In this communication, we describe the implementation, tuning and performance characteristics of GeauxDock, a recently developed molecular docking program. GeauxDock is built upon the Monte Carlo algorithm and features a novel scoring function combining physics-based energy terms with statistical and knowledge-based potentials. Developed specifically for heterogeneous computing platforms, the current version of GeauxDock can be deployed on modern, multi-core Central Processing Units (CPUs) as well as massively parallel accelerators, Intel Xeon Phi and NVIDIA Graphics Processing Units (GPUs). First, we carried out a thorough performance tuning of the high-level framework and the docking kernel to produce a fast serial code, which was then ported to shared-memory multi-core CPUs yielding a near-ideal scaling. Further, using Xeon Phi gives 1.9× performance improvement over a dual 10-core Xeon CPU, whereas the best GPU accelerator, GeForce GTX 980, achieves a speedup as high as 3.5×. On that account, GeauxDock can take advantage of modern heterogeneous architectures to considerably accelerate structure-based virtual screening applications. GeauxDock is open-sourced and publicly available at www.brylinski.org/geauxdock and https://figshare.com/articles/geauxdock_tar_gz/3205249.

  11. GeauxDock: Accelerating Structure-Based Virtual Screening with Heterogeneous Computing.

    Fang, Ye; Ding, Yun; Feinstein, Wei P; Koppelman, David M; Moreno, Juana; Jarrell, Mark; Ramanujam, J; Brylinski, Michal

    2016-01-01

    Computational modeling of drug binding to proteins is an integral component of direct drug design. Particularly, structure-based virtual screening is often used to perform large-scale modeling of putative associations between small organic molecules and their pharmacologically relevant protein targets. Because of a large number of drug candidates to be evaluated, an accurate and fast docking engine is a critical element of virtual screening. Consequently, highly optimized docking codes are of paramount importance for the effectiveness of virtual screening methods. In this communication, we describe the implementation, tuning and performance characteristics of GeauxDock, a recently developed molecular docking program. GeauxDock is built upon the Monte Carlo algorithm and features a novel scoring function combining physics-based energy terms with statistical and knowledge-based potentials. Developed specifically for heterogeneous computing platforms, the current version of GeauxDock can be deployed on modern, multi-core Central Processing Units (CPUs) as well as massively parallel accelerators, Intel Xeon Phi and NVIDIA Graphics Processing Unit (GPU). First, we carried out a thorough performance tuning of the high-level framework and the docking kernel to produce a fast serial code, which was then ported to shared-memory multi-core CPUs yielding a near-ideal scaling. Further, using Xeon Phi gives 1.9× performance improvement over a dual 10-core Xeon CPU, whereas the best GPU accelerator, GeForce GTX 980, achieves a speedup as high as 3.5×. On that account, GeauxDock can take advantage of modern heterogeneous architectures to considerably accelerate structure-based virtual screening applications. GeauxDock is open-sourced and publicly available at www.brylinski.org/geauxdock and https://figshare.com/articles/geauxdock_tar_gz/3205249. PMID:27420300
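    A minimal sketch of the Monte Carlo core of such a docking search is given below; the score function, move size and temperature are placeholders, and GeauxDock's actual scoring terms and heterogeneous kernels are not reproduced.

```python
import numpy as np

def metropolis_step(pose, energy, score_fn, step_size, beta, rng):
    """One Monte Carlo move of a docking search (minimal sketch).

    `pose` is a flat parameter vector (e.g. ligand translation/rotation/torsions)
    and `score_fn` stands in for the docking scoring function."""
    trial = pose + rng.normal(0.0, step_size, size=pose.shape)
    trial_energy = score_fn(trial)
    # Metropolis criterion: always accept downhill, sometimes accept uphill.
    if trial_energy < energy or rng.random() < np.exp(-beta * (trial_energy - energy)):
        return trial, trial_energy
    return pose, energy

rng = np.random.default_rng(1)
score_fn = lambda p: float(np.sum(p ** 2))     # placeholder score function
pose = rng.normal(size=7)
energy = score_fn(pose)
for _ in range(5000):
    pose, energy = metropolis_step(pose, energy, score_fn, 0.05, beta=10.0, rng=rng)
print("final score:", energy)
```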

  12. The Challenge and Promise of the New Research Tools: Solid-State Detectors, Computers and Accelerators

    Nuclear structure research has been the beneficiary of many recent technical advances and has in turn directly stimulated some advances. The exciting possibilities of the lithium-drifted germanium detector in gamma spectroscopy have only begun to be realized. The large amounts of high resolution data have put a premium on developing more automation of data handling through more sophisticated electronics and computers. Trends in accelerators are discussed, and reference tables listing isochronous cyclotrons and Tandem Van de Graaffs around the world are given. Attention is directed to new frontiers of research in heavier ion accelerators, and tables of characteristics of existing and proposed heavy ion accelerators are given. The difficulties of obtaining multiply charged ions from known types of ion sources are considered, and the high charges resulting from Auger cascades following a K-vacancy are noted. It is suggested that intensive research on decay schemes and charge states of recoil products of nuclear reactions could lead to a practical accelerator of very heavy ions. As an example, a possible arrangement in a Tandem Van de Graaff is discussed, where a deuteron negative ion beam strikes a source foil in the positive terminal, with recoil products or fission products accelerated to ground. Studies on noble gas and halogen fission products by gas transport systems and isotope separators are noted. Also reviewed are germanium gamma studies on unseparated 252Cf spontaneous fission products using tape-transport methods and K X-ray coincidence. Next are reviewed studies on gamma and conversion-electron spectra of recoils and fission products in flight. The use of solenoidal or fringing-field magnets for conversion electron studies is discussed. Some of the qualitatively new aspects of nuclear studies with very heavy ion beams are mentioned. Finally, it is stressed that the research here called for on gamma cascades and charge states of nuclear reaction products is most

  13. Acceleration of Hessenberg Reduction for Nonsymmetric Eigenvalue Problems in a Hybrid CPU-GPU Computing Environment

    Kinji Kimura

    2011-07-01

    The solution of large-scale dense nonsymmetric eigenvalue problems is required in many areas of scientific and engineering computing, such as vibration analysis of automobiles and analysis of electronic diffraction patterns. In this study, we focus on the Hessenberg reduction step and consider accelerating it in a hybrid CPU-GPU computing environment. Considering that the Hessenberg reduction algorithm consists almost entirely of BLAS (Basic Linear Algebra Subprograms) operations, we propose three approaches for distributing the BLAS operations between CPU and GPU. Among them, the third approach, which assigns small-size BLAS operations to the CPU and distributes large-size BLAS operations between CPU and GPU in some optimal manner, was found to be consistently faster than the other two approaches. On a machine with an Intel Core i7 processor and an NVIDIA Tesla C1060 GPU, this approach achieved a 3.2 times speedup over the CPU-only case when computing the Hessenberg form of an 8,192×8,192 real matrix.
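    For orientation, the sketch below computes the Hessenberg form of a random matrix with a library routine and checks its defining properties; it stands in for the blocked Householder reduction whose BLAS calls the paper distributes between CPU and GPU.

```python
import numpy as np
from scipy.linalg import hessenberg

# Reduce a dense nonsymmetric matrix to upper Hessenberg form.
n = 512
A = np.random.rand(n, n)
H, Q = hessenberg(A, calc_q=True)

# H is upper Hessenberg (zero below the first subdiagonal) and A = Q H Q^T.
assert np.allclose(np.tril(H, -2), 0.0)
assert np.allclose(Q @ H @ Q.T, A)
```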

  14. Strategic Leadership

    Davies, Barbara; Davies, Brent

    2004-01-01

    This article explores the nature of strategic leadership and assesses whether a framework can be established to map the dimensions of strategic leadership. In particular it establishes a model which outlines both the organizational abilities and the individual characteristics of strategic leaders.

  15. Plasma accelerators

    Recently attention has focused on charged particle acceleration in a plasma by a fast, large amplitude, longitudinal electron plasma wave. The plasma beat wave and plasma wakefield accelerators are two efficient ways of producing ultra-high accelerating gradients. Starting with the plasma beat wave accelerator (PBWA) and laser wakefield accelerator (LWFA) schemes and the plasma wakefield accelerator (PWFA), steady progress has been made in theory, simulations and experiments. Computations are presented for the study of the LWFA. (author)

  16. Distribution of computer functionality for accelerator control at the Brookhaven AGS

    A set of physical and functional system components and their interconnection protocols have been established for all controls work at the AGS. Portions of these designs were tested as part of enhanced operation of the AGS as a source of polarized protons, and additional segments will be implemented during the continuing construction efforts which are adding heavy ion capability to our facility. Our efforts include the following computer and control system elements: a broadband local area network, which embodies modems, transmission systems and branch interface units; a hierarchical layer, which performs certain database and watchdog/alarm functions; a group of workstation processors (Apollos) which perform the function of traditional minicomputer host(s); and a layer which provides both real-time control and standardization functions for accelerator devices and instrumentation. Database and other accelerator functionality is assigned to the most appropriate level within our network for real-time performance, long-term utility, and orderly growth

  17. Computer automation of beam steering systems for the McMaster University Tandem Van De Graaff Accelerator

    A prototype computer control system has been added to the McMaster University Tandem Accelerator Laboratory's Model FN Van de Graaff. Using a PDP-11/23 computer, the two-dimensional electrostatic low-energy steerers are controlled in such a manner as to optimize the energy-analyzed beam intensity when initiated by an accelerator operation command. The system has been successfully tried on a wide mass range of ion species and performs the operation in under five seconds. Another operating mode allows continuous maximization of beam intensity; this is useful as an operator's ''third hand'' while other parameters of the beam transport system are varied manually. This system is part of an ongoing program of computer automation of suitable accelerator subsystems within the Laboratory.

  18. An improved coarse-grained parallel algorithm for computational acceleration of ordinary Kriging interpolation

    Hu, Hongda; Shu, Hong

    2015-05-01

    Heavy computation limits the use of Kriging interpolation methods in many real-time applications, especially with ever-increasing problem sizes. Many researchers have realized that parallel processing techniques are critical to fully exploit computational resources and feasibly solve computation-intensive problems like Kriging. Much research has addressed the parallelization of the traditional approach to Kriging, but this computation-intensive procedure may not be suitable for high-resolution interpolation of spatial data. On the basis of a more effective serial approach, we propose an improved coarse-grained parallel algorithm to accelerate ordinary Kriging interpolation. In particular, the interpolation task of each unobserved point is considered as a basic parallel unit. To reduce time complexity and memory consumption, the large right-hand-side matrix in the Kriging linear system is transformed and fixed at only two columns and is therefore no longer directly dependent on the number of unobserved points. The MPI (Message Passing Interface) model is employed to implement our parallel programs in a homogeneous distributed memory system. Experimentally, the improved parallel algorithm performs better than the traditional one in spatial interpolation of annual average precipitation in Victoria, Australia. For example, when the number of processors is 24, the improved algorithm keeps the speed-up at 20.8 while the speed-up of the traditional algorithm only reaches 9.3. Likewise, the weak scaling efficiency of the improved algorithm is nearly 90% while that of the traditional algorithm almost drops to 40% with 16 processors. Experimental results also demonstrate that the performance of the improved algorithm is enhanced by increasing the problem size.
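    The per-point structure that the paper parallelizes can be seen in the following minimal ordinary-Kriging sketch (one dense solve per target point, with a hypothetical variogram); the MPI distribution and the right-hand-side reshaping of the improved algorithm are not shown.

```python
import numpy as np

def ordinary_kriging(obs_xy, obs_z, targets, variogram):
    """Minimal ordinary Kriging sketch: one dense solve per target point."""
    n = len(obs_z)
    d = np.linalg.norm(obs_xy[:, None, :] - obs_xy[None, :, :], axis=-1)
    K = np.ones((n + 1, n + 1))
    K[:n, :n] = variogram(d)
    K[n, n] = 0.0                                  # Lagrange multiplier row/column
    preds = np.empty(len(targets))
    for i, t in enumerate(targets):
        k = np.ones(n + 1)
        k[:n] = variogram(np.linalg.norm(obs_xy - t, axis=1))
        w = np.linalg.solve(K, k)                  # weights + Lagrange multiplier
        preds[i] = w[:n] @ obs_z
    return preds

rng = np.random.default_rng(2)
obs_xy = rng.random((50, 2))
obs_z = np.sin(obs_xy[:, 0] * 6) + obs_xy[:, 1]
# Illustrative spherical variogram with unit sill and range 0.5.
spherical = lambda h: np.where(h < 0.5, 1.5 * (h / 0.5) - 0.5 * (h / 0.5) ** 3, 1.0)
print(ordinary_kriging(obs_xy, obs_z, rng.random((5, 2)), spherical))
```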

  19. Strategic Entrepreneurship

    Klein, Peter G.; Barney, Jay B.; Foss, Nicolai Juul

    Strategic entrepreneurship is a newly recognized field that draws, not surprisingly, from the fields of strategic management and entrepreneurship. The field emerged officially with the 2001 special issue of the Strategic Management Journal on “strategic entrepreneurship”; the first dedicated periodical, the Strategic Entrepreneurship Journal, appeared in 2007. Strategic entrepreneurship is built around two core ideas. (1) Strategy formulation and execution involves attributes that are fundamentally entrepreneurial, such as alertness, creativity, and judgment, and entrepreneurs try to create and capture value through resource acquisition and competitive positioning. (2) Opportunity-seeking and advantage-seeking -- the former the central subject of the entrepreneurship field, the latter the central subject of the strategic management field -- are processes that should be considered jointly. This entry

  20. Accelerating Design of Batteries Using Computer-Aided Engineering Tools (Presentation)

    Pesaran, A.; Kim, G. H.; Smith, K.

    2010-11-01

    Computer-aided engineering (CAE) is a proven pathway, especially in the automotive industry, to improve performance by resolving the relevant physics in complex systems, shortening the product development design cycle, thus reducing cost, and providing an efficient way to evaluate parameters for robust designs. Academic models include the relevant physics details, but neglect engineering complexities. Industry models include the relevant macroscopic geometry and system conditions, but simplify the fundamental physics too much. Most of the CAE battery tools for in-house use are custom model codes and require expert users. There is a need to make these battery modeling and design tools more accessible to end users such as battery developers, pack integrators, and vehicle makers. Developing integrated and physics-based CAE battery tools can reduce the design, build, test, break, re-design, re-build, and re-test cycle and help lower costs. NREL has been involved in developing various models to predict the thermal and electrochemical performance of large-format cells and has used commercial three-dimensional finite-element analysis and computational fluid dynamics to study battery pack thermal issues. These NREL cell and pack design tools can be integrated to help support the automotive industry and to accelerate battery design.

  1. Subcritical set coupled to accelerator (ADS) for transmutation of radioactive wastes: an approach of computational modelling

    Nuclear fission devices coupled to particle accelerators (ADS) are being widely studied. These devices have several applications, including nuclear waste transmutation and hydrogen production, both with strong social and environmental impact. The essence of this work was to model an ADS geometry composed of small TRISO fuel elements loaded with a mixture of uranium and thorium MOX and a uranium spallation target, using probabilistic computational modelling methods, in particular the MCNPX 2.6e program, to evaluate the physical characteristics of the device and its transmutation capability. From the characterization of the spallation target, it can be concluded that the production of neutrons per incident proton increases with increasing dimensions of the spallation target (thickness and radius) until the maximum neutron production per incident proton is reached, the so-called saturation region. The results obtained in modelling the pebble-bed ADS device with respect to the isotopic variation of the plutonium isotopes and minor actinides considered in the analysis revealed that the accumulated mass of these isotopes increases for the subcritical configuration considered. In the particular case of the isotope 239Pu, a reduction in mass is observed from a burning time of 99 days. Increasing the core power and considering tungsten and lead spallation targets are among the key future developments of this work.

  2. Accelerating the Gauss-Seidel Power Flow Solver on a High Performance Reconfigurable Computer

    Byun, Jong-Ho; Ravindran, Arun; Mukherjee, Arindam; Joshi, Bharat; Chassin, David P.

    2009-09-01

    The computationally intensive power flow problem determines the voltage magnitude and phase angle at each bus in a power system, for hundreds of thousands of buses, under balanced three-phase steady-state conditions. We report an FPGA acceleration of the Gauss-Seidel based power flow solver employed in the transmission module of the GridLAB-D power distribution simulator and analysis tool. The prototype hardware is implemented on an SGI Altix-RASC system equipped with a Xilinx Virtex II 6000 FPGA. Due to capacity limitations of the FPGA, only the bus voltage calculations of the power network are implemented in hardware, while the branch current calculations are implemented in software. For a 200,000 bus system, the bus voltage calculation on the FPGA achieves a 48x speed-up for PQ buses and 62x for PV buses over an equivalent sequential software implementation. The average overall speed-up of the FPGA-CPU implementation with 100 iterations of the Gauss-Seidel power flow solver is 2.6x over a software implementation, with the branch calculations on the CPU accounting for 85% of the total execution time. The FPGA-CPU implementation also shows linear scaling with increasing size of the input power network.
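    The bus-voltage kernel implemented on the FPGA follows the standard Gauss-Seidel update. A minimal Python sketch for PQ buses only, on a made-up three-bus example, is shown below; PV-bus handling and the hardware partitioning are omitted.

```python
import numpy as np

def gauss_seidel_pq(Y, S, V, slack=0, iters=100):
    """Minimal Gauss-Seidel power-flow sketch for PQ buses only.

    Y is the bus admittance matrix, S the complex power injections, V the
    initial voltage guess; the slack bus voltage is held fixed."""
    n = len(V)
    for _ in range(iters):
        for i in range(n):
            if i == slack:
                continue
            # Standard update: V_i = (S_i*/V_i* - sum_{j != i} Y_ij V_j) / Y_ii
            sigma = Y[i, :] @ V - Y[i, i] * V[i]
            V[i] = (np.conj(S[i]) / np.conj(V[i]) - sigma) / Y[i, i]
    return V

# Tiny 3-bus example with made-up admittances and loads (negative = load).
Y = np.array([[ 10-30j, -5+15j, -5+15j],
              [ -5+15j, 10-30j, -5+15j],
              [ -5+15j, -5+15j, 10-30j]])
S = np.array([0.0, -0.8-0.3j, -0.5-0.2j])
V = np.ones(3, dtype=complex)
print(gauss_seidel_pq(Y, S, V)[1:])
```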

  3. Definition of the loading of process digital computer, used in the same class of accelerator control systems

    A relationship has been studied between the computer loading on the one hand and the properties of the parameter under control and the discrete interval value on the other. The computer loading is characterized by the inquiry probability per calculation of the correcting signal. A mathematical expression has been obtained which determines the inquiry probability; the expression is a multidimensional integral. The Monte Carlo method has been employed for computation of the integral. A structural diagram of the algorithm which implements the method for computing the probability is presented, and the error of the method has been assessed. The algorithm has been run on the M-220 computer. The results obtained confirm the correctness of the suggested method for determining the computer loading for operation in the accelerator control system.
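    The Monte Carlo evaluation of a multidimensional integral, as used above for the inquiry probability, can be sketched as follows; the integrand here is a purely hypothetical indicator function.

```python
import numpy as np

# Estimate a multidimensional probability integral by averaging an indicator
# function over uniformly sampled points in the unit hypercube.
rng = np.random.default_rng(3)
dim, n_samples = 4, 200_000
x = rng.random((n_samples, dim))

# Hypothetical integrand: the probability that the sum of the controlled
# parameters exceeds a threshold, standing in for the inquiry probability.
indicator = (x.sum(axis=1) > 2.5).astype(float)

estimate = indicator.mean()
std_error = indicator.std(ddof=1) / np.sqrt(n_samples)
print("probability =", estimate, "+/-", std_error)
```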

  4. Approach to the open advanced facilities initiative for innovation (strategic use by industry) at the University of Tsukuba, Tandem Accelerator Complex

    The University of Tsukuba Tandem Accelerator Complex (UTTAC) operates the 12UD Pelletron tandem accelerator and the 1 MV Tandetron accelerator for the University's inter-departmental education and research. We have actively advanced collaborative research with other research institutes and industrial users. Since the Open Advanced Facilities Initiative for Innovation by the Ministry of Education, Culture, Sports, Science and Technology started in 2007, 12 industrial experiments have been carried out at the UTTAC. This report describes the efforts by the University's accelerator facility to attract industrial users. (author)

  5. BROHR and SYSFIT - a system of computer codes for the calculation of the beam transport at electrostatic accelerators

    The computer codes BROHR and SYSFIT are presented. Both codes are based on the first-order matrix formalism of ion optics. The code BROHR calculates the trajectories of ions and electrons inside inclined-field accelerating tubes of any type. The influence of the stripping process at tandem accelerators is included by changing the mass and charge of the ions and by increasing the beam emittance. The code SYSFIT is used to calculate arbitrary beam transport systems and the transported beam. Specially requested imaging properties can be realized by parameter variation. Calculated examples are given for both codes. (author)

  6. Specific features of planning algorithms for dispatching software of a digital computer, operating in an accelerator control system

    The main principles of a program dispatching system (DS) for the computer operating in the control and data acquisition system of the accelerator are presented. The DS is intended for planning and executing sequences of operating programs in accordance with the operational features of the accelerator. Modularity and hierarchy are the main characteristics of the system. The ''Planner'' module is described; it regulates requests for use of the processor and memory and ensures their service. The ''Planner'' operation algorithm provides execution of programs simultaneously with the processes occurring in the accelerator, and its planning algorithm checks for the presence of the requested programs and ensures their execution under multiprogram conditions. Brief characteristics of the other modules of the DS, the ''distributor'', ''loader'', and ''interrupter'', are given. The planning algorithms described have been implemented in the DS and found to be in full agreement with all the conditions and limitations of the system.

  7. Rigorous bounds on survival times in circular accelerators and efficient computation of fringe-field transfer maps

    Analyzing the stability of particle motion in storage rings contributes to the general field of stability analysis in weakly nonlinear motion. A method which we call pseudo invariant estimation (PIE) is used to compute lower bounds on the survival time in circular accelerators. The pseudo invariants needed for this approach are computed via nonlinear perturbative normal form theory, and the required global maxima of the highly complicated multivariate functions could only be rigorously bounded with an extension of interval arithmetic. The bounds on the survival times are large enough to be relevant; the same is true for the lower bounds on dynamical apertures, which can also be computed. The PIE method can lead to novel design criteria with the objective of maximizing the survival time. A major effort in the direction of rigorous predictions only makes sense if accurate models of accelerators are available. Fringe fields often have a significant influence on optical properties, but the computation of fringe-field maps by DA based integration is slower by several orders of magnitude than DA evaluation of the propagator for main-field maps. A novel computation of fringe-field effects called symplectic scaling (SYSCA) is introduced. It exploits the advantages of Lie transformations, generating functions, and scaling properties and is extremely accurate. The computation of fringe-field maps is typically made nearly two orders of magnitude faster. (orig.)

  8. ISLAM PROJECT: Interface between the signals from various experiments of a Van de Graaff accelerator and a PDP 11/44 computer

    This paper describes an interface between the signals from an in-beam experiment at a Van de Graaff accelerator and a PDP 11/44 computer. The information corresponding to one spectrum is taken from a digital voltammeter and is processed by means of equipment controlled by an M6809 microprocessor. The software package has been developed in assembly language and has a size of 1/2 K. (Author) 12 refs

  9. High Performance Computer Acoustic Data Accelerator: A New System for Exploring Marine Mammal Acoustics for Big Data Applications

    Dugan, Peter; Zollweg, John; Marian POPESCU; Risch, Denise; Glotin, Herve; LeCun, Yann; Clark, and Christopher

    2015-01-01

    This paper presents a new software model designed for distributed sonic signal detection at runtime using machine learning algorithms, called DeLMA. A new algorithm -- the Acoustic Data-mining Accelerator (ADA) -- is also presented. ADA is a robust yet scalable solution for efficiently processing big sound archives using distributed computing technologies. Together, DeLMA and the ADA algorithm provide a powerful tool currently being used by the Bioacoustics Research Program (BRP) at the Cornell Lab of O...

  10. Strategic Management

    Vančata, Jan

    2012-01-01

    Strategic management is a process in which managers determine the long-term direction of the company, set specific performance targets and develop appropriate strategies to achieve the objectives, considering all relevant internal and external factors of the company, and then take concrete steps toward realization of the selected plan. Why is strategic management important in the company? The answer is quite simple. It assigns a specific role to each person, it leads towards the differenc...

  11. Acceleration of color computer-generated hologram from three-dimensional scenes with texture and depth information

    Shimobaba, Tomoyoshi; Kakue, Takashi; Ito, Tomoyoshi

    2014-06-01

    We propose acceleration of color computer-generated holograms (CGHs) from three-dimensional (3D) scenes that are expressed as texture (RGB) and depth (D) images. These images are obtained by 3D graphics libraries and RGB-D cameras: for example, OpenGL and Kinect, respectively. We can regard them as two-dimensional (2D) cross-sectional images along the depth direction. The generation of CGHs from the 2D cross-sectional images requires multiple diffraction calculations. If we use convolution-based diffraction such as the angular spectrum method, the diffraction calculation takes a long time and requires a large amount of memory, because the convolution diffraction calculation requires the expansion of the 2D cross-sectional images to avoid wraparound noise. In this paper, we first describe the acceleration of the diffraction calculation using "band-limited double-step Fresnel diffraction," which does not require the expansion. Next, we describe color CGH acceleration using color space conversion. In general, color CGHs are generated in RGB color space; however, we need to repeat the same calculation for each color component, so that the computational burden of color CGH generation increases three-fold compared with monochrome CGH generation. We can reduce the computational burden by using YCbCr color space, because the 2D cross-sectional images in YCbCr color space can be down-sampled without impairing image quality.
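    A minimal sketch of the colour-space step follows: a BT.601-style RGB-to-YCbCr conversion with chroma down-sampling (coefficients and down-sampling factors are common choices, not necessarily those of the paper); the band-limited double-step Fresnel diffraction itself is not reproduced.

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Approximate BT.601 RGB -> YCbCr conversion (no offset handling)."""
    m = np.array([[ 0.299,  0.587,  0.114],
                  [-0.169, -0.331,  0.500],
                  [ 0.500, -0.419, -0.081]])
    return rgb @ m.T

rgb = np.random.rand(512, 512, 3)            # texture image from the RGB-D input
ycbcr = rgb_to_ycbcr(rgb)
y = ycbcr[..., 0]                            # keep luminance at full resolution
cb = ycbcr[::2, ::2, 1]                      # down-sample chroma by 2 in each axis
cr = ycbcr[::2, ::2, 2]
print(y.shape, cb.shape, cr.shape)           # (512, 512) (256, 256) (256, 256)
```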

  12. Comparison of acceleration and impact stress as possible loading factors in phonation: a computer modeling study.

    Horácek, Jaromír; Laukkanen, Anne-Maria; Sidlof, Petr; Murphy, Peter; Svec, Jan G

    2009-01-01

    Impact stress (the impact force divided by the contact area of the vocal folds) has been suspected to be the main traumatizing mechanism in voice production, and the main cause of vocal fold nodules. However, there are also other factors, such as repetitive acceleration and deceleration, which may traumatize the vocal fold tissues. Using an aeroelastic model of voice production, the present study quantifies the acceleration and impact stress values in relation to lung pressure, fundamental frequency (F0) and prephonatory glottal half-width. Both impact stress and acceleration were found to increase with lung pressure. Compared to impact stress, acceleration was less dependent on prephonatory glottal width and, thus, on voice production type. Maximum acceleration values were about 5-10 times greater for high F0 (approx. 400 Hz) compared to low F0 (approx. 100 Hz), whereas maximum impact stress remained nearly unchanged. This suggests that acceleration, i.e. the inertia forces, may present a greater load for the vocal folds at high F0 and, in addition to the collision forces, may contribute to the fact that females develop vocal fold nodules and other vocal fold traumas more frequently than males. PMID:19571548

  13. Strategic analysis

    Chládek, Vítězslav

    2012-01-01

    The objective of this Bachelor thesis is to carry out a strategic analysis of a Czech owned limited company, Česky národní podnik s.r.o. This company sells traditional Czech products and manufactures cosmetics and body care products. The first part of the thesis provides theoretical background and methodology that are used later for the strategic analysis of the company. The theory outlined in this paper is based on the analysis of external and internal factors. Firstly the PEST analysis has ...

  14. Strategic analysis

    Bartuňková, Alena

    2008-01-01

    The objective of this Bachelor thesis is to carry out a strategic analysis of a Czech owned limited company, Česky národní podnik s.r.o. This company sells traditional Czech products and manufactures cosmetics and body care products. The first part of the thesis provides theoretical background and methodology that are used later for the strategic analysis of the company. The theory outlined in this paper is based on the analysis of external and internal factors. Firstly the PEST analysis has ...

  15. Electron Accelerator Facilities

    The lecture presents the main aspects of progress in the development of industrial accelerators: adaptation of accelerators primarily built for scientific experiments, increases in electron energy and beam power in certain accelerator designs, computer control systems managing accelerator start-up, routine operation and the technological process, maintenance (diagnostics), refinement of accelerator technology (electrical efficiency, operating cost), more compact and more efficient accelerator designs, reliability improvement according to industrial standards, accelerators for MW power levels and accelerators tailored for specific uses

  16. Strategic development

    Corrall, Sheila

    2009-01-01

    Discusses the education, training and development of library and information workers in relation to strategic management, providing examples of professional education, workplace learning, short courses and extended programmes in the field. Defines the field and identifies its key elements, then explains their significance and suggests methods of development, illustrated by examples from practice. Covers strategy tools, organisational structure, organisational culture, managing change and perfo...

  17. Strategic Responsiveness

    Pedersen, Carsten; Juul Andersen, Torben

    The analysis of major resource-committing decisions is a central focus in the strategy field, but despite decades of rich conceptual and empirical research we still seem distant from a level of understanding that can guide corporate practice under dynamic and unpredictable conditions. Strategic de...

  18. Collective Tuning Initiative: automating and accelerating development and optimization of computing systems

    Fursin, Grigori

    2009-01-01

    Computing systems rarely deliver the best possible performance due to ever-increasing hardware and software complexity and the limitations of current optimization technology. Additional code and architecture optimizations are often required to improve the execution time, size, power consumption, reliability and other important characteristics of computing systems. However, this is often a tedious, repetitive, isolated and time-consuming process. In order to automate, simplify ...

  19. Intro - High Performance Computing for 2015 HPC Annual Report

    Klitsner, Tom [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-10-01

    The recent Executive Order creating the National Strategic Computing Initiative (NSCI) recognizes the value of high performance computing for economic competitiveness and scientific discovery and commits to accelerate delivery of exascale computing. The HPC programs at Sandia – the NNSA ASC program and Sandia’s Institutional HPC Program – are focused on ensuring that Sandia has the resources necessary to deliver computation in the national interest.

  20. Computation of thermal properties via 3D homogenization of multiphase materials using FFT-based accelerated scheme

    Lemaitre, Sophie; Choi, Daniel; Karamian, Philippe

    2015-01-01

    In this paper we study the effective thermal behaviour of a 3D multiphase composite material consisting of three isotropic phases: the matrix, the inclusions and the coating medium. For this purpose we use an accelerated FFT-based scheme, initially proposed in Eyre and Milton (1999), to evaluate the thermal conductivity tensor. The matrix and spherical inclusion media are polymers with similar properties, whereas the coating medium is metallic and hence better conducting; the contrast between the coating and the other media is therefore very large. For our study, we use RVEs (representative volume elements) generated by the RSA (random sequential adsorption) method developed in our previous works; we then compute effective thermal properties using an FFT-based homogenization technique validated by comparison with the direct finite element method. We study the thermal behaviour of the 3D multiphase composite material and show which features should be taken into account to make the computational approach efficient.
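    To make the FFT-based homogenization idea concrete, the following sketch implements the plain Moulinec-Suquet fixed-point scheme for steady heat conduction on a periodic voxel grid; the record uses the faster Eyre-Milton accelerated variant, which is not reproduced here, and the microstructure, grid size and conductivity values below are hypothetical.

```python
import numpy as np

def effective_flux(k, E=(1.0, 0.0, 0.0), k0=None, tol=1e-4, max_iter=1000):
    """Basic Moulinec-Suquet fixed-point FFT scheme for steady heat conduction on a
    periodic voxel grid.  k is an (n, n, n) array of local isotropic conductivities.
    Returns the volume-averaged heat flux <k * grad T>, i.e. K_eff applied to the
    prescribed mean temperature gradient E."""
    n = k.shape[0]
    if k0 is None:
        k0 = 0.5 * (k.min() + k.max())                 # reference medium
    freq = 2.0 * np.pi * np.fft.fftfreq(n)
    xi = np.stack(np.meshgrid(freq, freq, freq, indexing="ij"))   # (3, n, n, n)
    xi2 = (xi ** 2).sum(axis=0)
    xi2[0, 0, 0] = 1.0                                  # avoid 0/0 at the mean mode

    E = np.asarray(E, dtype=float)
    e = np.broadcast_to(E[:, None, None, None], (3,) + k.shape).copy()
    for _ in range(max_iter):
        tau = (k - k0) * e                              # polarization field
        tau_hat = np.fft.fftn(tau, axes=(1, 2, 3))
        # Green operator for conduction: Gamma0(xi) tau = xi (xi . tau_hat) / (k0 |xi|^2)
        proj = (xi * tau_hat).sum(axis=0) / (k0 * xi2)
        e_hat = -xi * proj
        e_hat[:, 0, 0, 0] = E * n ** 3                  # enforce the prescribed mean gradient
        e_new = np.fft.ifftn(e_hat, axes=(1, 2, 3)).real
        if np.linalg.norm(e_new - e) <= tol * np.linalg.norm(e):
            e = e_new
            break
        e = e_new
    return (k * e).mean(axis=(1, 2, 3))

# Toy coated-inclusion RVE (hypothetical values): matrix k=1, inclusion k=5, coating k=50.
# The basic scheme converges slowly at high contrast, which is exactly why an accelerated
# variant such as Eyre-Milton is preferred in the study above.
n = 24
c = np.arange(n) - n / 2
r = np.sqrt(sum(g ** 2 for g in np.meshgrid(c, c, c, indexing="ij")))
k = np.ones((n, n, n)); k[r < 8] = 50.0; k[r < 6] = 5.0
print(effective_flux(k))
```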

  1. Large full band gaps for photonic crystals in two dimensions computed by an inverse method with multigrid acceleration

    Chern, R. L.; Chang, C. Chung; Chang, Chien C.; Hwang, R. R.

    2003-08-01

    In this study, two fast and accurate methods of inverse iteration with multigrid acceleration are developed to compute band structures of photonic crystals of general shape. In particular, we report two-dimensional silicon-air photonic crystals with an optimal full band gap of gap-midgap ratio Δω/ω_mid = 0.2421, which is 30% larger than any previously reported in the literature. The crystals consist of a hexagonal array of circular columns, each connected to its nearest neighbors by slender rectangular rods. A systematic study with respect to the geometric parameters of the photonic crystals was made possible with the present method, allowing a three-dimensional band-gap diagram to be drawn in reasonable computing time.
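    As a minimal point of reference for the inner solver, the sketch below runs plain shift-invert (inverse) iteration on a toy sparse operator; in the record the repeated solves are accelerated with multigrid rather than the sparse LU factorization used here, and the operator, shift and tolerances are all hypothetical.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def inverse_iteration(A, sigma=0.0, tol=1e-10, max_iter=200):
    """Plain shift-invert (inverse) iteration: converges to the eigenpair of A
    whose eigenvalue lies closest to the shift sigma."""
    n = A.shape[0]
    lu = spla.splu((A - sigma * sp.identity(n, format="csc")).tocsc())  # factor once
    x = np.random.default_rng(0).standard_normal(n)
    x /= np.linalg.norm(x)
    lam = sigma
    for _ in range(max_iter):
        y = lu.solve(x)                      # one "inverse" application per step
        x_new = y / np.linalg.norm(y)
        lam_new = x_new @ (A @ x_new)        # Rayleigh quotient estimate
        if abs(lam_new - lam) < tol * max(abs(lam_new), 1.0):
            return lam_new, x_new
        x, lam = x_new, lam_new
    return lam, x

# Toy operator: 1D discrete Laplacian standing in for the discretized Maxwell operator.
n = 200
A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csc")
lam, _ = inverse_iteration(A, sigma=0.0)
print(lam)   # ~ smallest eigenvalue of the discrete Laplacian
```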

  2. Study of irradiation induced restructuring of high burnup fuel - Use of computer and accelerator for fuel science and engineering -

    In order to develop advanced fuel for future LWR reactors, attempts were made to simulate the high-burnup restructuring of the ceramic fuel using out-of-pile accelerator irradiation and computer simulation. The target is to reproduce the principal complex process as a whole. Grain subdivision (sub-grain formation) was successfully reproduced in experiments with sequential combined irradiation. It occurred through recovery of the accumulated dislocations, forming cells and sub-boundaries at grain boundaries and pore surfaces. Details of the grain subdivision mechanism can thus now be studied outside the reactor. Extensive computational studies, first-principles and molecular dynamics, yielded the behavior of fission gas atoms and interstitial oxygen, supporting the understanding of high-burnup restructuring

  3. Strategic patenting and software innovation

    Noel, Michael; Schankerman, Mark

    2013-01-01

    Strategic patenting is widely believed to raise the costs of innovating, especially in industries characterised by cumulative innovation. This paper studies the effects of strategic patenting on R&D, patenting and market value in the computer software industry. We focus on two key aspects: patent portfolio size, which affects bargaining power in patent disputes, and the fragmentation of patent rights (‘patent thickets’) which increases the transaction costs of enforcement. We develop a model ...

  4. Computer-controlled back scattering and sputtering-experiment using a heavy-ion-accelerator

    Control and data acquisition with a PDP 11/40 computer and CAMAC instrumentation are reported for an experiment developed to measure sputtering yields and energy losses for heavy 100-300 keV ions in thin metal foils. Besides a quadrupole mass filter or a bending magnet, a multichannel analyser is coupled to the computer, so that pulse-height analysis can also be performed under computer control. The CAMAC instrumentation and measuring programs are built in a modular form to enable easy application to other experimental problems. (orig.)

  5. Advanced quadrature sets and acceleration and preconditioning techniques for the discrete ordinates method in parallel computing environments

    Longoni, Gianluca

    In the nuclear science and engineering field, radiation transport calculations play a key role in the design and optimization of nuclear devices. The linear Boltzmann equation describes the angular, energy and spatial variations of the particle or radiation distribution. The discrete ordinates method (SN) is the most widely used technique for solving the linear Boltzmann equation. However, for realistic problems, the memory and computing-time requirements call for the use of supercomputers. This research is devoted to the development of new formulations for the SN method, especially for highly angle-dependent problems, in parallel environments. The present research work addresses two main issues affecting the accuracy and performance of SN transport theory methods: quadrature sets and acceleration techniques. New advanced quadrature techniques which allow for large numbers of angles with a capability for local angular refinement have been developed. These techniques have been integrated into the 3-D SN PENTRAN (Parallel Environment Neutral-particle TRANsport) code and applied to highly angle-dependent problems, such as CT-scan devices, which are widely used to obtain detailed 3-D images for industrial/medical applications. In addition, the accurate simulation of core physics and shielding problems with strong heterogeneities and transport effects requires the numerical solution of the transport equation. In general, the convergence rate of the solution methods for the transport equation is reduced for large problems with optically thick regions and scattering ratios approaching unity. To remedy this situation, new acceleration algorithms based on the Even-Parity Simplified SN (EP-SSN) method have been developed. A new stand-alone code system, PENSSn (Parallel Environment Neutral-particle Simplified SN), has been developed based on the EP-SSN method. The code is designed for parallel computing environments with spatial, angular and hybrid (spatial/angular) domain

  6. Accelerated time-of-flight (TOF) PET image reconstruction using TOF bin subsetization and TOF weighting matrix pre-computation

    Mehranian, Abolfazl; Kotasidis, Fotis; Zaidi, Habib

    2016-02-01

    FDG-PET study also revealed that, for the same noise level, a higher contrast recovery can be obtained by increasing the number of TOF subsets. It can be concluded that the proposed TOF weighting matrix pre-computation and subsetization approaches enable further acceleration and improved convergence properties of the OSEM and MLEM algorithms, thus opening new avenues for accelerated TOF PET image reconstruction.

  7. Accelerating Dust Storm Simulation by Balancing Task Allocation in Parallel Computing Environment

    Gui, Z.; Yang, C.; XIA, J.; Huang, Q.; YU, M.

    2013-12-01

    Dust storms have serious negative impacts on the environment, human health, and assets. Continuing global climate change has increased the frequency and intensity of dust storms in recent decades. To better understand and predict the distribution, intensity and structure of dust storms, a series of dust storm models have been developed, such as the Dust Regional Atmospheric Model (DREAM), the NMM meteorological module (NMM-dust) and the Chinese Unified Atmospheric Chemistry Environment for Dust (CUACE/Dust). The development and application of these models have contributed significantly to both scientific research and our daily life. However, dust storm simulation is a data- and computing-intensive process. Normally, a simulation of a single dust storm event may take hours or even days to run, which seriously impacts the timeliness of prediction and potential applications. To speed up the process, high performance computing is widely adopted. By partitioning a large study area into small subdomains according to their geographic location and executing them on different computing nodes in parallel, the computing performance can be significantly improved. Since spatiotemporal correlations exist in the geophysical process of dust storm simulation, each subdomain allocated to a node needs to communicate with geographically adjacent subdomains to exchange data. Inappropriate allocations may introduce imbalanced task loads and unnecessary communications among computing nodes. Therefore, the task allocation method is a key factor, which may determine the feasibility of the parallelization. The allocation algorithm needs to carefully balance the computing cost and communication cost of each computing node to minimize total execution time and reduce the overall communication cost of the entire system. This presentation introduces two algorithms for such allocation and compares them with the evenly distributed allocation method. Specifically, 1) In order to get optimized solutions, a
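    As a minimal point of reference for the allocation problem described above, the sketch below shows a plain greedy longest-processing-time assignment of subdomains to nodes by estimated compute cost; the record's algorithms additionally account for communication between adjacent subdomains, which this sketch ignores, and the subdomain names and costs are hypothetical.

```python
import heapq

def allocate(subdomains, n_nodes):
    """Greedy longest-processing-time allocation: assign each subdomain (with an
    estimated compute cost) to the currently least-loaded node."""
    # heap of (current load, node id); assignment maps subdomain id -> node id
    heap = [(0.0, node) for node in range(n_nodes)]
    heapq.heapify(heap)
    assignment = {}
    for sub_id, cost in sorted(subdomains.items(), key=lambda kv: -kv[1]):
        load, node = heapq.heappop(heap)
        assignment[sub_id] = node
        heapq.heappush(heap, (load + cost, node))
    return assignment

# Hypothetical per-subdomain compute costs (e.g. proportional to grid cells x time steps)
costs = {"d00": 8.0, "d01": 5.5, "d02": 5.0, "d10": 7.5, "d11": 3.0, "d12": 2.5}
print(allocate(costs, n_nodes=3))
```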

  8. Strategic Classification

    Hardt, Moritz; Megiddo, Nimrod; Papadimitriou, Christos; Wootters, Mary

    2015-01-01

    Machine learning relies on the assumption that unseen test instances of a classification problem follow the same distribution as observed training data. However, this principle can break down when machine learning is used to make important decisions about the welfare (employment, education, health) of strategic individuals. Knowing information about the classifier, such individuals may manipulate their attributes in order to obtain a better classification outcome. As a result of this behavior...

  9. Strategic Marketing

    Potter, Ned

    2012-01-01

    This chapter from The Library Marketing Toolkit focuses on marketing strategy. Marketing is more successful when it happens as part of a constantly-renewing cycle. The aim of this chapter is to demystify the process of strategic marketing, simplifying it into seven key stages with advice on how to implement each one. Particular emphasis is put on dividing your audience and potential audience into segments, and marketing different messages to each group. It includes case studies from Terr...

  10. Accelerating selected columns of the density matrix computations via approximate column selection

    Damle, Anil; Ying, Lexing

    2016-01-01

    Localized representation of the Kohn-Sham subspace plays an important role in quantum chemistry and materials science. The recently developed selected columns of the density matrix (SCDM) method [J. Chem. Theory Comput. 11, 1463, 2015] is a simple and robust procedure for finding a localized representation of a set of Kohn-Sham orbitals from an insulating system. The SCDM method allows the direct construction of a well conditioned (or even orthonormal) and localized basis for the Kohn-Sham subspace. The SCDM procedure avoids the use of an optimization procedure and does not depend on any adjustable parameters. The most computationally expensive step of the SCDM method is a column pivoted QR factorization that identifies the important columns for constructing the localized basis set. In this paper, we develop a two stage approximate column selection strategy to find the important columns at much lower computational cost. We demonstrate the effectiveness of this process using a dissociation process of a BH$_{3}...
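    The column selection at the heart of SCDM can be illustrated with an off-the-shelf column-pivoted QR factorization, as sketched below. This shows only the exact, expensive baseline step; the two-stage approximate selection proposed in the record is not reproduced here, and the matrix sizes are hypothetical.

```python
import numpy as np
from scipy.linalg import qr

def select_columns(Psi, k):
    """Pick k 'important' columns via a column-pivoted QR factorization of Psi^T,
    in the spirit of selecting columns of the density matrix P = Psi Psi^T."""
    # QR with column pivoting on Psi^T ranks the grid points (columns of Psi^T)
    _, _, piv = qr(Psi.T, mode="economic", pivoting=True)
    return piv[:k]                      # indices of the selected grid points / columns

# Toy data: 500 grid points, 20 "orbitals"
rng = np.random.default_rng(1)
Psi = rng.standard_normal((500, 20))
cols = select_columns(Psi, k=20)
print(cols)
```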

  11. PIC codes for plasma accelerators on emerging computer architectures (GPUS, Multicore/Manycore CPUS)

    Vincenti, Henri

    2016-03-01

    The advent of exascale computers will enable 3D simulations of new laser-plasma interaction regimes that were previously out of reach of current petascale computers. However, the paradigm used to write current PIC codes will have to change in order to fully exploit the potential of these new computing architectures. Indeed, achieving exascale computing facilities in the next decade will be a great challenge in terms of energy consumption and will imply hardware developments directly impacting our way of implementing PIC codes. As data movement (from die to network) is by far the most energy-consuming part of an algorithm, future computers will tend to increase memory locality at the hardware level and reduce energy consumption related to data movement by using more and more cores on each compute node ("fat nodes") with a reduced clock speed to allow for efficient cooling. To compensate for the frequency decrease, CPU vendors are making use of long SIMD instruction registers that are able to process multiple data with one arithmetic operator in one clock cycle. SIMD register length is expected to double every four years. GPUs also have a reduced clock speed per core and can process Multiple Instructions on Multiple Data (MIMD). At the software level, Particle-In-Cell (PIC) codes will thus have to achieve both good memory locality and vectorization (for multicore/manycore CPUs) to fully take advantage of these upcoming architectures. In this talk, we present the portable solutions we implemented in our high-performance skeleton PIC code PICSAR to achieve both good memory locality and cache reuse as well as good vectorization on SIMD architectures. We also present the portable solutions used to parallelize the pseudo-spectral quasi-cylindrical code FBPIC on GPUs using the Numba python compiler.
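    To illustrate what "vectorization-friendly" means for a PIC particle push, the sketch below stores particles as structure-of-arrays and updates positions with whole-array operations instead of a per-particle loop; it is a generic illustration in normalized units with hypothetical array sizes, not PICSAR or FBPIC code.

```python
import numpy as np

# Structure-of-arrays particle storage: contiguous per-component arrays are what
# lets a compiler (or NumPy) issue wide SIMD loads instead of strided gathers.
n_part = 1_000_000
rng = np.random.default_rng(0)
x,  y,  z  = (rng.random(n_part) for _ in range(3))
ux, uy, uz = (rng.standard_normal(n_part) for _ in range(3))

def push_positions(x, y, z, ux, uy, uz, dt):
    """Relativistic position update x += v*dt with v = u/gamma (c = 1 units), written
    as whole-array (vectorizable) operations rather than a per-particle loop."""
    inv_gamma = 1.0 / np.sqrt(1.0 + ux**2 + uy**2 + uz**2)
    return (x + ux * inv_gamma * dt,
            y + uy * inv_gamma * dt,
            z + uz * inv_gamma * dt)

x, y, z = push_positions(x, y, z, ux, uy, uz, dt=0.01)
```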

  12. LCODE: a parallel quasistatic code for computationally heavy problems of plasma wakefield acceleration

    Sosedkin, Alexander

    2015-01-01

    LCODE is a freely-distributed quasistatic 2D3V code for simulating plasma wakefield acceleration, mainly specialized at resource-efficient studies of long-term propagation of ultrarelativistic particle beams in plasmas. The beam is modeled with fully relativistic macro-particles in a simulation window copropagating with the light velocity; the plasma can be simulated with either kinetic or fluid model. Several techniques are used to obtain exceptional numerical stability and precision while maintaining high resource efficiency, enabling LCODE to simulate the evolution of long particle beams over long propagation distances even on a laptop. A recent upgrade enabled LCODE to perform the calculations in parallel. A pipeline of several LCODE processes communicating via MPI (Message-Passing Interface) is capable of executing multiple consecutive time steps of the simulation in a single pass. This approach can speed up the calculations by hundreds of times.

  13. LCODE: A parallel quasistatic code for computationally heavy problems of plasma wakefield acceleration

    Sosedkin, A. P.; Lotov, K. V.

    2016-09-01

    LCODE is a freely distributed quasistatic 2D3V code for simulating plasma wakefield acceleration, mainly specialized at resource-efficient studies of long-term propagation of ultrarelativistic particle beams in plasmas. The beam is modeled with fully relativistic macro-particles in a simulation window copropagating with the light velocity; the plasma can be simulated with either kinetic or fluid model. Several techniques are used to obtain exceptional numerical stability and precision while maintaining high resource efficiency, enabling LCODE to simulate the evolution of long particle beams over long propagation distances even on a laptop. A recent upgrade enabled LCODE to perform the calculations in parallel. A pipeline of several LCODE processes communicating via MPI (Message-Passing Interface) is capable of executing multiple consecutive time steps of the simulation in a single pass. This approach can speed up the calculations by hundreds of times.
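    The parallelization described above pipelines consecutive time steps across MPI ranks. The following is a bare-bones sketch of that communication pattern using mpi4py (rank r works on time step r and forwards each beam slice to rank r+1); the slice contents and physics are placeholders, and this is not LCODE's actual implementation.

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_slices = 64                      # longitudinal beam slices processed front to back
beam_slice = np.zeros(1000)        # placeholder for the macro-particle data of one slice

for s in range(n_slices):
    if rank > 0:
        comm.Recv(beam_slice, source=rank - 1, tag=s)   # slice after step (rank - 1)
    # --- advance the plasma response under this slice for "my" time step (stub) ---
    beam_slice += 0.0
    if rank < size - 1:
        comm.Send(beam_slice, dest=rank + 1, tag=s)     # hand it to the next time step
```

Run with, e.g., `mpiexec -n 4 python pipeline.py`; once the pipeline fills, all ranks (time steps) work on different slices simultaneously.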

  14. ActiWiz – optimizing your nuclide inventory at proton accelerators with a computer code

    Vincke, Helmut

    2014-01-01

    When operating an accelerator one always faces unwanted but inevitable beam losses. These result in activation of adjacent material, which in turn has an obvious impact on safety and handling constraints. One of the key parameters responsible for activation is the chemical composition of the material, which can often be optimized in that respect. In order to facilitate this task for non-expert users as well, the ActiWiz software has been developed at CERN. Based on a large amount of generic FLUKA Monte Carlo simulations, the software applies a specifically developed risk assessment model to support decision makers, especially during the design phase as well as in common operational work in the domain of radiation protection.

  15. Strategic Windows

    Risberg, Annette; King, David R.; Meglio, Olimpia

    We examine the importance of speed and timing in acquisitions with a framework that identifies management considerations for three interrelated acquisition phases (selection, deal closure and integration) from an acquiring firm’s perspective. Using a process perspective, we pinpoint items within ...... acquisition phases that relate to speed. In particular, we present the idea of time-bounded strategic windows in acquisitions consistent with the notion of kairòs, where opportunities appear and must be pursued at the right time for success to occur....

  16. Strategic Management

    Jeffs, Chris

    2008-01-01

    The Sage Course Companion on Strategic Management is an accessible introduction to the subject that avoids lengthy debate in order to focus on the core concepts. It will help the reader to develop their understanding of the key theories, whilst enabling them to bring diverse topics together in line with course requirements. The Sage Course Companion also provides advice on getting the most from your course work; help with analysing case studies and tips on how to prepare for examinations. Designed to complement existing strategy textbooks, the Companion provides: -Quick and easy access to the

  17. Computer programme for control and maintenance and object oriented database: application to the realisation of a particle accelerator, the VIVITRON

    The command and control system for the Vivitron, a new-generation electrostatic particle accelerator, has been implemented using workstations and VME-standard front-end computers, all within a UNIX/VxWorks environment. This architecture is distributed over an Ethernet network. Measurements and commands for the different sensors and actuators are concentrated in the front-end computers. The development of a second version of the software, giving better performance and more functionality, is described. X11-based communication is used to transmit the information needed to display parameters from the front-end computers on the graphic screens. All other communication between processes uses the Remote Procedure Call (RPC) method. The design of the system is based largely on the object-oriented database O2, which integrates a full description of the equipment and the code necessary to manage it. This code is generated by the database. This innovation permits easy maintenance of the system and removes the need for a specialist when adding new equipment. The new version of the command and control system has been progressively installed since August 1995. (author)

  18. Jacobian-free Newton-Krylov methods with GPU acceleration for computing nonlinear ship wave patterns

    Pethiyagoda, Ravindra; Moroney, Timothy J; Back, Julian M

    2014-01-01

    The nonlinear problem of steady free-surface flow past a submerged source is considered as a case study for three-dimensional ship wave problems. Of particular interest is the distinctive wedge-shaped wave pattern that forms on the surface of the fluid. By reformulating the governing equations with a standard boundary-integral method, we derive a system of nonlinear algebraic equations that enforce a singular integro-differential equation at each midpoint on a two-dimensional mesh. Our contribution is to solve the system of equations with a Jacobian-free Newton-Krylov method together with a banded preconditioner that is carefully constructed with entries taken from the Jacobian of the linearised problem. Further, we are able to utilise graphics processing unit acceleration to significantly increase the grid refinement and decrease the run-time of our solutions in comparison to schemes that are presently employed in the literature. Our approach provides opportunities to explore the nonlinear features of three-...
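    For readers unfamiliar with the Jacobian-free Newton-Krylov idea used above, the sketch below solves a toy nonlinear boundary-value problem with SciPy's newton_krylov, which only ever evaluates the residual (no Jacobian is assembled). The residual, grid size and tolerances are hypothetical; a preconditioner such as the record's banded preconditioner would be supplied through the inner_M argument.

```python
import numpy as np
from scipy.optimize import newton_krylov

def residual(u):
    """Toy 1D Bratu-style residual F(u) = u'' + exp(u) with zero Dirichlet boundaries."""
    n = u.size
    h = 1.0 / (n + 1)
    F = np.empty_like(u)
    F[0]    = (u[1] - 2*u[0]) / h**2 + np.exp(u[0])
    F[-1]   = (u[-2] - 2*u[-1]) / h**2 + np.exp(u[-1])
    F[1:-1] = (u[2:] - 2*u[1:-1] + u[:-2]) / h**2 + np.exp(u[1:-1])
    return F

u0 = np.zeros(200)
# method="lgmres" is the inner Krylov solver; only residual evaluations are needed.
sol = newton_krylov(residual, u0, method="lgmres", f_tol=1e-8)
print(np.abs(residual(sol)).max())
```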

  19. Strategic plan

    In November 1989, the Office of Environmental Restoration and Waste Management (EM) was formed within the US Department of Energy (DOE). The EM Program was born of the recognition that a significant national effort was necessary to clean up over 45 years' worth of environmental pollution from DOE operations, including the design and manufacture of nuclear materials and weapons. Within EM, the Deputy Assistant Secretary for Environmental Restoration (EM-40) has been assigned responsibility for the assessment and cleanup of areas and facilities that are no longer part of active DOE operations but may be contaminated with varying levels and quantities of hazardous, radioactive, and mixed waste. Decontamination and decommissioning (D&D) activities are managed as an integral part of Environmental Restoration cleanup efforts. The Office of Environmental Restoration ensures that risks to the environment and to human health and safety are either eliminated or reduced to prescribed, acceptable levels. This Strategic Plan has been developed to articulate the vision of the Deputy Assistant Secretary for Environmental Restoration and to crystallize the specific objectives of the Environmental Restoration Program. The document summarizes the key planning assumptions that guide or constrain the strategic planning effort, outlines the Environmental Restoration Program's specific objectives, and identifies barriers that could limit the Program's success

  20. Strategic Engagement

    2007-01-01

    “Pakistan regards China as a strategic partner and the bilateral ties have endured the test of time.” Pakistani Prime Minister Shaukat Aziz made the comment during his four-day official visit to China on April 16, when he met Chinese President Hu Jintao, Premier Wen Jiabao and NPC Standing Committee Chairman Wu Bangguo. His visit to China also included a trip to Boao, where he delivered a keynote speech at the Boao Forum for Asia held on April 20-22. During his stay in Beijing, the two countries signed 13 agreements on cooperation in the fields of space, telecommunications, education and legal assistance, which enhanced an already close strategic partnership. In an interview with Beijing Review reporter Pan Shuangqin, Prime Minister Aziz addressed a number of issues ranging from Asia's search for a win-win economic situation to the influence of Sino-Pakistani relations on regional peace.

  1. Integrating SOA and cloud computing technology into enterprise informatization strategic planning

    牛昊天

    2014-01-01

    Applying SOA and cloud computing technology to the formulation of enterprise informatization strategic planning is a powerful means of supporting companies in achieving their established development-strategy goals, and has positive significance for business process innovation, improved operational efficiency and reduced information operating costs. This paper analyzes the problems present in enterprise informatization strategic planning and the associated planning approaches, examines the advantages and limitations of SOA technology and cloud computing technology, and, building on the strengths of both, designs an integrated SOA and cloud computing architecture to better serve enterprise informatization.

  2. Tempest: Accelerated MS/MS Database Search Software for Heterogeneous Computing Platforms.

    Adamo, Mark E; Gerber, Scott A

    2016-01-01

    MS/MS database search algorithms derive a set of candidate peptide sequences from in silico digest of a protein sequence database, and compute theoretical fragmentation patterns to match these candidates against observed MS/MS spectra. The original Tempest publication described these operations mapped to a CPU-GPU model, in which the CPU (central processing unit) generates peptide candidates that are asynchronously sent to a discrete GPU (graphics processing unit) to be scored against experimental spectra in parallel. The current version of Tempest expands this model, incorporating OpenCL to offer seamless parallelization across multicore CPUs, GPUs, integrated graphics chips, and general-purpose coprocessors. Three protocols describe how to configure and run a Tempest search, including discussion of how to leverage Tempest's unique feature set to produce optimal results. © 2016 by John Wiley & Sons, Inc. PMID:27603022

  3. Computational acceleration of orbital neutral sensor ionizer simulation through phenomena separation

    Font, Gabriel I.

    2016-07-01

    Simulation of orbital phenomena is often difficult because of the non-continuum nature of the flow, which forces the use of particle methods, and the disparate time scales, which make long run times necessary. In this work, the computational workload has been reduced by taking advantage of the low number of collisions between different species. This allows each population of particles to be brought into convergence separately, using a time step size optimized for its particular motion. The converged populations are then brought together to simulate low-probability phenomena, such as ionization or excitation, on much longer time scales. This technique reduces run times by a factor of 10^3-10^4. The technique was applied to the simulation of a low earth orbit neutral species sensor with an ionizing element. Comparison with laboratory experiments of ion impacts generated by electron flux shows very good agreement.

  4. CudaPre3D: An Alternative Preprocessing Algorithm for Accelerating 3D Convex Hull Computation on the GPU

    MEI, G.

    2015-05-01

    In the computation of convex hulls for point sets, a preprocessing procedure that filters the input points by discarding non-extreme points is commonly used to improve computational efficiency. We previously proposed a quite straightforward preprocessing approach for accelerating 2D convex hull computation on the GPU. In this paper, we extend that algorithm to 3D cases. The basic ideas behind the two preprocessing algorithms are similar: first, several groups of extreme points are found according to the original set of input points and several rotated versions of the input set; then, a convex polyhedron is created using the found extreme points; and finally those interior points located inside the formed convex polyhedron are discarded. Experimental results show that the proposed preprocessing algorithm achieves speedups of about 4x on average, and 5x to 6x in the best cases, over computation without preprocessing. In addition, more than 95 percent of the input points can be discarded in most experimental tests.
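    A CPU-side sketch of the filtering idea (not the GPU implementation from the record) is shown below: extreme points along several randomly transformed axes define an inner polyhedron, and every point strictly inside it can be safely discarded before the final hull computation. The number of transforms and the point counts are hypothetical.

```python
import numpy as np
from scipy.spatial import ConvexHull, Delaunay

def filter_interior(points, n_transforms=4, seed=0):
    """Collect axis-extreme points of several randomly rotated copies of the set, form a
    polyhedron from them, and discard every point strictly inside it.  For points in
    general position, the survivors suffice for the exact convex hull."""
    rng = np.random.default_rng(seed)
    extreme_idx = set()
    for _ in range(n_transforms):
        Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))    # random orthogonal transform
        rotated = points @ Q.T
        for axis in range(3):
            extreme_idx.add(int(np.argmin(rotated[:, axis])))
            extreme_idx.add(int(np.argmax(rotated[:, axis])))
    poly = Delaunay(points[sorted(extreme_idx)])            # inner polyhedron
    keep = poly.find_simplex(points) < 0                    # strictly outside it
    keep[list(extreme_idx)] = True                          # never drop the extremes
    return points[keep]

points = np.random.default_rng(1).random((100_000, 3))
survivors = filter_interior(points)
hull = ConvexHull(survivors)        # same hull as ConvexHull(points), far fewer inputs
print(len(points), "->", len(survivors), "points,", len(hull.vertices), "hull vertices")
```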

  5. On "enabling systems - A strategic review"

    Nayak, M.R.

    "Enabling Systems" is a formal strategic planning exercise that sets the organization's direction for the 21st century. Information technology (IT), the Computer Centre (CC) and the Analytical Laboratory (AnLab) are identified as three important...

  6. RF accelerators for fusion and strategic defense

    RF linacs have a place in fusion, either in an auxiliary role for materials testing or as direct drivers in heavy-ion fusion. For SDI, particle-beam technology is an attractive candidate for discrimination missions and also for lethality missions. The free-electron laser is also a forerunner among the laser candidates. In many ways, there is less physics development required for these devices, and there is an existing high-power technology. But in all of these technologies, in order to scale them up and then space-base them, there is an enormous amount of work yet to be done

  7. Accelerated Aging of BKC 44306-10 Rigid Polyurethane Foam: FT-IR Spectroscopy, Dimensional Analysis, and Micro Computed Tomography

    Gilbertson, Robert D. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Patterson, Brian M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Smith, Zachary [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2014-01-02

    An accelerated aging study of BKC 44306-10 rigid polyurethane foam was carried out. Foam samples were aged in a nitrogen atmosphere at three different temperatures: 50 °C, 65 °C, and 80 °C. Foam samples were periodically removed from the aging canisters at 1, 3, 6, 9, 12, and 15 month intervals, at which point FT-IR spectroscopy, dimensional analysis, and mechanical testing experiments were performed. Micro computed tomography imaging was also employed to study the morphology of the foams. Over the course of the aging study the foams decreased in size by roughly 0.001 inches per inch of foam. Micro CT showed the heterogeneous nature of the foam structure, likely resulting from flow effects during the molding process. The effect of aging on the compression and tensile strength of the foam was minor and no cause for concern. FT-IR spectroscopy was used to follow the foam chemistry; however, it was difficult to draw definitive conclusions about changes in the chemical nature of the materials due to large variability throughout the samples.

  8. Strategizing NATO's Narratives

    Nissen, Thomas Elkjer

    2014-01-01

    , implementation structures, and capabilities can be used to inform the construction of strategic narratives in NATO. Using Libya as a case study he explains that the formulation and implementation of strategic narratives in NATO currently is a fragmented process that rarely takes into account the grand strategic...... objectives formulated in NATO headquarters. Consequently, the future construction of strategic narratives in NATO must be based on the strategic variables....

  9. Relation between Strategic Management Accounting and Strategic Management

    Libuše Šoljaková

    2013-01-01

    This paper analyses the relation between strategic management accounting and strategic management. Strategic management accounting should provide information support to strategic management. But most definitions of management accounting also include elements of strategic management. Often strategic management accounting and strategic management overlap very strongly.

  10. Strategic management or strategic planning for defense?

    Tritten, James John; Roberts, Nancy Charlotte

    1989-01-01

    Approved for public release; distribution is unlimited. This report describes problems associated with strategic planning and strategic management within DoD. The authors offer a series of suggested reforms to enhance mono-level planning and management within DoD, primarily by closer ties with industry planning groups, education, organizational structure, management information systems, and better integration. Additional sponsors are: OSD Competitive Strategies Office, OSD Strategic Planning B...

  11. Strategic information security

    Wylder, John

    2003-01-01

    Introduction to Strategic Information Security; What Does It Mean to Be Strategic?; Information Security Defined; The Security Professional's View of Information Security; The Business View of Information Security; Changes Affecting Business and Risk Management; Strategic Security; Strategic Security or Security Strategy?; Monitoring and Measurement; Moving Forward. ORGANIZATIONAL ISSUES: The Life Cycles of Security Managers; Introduction; The Information Security Manager's Responsibilities; The Evolution of Data Security to Information Security; The Repository Concept; Changing Job Requirements; Business Life Cycles

  12. Strategic Risk Management

    Sax, Johanna

    2015-01-01

    The aim of this thesis is to contribute to the literature with an investigation into strategic risk management practices from a strategic management and management accounting perspective. Previous research in strategic risk management has not provided sufficient evidence on the mechanisms behind firm practices, processes and tools for managing strategic risks, and their contingencies for value creation. In particular, the purpose of the thesis has been to fill the gaps in the l...

  13. Strategic Leadership Reconsidered

    Davies, Brent; Davies, Barbara J.

    2005-01-01

    This paper will address the challenge of how strategic leadership can be defined and articulated to provide a framework for developing a strategically focused school, drawing on an NCSL research project. The paper is structured into three main parts. Part one outlines the elements that comprise a strategically focused school, develops an…

  14. Using compute unified device architecture-enabled graphic processing unit to accelerate fast Fourier transform-based regression Kriging interpolation on a MODIS land surface temperature image

    Hu, Hongda; Shu, Hong; Hu, Zhiyong; Xu, Jianhui

    2016-04-01

    Kriging interpolation provides the best linear unbiased estimation for unobserved locations, but its heavy computation limits the manageable problem size in practice. To address this issue, an efficient interpolation procedure incorporating the fast Fourier transform (FFT) was developed. Extending this efficient approach, we propose an FFT-based parallel algorithm to accelerate regression Kriging interpolation on an NVIDIA® compute unified device architecture (CUDA)-enabled graphic processing unit (GPU). A high-performance cuFFT library in the CUDA toolkit was introduced to execute computation-intensive FFTs on the GPU, and three time-consuming processes were redesigned as kernel functions and executed on the CUDA cores. A MODIS land surface temperature 8-day image tile at a resolution of 1 km was resampled to create experimental datasets at eight different output resolutions. These datasets were used as the interpolation grids with different sizes in a comparative experiment. Experimental results show that speedup of the FFT-based regression Kriging interpolation accelerated by GPU can exceed 1000 when processing datasets with large grid sizes, as compared to the traditional Kriging interpolation running on the CPU. These results demonstrate that the combination of FFT methods and GPU-based parallel computing techniques greatly improves the computational performance without loss of precision.
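    The performance gain above comes from doing the grid-wide, shift-invariant part of the interpolation in the frequency domain. As a rough, CPU-side illustration of that core step (NumPy's FFT standing in for cuFFT on the GPU, and a hypothetical Gaussian kernel standing in for the actual covariance model), consider:

```python
import numpy as np

def fft_convolve2d(field, kernel):
    """Apply a stationary (shift-invariant) kernel to a regular 2D grid via the FFT:
    a grid-wide convolution costs O(N log N) instead of O(N^2)."""
    ny, nx = field.shape
    ky, kx = kernel.shape
    shape = (ny + ky - 1, nx + kx - 1)                 # zero-pad to avoid wrap-around
    out = np.fft.irfft2(np.fft.rfft2(field, shape) * np.fft.rfft2(kernel, shape), shape)
    return out[ky // 2: ky // 2 + ny, kx // 2: kx // 2 + nx]   # crop to original grid

# Hypothetical example: smooth residuals on a regular grid with a Gaussian kernel.
rng = np.random.default_rng(0)
residuals = rng.standard_normal((512, 512))
yy, xx = np.mgrid[-16:17, -16:17]
kernel = np.exp(-(xx**2 + yy**2) / (2 * 6.0**2))
kernel /= kernel.sum()
smoothed = fft_convolve2d(residuals, kernel)
```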

  15. Optimal Strategic Pricing of Reproducible Consumer Products

    Fernando Nascimento; Vanhonacker, Wilfried R.

    1988-01-01

    This paper investigates the strategic pricing of consumer durable products which can be acquired through either purchase or reproduction (e.g., computer software). As copy piracy results in an opportunity loss, its adverse effect on profits needs to be incorporated in strategic decisions such as pricing. Using a dual diffusion model which parsimoniously describes sales and copying, and employing control theory methodology, optimal price trajectories are derived for the period of monopoly. The...

  16. Strategic Cooperation in Cost Sharing Games

    Hoefer, Martin

    2010-01-01

    In this paper we consider strategic cost sharing games with so-called arbitrary sharing based on various combinatorial optimization problems, such as vertex and set cover, facility location, and network design problems. We concentrate on the existence and computational complexity of strong equilibria, in which no coalition can improve the cost of each of its members. Our main result reveals a connection between strong equilibrium in strategic games and the core in traditional coalitional cost...

  17. Learning without experience: Understanding the strategic implications of deregulation and competition in the electricity industry

    Lomi, A. [School of Economics, University of Bologna, Bologna (Italy); Larsen, E.R. [Dept. of Managements Systems and Information, City University Business School, London (United Kingdom)

    1998-11-01

    As deregulation of the electricity industry continues to gain momentum around the world, electricity companies face unprecedented challenges. Competitive complexity and intensity will increase substantially as deregulated companies find themselves competing in new industries, with new rules, against unfamiliar competitors - and without any history to learn from. We describe the different kinds of strategic issues that newly deregulated utility companies are facing, and the risks those issues entail. We identify a number of problems induced by experiential learning under conditions of competence-destroying change, and we illustrate ways in which companies can activate history-independent learning processes. We suggest that microworlds - a new generation of computer-based learning environments made possible by conceptual and technological progress in the fields of system dynamics and systems thinking - are particularly appropriate tools to accelerate and enhance organizational and managerial learning under conditions of increased competitive complexity. (au)

  18. How Strategic are Strategic Information Systems?

    Alan Eardley; Philip Powell

    1996-01-01

    There are many examples of information systems which are claimed to have created and sustained competitive advantage, allowed beneficial collaboration or simply ensured the continued survival of the organisations which used them. These systems are often referred to as being 'strategic'. This paper argues that many of the examples of strategic information systems as reported in the literature are not sufficiently critical in determining whether the systems meet the generally accepted definition...

  19. Emerging Multinational Companies and Strategic Fit

    Gammeltoft, Peter; Filatotchev, Igor; Hobdari, Bersant

    2012-01-01

    framework of strategic fit. This theoretical approach may provide important insights concerning both the original impetus to the contemporary acceleration of these flows and their specific features. By building on the early literature on fit in strategic management we outline an institutional framework...... which considers flows of outward investment from emerging economies as framed by institutional pressures at the firm level towards achieving fit between the environment, strategies, structures, resources and practices of the firm. For the multinational firm this fit must be attained along multiple...

  20. 7 March 2013 -Stanford University Professor N. McKeown FREng, Electrical Engineering and Computer Science and B. Leslie, Creative Labs visiting CERN Control Centre and the LHC tunnel with Director for Accelerators and Technology S. Myers.

    Anna Pantelia

    2013-01-01

    7 March 2013 -Stanford University Professor N. McKeown FREng, Electrical Engineering and Computer Science and B. Leslie, Creative Labs visiting CERN Control Centre and the LHC tunnel with Director for Accelerators and Technology S. Myers.

  1. Strategic planning and republicanism

    Mazza Luigi

    2010-01-01

    The paper develops two main linked themes: (i) strategic planning reveals in practice limits that are hard to overcome; (ii) a complete planning system is effective only in the framework of a republican political, social and government culture. It is argued that the growing disappointment associated with strategic planning practices may be due to excessive expectations, and the difficulties encountered by strategic planning are traced to three main issues: (a) the relationship between politics ...

  2. Strategic thinking in business

    Špatenková, Lenka

    2011-01-01

    Strategic thinking in business – summary: This diploma thesis deals with the issue of strategic thinking in business, which is an inseparable part of the development of company strategy. The application of the principles of strategic thinking, as well as the processes and analyses of strategic management, is shown on the example of REBYTO BEAR Ltd. The theoretical background chapter provides an explanation of the important terminology needed for the practical use of strate...

  3. Strategic Marketing Planning Audit

    Violeta Radulescu

    2012-01-01

    Market-oriented strategic planning is the process of defining and maintaining a viable relationship between the objectives, personnel training and resources of an organization on the one hand, and market conditions on the other. Strategic marketing planning is an integral part of the organization's strategic planning process. For a marketing organization to obtain a competitive advantage, and also to measure the effectiveness of its marketing actions, the company is required to ...

  4. Telepreneurship : Strategic bliss

    Erasmus, Izak Pierre

    2010-01-01

    The strategic management literature indirectly treats entrepreneurship as a subset of strategy, as reflected in the historical evolution of the field, specifically that of the Entrepreneurship division of the Academy of Management. Schendel (1990) placed great emphasis on the topic of entrepreneurship and admitted that some argue entrepreneurship is at the very heart of strategic management. This thesis explores the strategic use of entrepreneurship in the telecommunication industry. Throug...

  5. Strategic Management: General Concepts

    Shahram Tofighi

    2010-01-01

    In the era after long-term planning was replaced by strategic planning, it was hoped that managers could act more successfully in implementing their plans. The outcomes fell far short of expectations; there were only minor improvements. In organizations, plenty of nominally strategic plans have been developed during strategic planning processes, but most of these plans have been kept on the shelf; only a few have played their role as guiding documents for the entire organization. What are the...

  6. Sandia Strategic Plan 1997

    NONE

    1997-12-01

    Sandia embarked on its first exercise in corporate strategic planning during the winter of 1989. The results of that effort were disseminated with the publication of Strategic Plan 1990. Four years later Sandia conducted their second major planning effort and published Strategic Plan 1994. Sandia's 1994 planning effort linked very clearly to the Department of Energy's first strategic plan, Fueling a Competitive Economy. It benefited as well from the leadership of Lockheed Martin Corporation, the management and operating contractor. Lockheed Martin's corporate success is founded on visionary strategic planning and annual operational planning driven by customer requirements and technology opportunities. In 1996 Sandia conducted another major planning effort that resulted in the development of eight long-term Strategic Objectives. Strategic Plan 1997 differs from its predecessors in that the robust elements of previous efforts have been integrated into one comprehensive body. The changes implemented so far have helped establish a living strategic plan with a stronger business focus and with clear deployment throughout Sandia. The concept of a personal line of sight for all employees to this strategic plan and its objectives, goals, and annual milestones is becoming a reality.

  7. How Strategic are Strategic Information Systems?

    Alan Eardley

    1996-11-01

    There are many examples of information systems which are claimed to have created and sustained competitive advantage, allowed beneficial collaboration or simply ensured the continued survival of the organisations which used them. These systems are often referred to as being 'strategic'. This paper argues that many of the examples of strategic information systems as reported in the literature are not sufficiently critical in determining whether the systems meet the generally accepted definition of the term 'strategic' - that of achieving sustainable competitive advantage. Eight of the information systems considered to be strategic are examined here from the standpoint of one widely accepted 'competition' framework - Porter's model of industry competition. The framework is then used to question the linkage between the information systems and the mechanisms which are required for the enactment of strategic business objectives based on competition. Conclusions indicate that the systems are compatible with Porter's framework. Finally, some limitations of the framework are discussed and aspects of the systems which extend beyond the framework are highlighted

  8. The Relationship between Firms’ Strategic Orientations and Strategic Planning Process

    Hasnanywati Hassan

    2010-01-01

    The study examines quantity surveying (QS) firms' strategic orientation and its relation to the strategic planning process. The strategic orientations based on the Miles and Snow typology were used to identify the strategic orientation of QS firms. The strategic planning process, including the effort put into strategic planning, the degree of involvement in strategic planning and its formality, was also determined. The period of decline in the Malaysian construction industry from 2001 to 2005 has been determin...

  9. Manage "Human Capital" Strategically

    Odden, Allan

    2011-01-01

    To strategically manage human capital in education means restructuring the entire human resource system so that schools not only recruit and retain smart and capable individuals, but also manage them in ways that support the strategic directions of the organization. These management practices must be aligned with a district's education improvement…

  10. Developing Strategic Leaders.

    Carter, Patricia; Terwilliger, Leatha; Alfred, Richard L.; Hartleb, David; Simone, Beverly

    2002-01-01

    Highlights the importance of developing community college leaders capable of demonstrating strategic leadership and responding to the global forces that influence community college education. Discusses the Consortium for Community College Development's Strategic Leadership Forum and its principles, format, content, and early results. (RC)

  11. Strategic Risk Assessment

    Derleth, Jason; Lobia, Marcus

    2009-01-01

    This slide presentation provides an overview of the attempt to develop and demonstrate a methodology for the comparative assessment of risks across the entire portfolio of NASA projects and assets. It includes information about strategic risk identification, normalizing strategic risks, calculation of relative risk score, and implementation options.

  12. Strategic environmental assessment

    Kørnøv, Lone

    1997-01-01

    The integration of environmental considerations into strategic decision making is recognized as a key to achieving sustainability. In the European Union a draft directive on Strategic Environmental Assessment (SEA) is currently being reviewed by the member states. The nature of the proposed SEA...

  13. Strategic Leadership in Schools

    Williams, Henry S.; Johnson, Teryl L.

    2013-01-01

    Strategic leadership is built upon traits and actions that encompass the successful execution of all leadership styles. In a world that is rapidly changing, strategic leadership in schools guides school leaders through assuring a constant improvement process by anticipating future trends and planning for them, and noting that plans must be flexible to…

  14. Contribution to the algorithmic and efficient programming of new parallel architectures including accelerators for neutron physics and shielding computations

    In science, simulation is a key process for research and validation. Modern computer technology allows faster numerical experiments, which are cheaper than physical models. In the field of neutron simulation, the calculation of eigenvalues is one of the key challenges. The complexity of these problems is such that a large amount of computing power may be necessary. The work of this thesis is, first, the evaluation of new computing hardware, such as graphics cards or massively multi-core chips, and its application to eigenvalue problems for neutron simulation. Then, in order to address the massive parallelism of national supercomputers, we also study the use of asynchronous hybrid methods for solving eigenvalue problems at this very high level of parallelism. We then test this work on several national supercomputers, such as the Titane hybrid machine of the Computing Center for Research and Technology (CCRT), the Curie machine of the Very Large Computing Centre (TGCC), currently being installed, and the Hopper machine at the Lawrence Berkeley National Laboratory (LBNL). We also run our experiments on local workstations to illustrate the interest of this research for everyday use with local computing resources. (author)
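    The dominant-eigenvalue problem mentioned above is classically attacked with power iteration (the k-effective analogue in neutron transport). The sketch below shows only this serial baseline, not the asynchronous hybrid methods studied in the thesis, and the toy operator is hypothetical.

```python
import numpy as np

def power_iteration(A, tol=1e-10, max_iter=1000, seed=0):
    """Minimal power iteration: repeatedly apply the operator and normalize, converging
    to the dominant eigenvalue and its eigenvector."""
    x = np.random.default_rng(seed).random(A.shape[0])
    lam = 0.0
    for _ in range(max_iter):
        y = A @ x
        lam_new = np.linalg.norm(y)
        x = y / lam_new
        if abs(lam_new - lam) < tol * lam_new:
            return lam_new, x
        lam = lam_new
    return lam, x

# Toy positive matrix standing in for the fission/transport operator.
A = np.array([[4.0, 1.0, 0.5],
              [1.0, 3.0, 0.7],
              [0.5, 0.7, 2.0]])
print(power_iteration(A)[0])
```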

  15. A 3D GPU-accelerated MPI-parallel computational tool for simulating interaction of moving rigid bodies with two-fluid flows

    Pathak, Ashish; Raessi, Mehdi

    2014-11-01

    We present a 3D MPI-parallel, GPU-accelerated computational tool that captures the interaction between a moving rigid body and two-fluid flows. Although the immediate application is the study of ocean wave energy converters (WECs), the model was developed at a general level and can be used in other applications. Solving the full Navier-Stokes equations, the model is able to capture non-linear effects, including wave-breaking and fluid-structure interaction, that have significant impact on WEC performance. To transport mass and momentum, we use a consistent scheme that can handle large density ratios (e.g. air/water). We present a novel reconstruction scheme for resolving three-phase (solid-liquid-gas) cells in the volume-of-fluid context, where the fluid interface orientation is estimated via a minimization procedure, while imposing a contact angle. The reconstruction allows for accurate mass and momentum transport in the vicinity of three-phase cells. The fast-fictitious-domain method is used for capturing the interaction between a moving rigid body and two-fluid flow. The pressure Poisson solver is accelerated using GPUs in the MPI framework. We present results of an array of test cases devised to assess the performance and accuracy of the computational tool.

  17. Accelerating population balance-Monte Carlo simulation for coagulation dynamics from the Markov jump model, stochastic algorithm and GPU parallel computing

    Xu, Zuwei; Zhao, Haibo, E-mail: klinsmannzhb@163.com; Zheng, Chuguang

    2015-01-15

    This paper proposes a comprehensive framework for accelerating population balance-Monte Carlo (PBMC) simulation of particle coagulation dynamics. By combining a Markov jump model, a weighted majorant kernel and GPU (graphics processing unit) parallel computing, a significant gain in computational efficiency is achieved. The Markov jump model constructs a coagulation-rule matrix of differentially-weighted simulation particles, so as to capture the time evolution of the particle size distribution with low statistical noise over the full size range while reducing the number of time loops as far as possible. Three coagulation rules are highlighted, and it is found that constructing an appropriate coagulation rule provides a route to attain a compromise between accuracy and cost of PBMC methods. Further, in order to avoid double looping over all simulation particles when considering two-particle events (typically, particle coagulation), the weighted majorant kernel is introduced to estimate the maximum coagulation rates used for acceptance-rejection processes by single-looping over all particles, and the mean time-step of a coagulation event is estimated by summing the coagulation kernels of rejected and accepted particle pairs. The computational load of these fast differentially-weighted PBMC simulations (based on the Markov jump model) is greatly reduced, becoming proportional to the number of simulation particles in a zero-dimensional system (single cell). Finally, for a spatially inhomogeneous multi-dimensional (multi-cell) simulation, the proposed fast PBMC is performed in each cell, and multiple cells are processed in parallel by the many cores of a GPU, which can run massively threaded data-parallel tasks to obtain a remarkable speedup ratio (compared with CPU computation, the speedup ratio of GPU parallel computing is as high as 200 in a case of 100 cells with 10 000 simulation particles per cell). These accelerating approaches of PBMC are...
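
    To make the single-loop majorant idea concrete, here is a minimal Python sketch of one acceptance-rejection coagulation step: a candidate particle pair is drawn, accepted with probability K(i,j)/K_hat, and the waiting time is accumulated from the majorant rate (rejections advance time as null events). The kernel, particle representation, and equal weighting are illustrative assumptions, not the authors' differentially-weighted scheme.

    ```python
    import math
    import random

    def coagulation_step(volumes, kernel, kernel_majorant):
        """One acceptance-rejection coagulation event (illustrative).

        volumes         : list of simulation-particle volumes
        kernel(v1, v2)  : true coagulation kernel
        kernel_majorant : upper bound on the kernel over all current pairs
        Returns the elapsed time until the accepted event."""
        n = len(volumes)
        # Majorant total rate over all n*(n-1)/2 pairs (no double loop needed)
        total_majorant_rate = kernel_majorant * n * (n - 1) / 2.0
        elapsed = 0.0
        while True:
            # Each attempt (accepted or rejected) advances time by an
            # exponential waiting time drawn from the majorant rate
            elapsed += -math.log(random.random()) / total_majorant_rate
            i, j = random.sample(range(n), 2)
            # Accept the candidate pair with probability K(i,j) / K_hat
            if random.random() < kernel(volumes[i], volumes[j]) / kernel_majorant:
                volumes[i] += volumes[j]   # merge particle j into particle i
                volumes.pop(j)
                return elapsed

    # Example: constant-kernel coagulation of 1000 unit-volume particles
    parts = [1.0] * 1000
    t = 0.0
    while len(parts) > 500:
        t += coagulation_step(parts, kernel=lambda a, b: 1.0, kernel_majorant=1.0)
    ```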

  18. Search for reducing methodology of acceleration aging time in reversed sequential application of heat and radiation using numerical computational program

    Two consecutive numerical calculations of the degradation of polymeric insulation under thermal and radiation environments are carried out to simulate the so-called reversed sequential acceleration test. The calculations aim at finding test conditions that produce material damage equivalent to simultaneous exposure to heat and radiation. The total aging time in the reversed sequential method becomes shortest when the entire target degradation is assigned to the radiation process at the strongest allowable dose rate. If the heating process alone could reach the target in less time than this, adding a radiation process at any dose rate would only prolong the test. (author)

  19. On strategic spatial planning

    Tošić Branka

    2014-01-01

    The goal of this paper is to explain the origin and development of strategic spatial planning, to show its complex features, and to highlight the differences and/or advantages over traditional, physical spatial planning. Strategic spatial planning is examined as one of the approaches in legally defined planning documents, through the properties of national sectoral strategies, as well as issues of strategic planning at the local level in Serbia. The strategic approach is clearly recognized at the national and sub-national levels of spatial planning in European countries and in Serbia. It is confirmed by the goals outlined in documents of the European Union and Serbia that promote territorial cohesion and strategic integrated planning, emphasizing cooperation and the principles of sustainable spatial development. [Project of the Ministry of Science of the Republic of Serbia, no. 176017]

  20. New challenges for HEP computing: RHIC [Relativistic Heavy Ion Collider] and CEBAF [Continuous Electron Beam Accelerator Facility]

    We will look at two facilities: RHIC and CEBAF. CEBAF is in the construction phase; RHIC is about to begin construction. For each of them, we examine the kinds of physics measurements that motivated their construction and the implications of these experiments for computing. Emphasis is on on-line requirements, driven by the data rates produced by these experiments.

  1. Golden-Finger and Back-Door: Two HW/SW Mechanisms for Accelerating Multicore Computer Systems

    Slo-Li Chu

    2012-01-01

    Continuously growing demands for high-performance computing push computer systems to adopt more processors in order to improve parallelism and throughput. Although multiple processing cores are implemented in a computer system, the complicated hardware communication mechanism between processors can decrease overall system performance. In addition, the process scheduling mechanism of a conventional operating system may not fully utilize the computational power of the additional processors. Accordingly, this paper provides two mechanisms, one in software and one in hardware, to overcome these challenges. On the software side, we propose a tool, called Golden-Finger, that dynamically adjusts the scheduling policy of the Linux process scheduler; it can improve the performance of a specified process by letting it occupy a processor exclusively. On the hardware side, we design an effective mechanism, called Back-Door, to enable communication between two independent processors that cannot otherwise operate together, such as the dual PowerPC 405 cores in the Xilinx ML310 system. The experimental results reveal that the two mechanisms obtain significant performance enhancements.
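
    The Golden-Finger idea of dedicating a core to one process can be approximated on stock Linux with CPU affinity. The sketch below is only an illustration of that general approach, not the authors' tool; the chosen core and the use of the calling process are assumptions, and it works on Linux only.

    ```python
    import os

    def pin_to_core(pid: int, core: int) -> None:
        """Restrict the given process to a single CPU core (Linux only).
        A rough stand-in for dedicating a core to a performance-critical
        process; a real setup would also steer other processes and IRQs
        away from that core."""
        os.sched_setaffinity(pid, {core})

    if __name__ == "__main__":
        # Pin the current process to core 1 (illustrative choice);
        # pid 0 means "the calling process"
        pin_to_core(0, 1)
        print("Now running on CPUs:", os.sched_getaffinity(0))
    ```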

  2. Infrastratego. Strategic behavior in infrastructural sectors

    The strategic behavior and the working of counter arrangements in eight infrastructural sectors are described: public rail transport; public bus transport; road maintenance in Sweden and Finland; natural gas distribution; electricity market in California, USA; competition in the computer industry; the auction of UMTS (Universal Mobile Telecommunication System) frequencies; and site sharing (use of antennas)

  3. Next Processor Module: A Hardware Accelerator of UT699 LEON3-FT System for On-Board Computer Software Simulation

    Langlois, Serge; Fouquet, Olivier; Gouy, Yann; Riant, David

    2014-08-01

    On-Board Computers (OBC) increasingly use integrated systems-on-chip (SoC) that embed processors running from 50 MHz up to several hundred MHz, around which are plugged dedicated communication controllers together with other Input/Output channels. For ground testing and On-Board SoftWare (OBSW) validation purposes, a representative simulation of these systems, faster than real time and with cycle-true timing of execution, is not achieved with current purely software simulators. In recent years, some hybrid solutions were put in place ([1], [2]), including hardware in the loop so as to add accuracy and performance to the computer software simulation. This paper presents the results of the work started by Thales Alenia Space (TAS-F) at the end of 2010, which led to a validated HW simulator of the UT699 by mid-2012 that is now qualified and fully used in operational contexts.

  5. The foxhole accelerating structure

    This report examines some properties of a new type of open accelerating structure. It consists of a series of rectangular cavities, which we call foxholes, joined by a beam channel. The power for accelerating the particles comes from an external radiation source and enters the cavities through their open upper surfaces. Analytic and computer calculations are presented showing that the foxhole is a suitable structure for accelerating relativistic electrons

  6. Accelerator shielding benchmark problems

    Hirayama, H.; Ban, S.; Nakamura, T. [and others

    1993-01-01

    Accelerator shielding benchmark problems prepared by Working Group of Accelerator Shielding in the Research Committee on Radiation Behavior in the Atomic Energy Society of Japan were compiled by Radiation Safety Control Center of National Laboratory for High Energy Physics. Twenty-five accelerator shielding benchmark problems are presented for evaluating the calculational algorithm, the accuracy of computer codes and the nuclear data used in codes. (author).

  7. Strategic Management: General Concepts

    Shahram Tofighi

    2010-05-01

    In the era after long-term planning was replaced by strategic planning, it was hoped that managers could act more successfully in implementing their plans. The outcomes fell far short of expectations; there were only minor improvements. Plenty of nominally strategic plans have been developed in organizations during strategic planning processes, but most of these plans have been kept on the shelves, and only a few played their role as guiding documents for the entire organization. What are the factors behind such outcomes? Different scholars have offered a variety of explanations, according to their experience. The first issue examined was misunderstanding of strategic planning by managers and staff; the strategic planning process may be executed erroneously, and what they expected from the process was not accurate. Substantially, strategic planning looks at the future and coming situations, and is designed to answer the questions that will emerge in the future. Unfortunately, this critical and fundamental characteristic of strategic planning is often obscured. Strategic planning conveys the idea of drawing the future and developing a set of probable scenarios, along with defining a set of solutions to combat undesirable coming conditions and to position the system or business. It helps organizations keep themselves safe and remain successful. In other words, in strategic planning efforts we seek solutions fit for problems that will appear in the future, under the conditions that will emerge in the future. Unfortunately, most strategic plans developed in organizations lack this important and critical characteristic; in most of them the developers offered solutions to solve today's problems in the future. The second issue considered was the task of ensuring the continuity of the effectiveness of the planning...

  8. Strategic planning in transition

    Olesen, Kristian; Richardson, Tim

    2012-01-01

    In this paper, we analyse how contested transitions in planning rationalities and spatial logics have shaped the processes and outputs of recent episodes of Danish ‘strategic spatial planning’. The practice of ‘strategic spatial planning’ in Denmark has undergone a concerted reorientation in recent years as a consequence of an emerging neoliberal agenda promoting a growth-oriented planning approach emphasising a new spatial logic of growth centres in the major cities and urban regions. The analysis of the three planning episodes, at different subnational scales, highlights how this new style of ‘strategic spatial planning’ with its associated spatial logics is continuously challenged by a persistent regulatory, top-down rationality of ‘strategic spatial planning’, rooted in spatial Keynesianism, which has long characterised the Danish approach. The findings reveal the emergence of a...

  9. Complex Strategic Choices

    Leleur, Steen

    Complex Strategic Choices provides clear principles and methods which can guide and support strategic decision making in the face of the many current challenges. By considering ways in which planning practices can be renewed and exploring the possibilities for acquiring awareness and tools to add value to ... resulting in new material stemming from and focusing on practical application of a systemic approach. The outcome is a coherent and flexible approach named systemic planning. The inclusion of both the theoretical and practical aspects of systemic planning makes this book a key resource for researchers and students in the field of planning and decision analysis, as well as practitioners dealing with strategic analysis and decision making. More broadly, Complex Strategic Choices acts as a guide for professionals and students involved in complex planning tasks across several fields such as business and...

  10. Strategic agility for nursing leadership.

    Shirey, Maria R

    2015-06-01

    This department highlights change management strategies that may be successful in strategically planning and executing organizational change. In this article, the author discusses strategic agility as an important leadership competency and offers approaches for incorporating strategic agility in healthcare systems. A strategic agility checklist and infrastructure-building approach are presented. PMID:26010278

  11. Computational study of transport and energy deposition of intense laser-accelerated proton beams in solid density matter

    Kim, J.; McGuffey, C.; Qiao, B.; Beg, F. N.; Wei, M. S.; Grabowski, P. E.

    2015-11-01

    With intense proton beams accelerated by high power short pulse lasers, solid targets are isochorically heated to become partially-ionized warm or hot dense matter. In this regime, the thermodynamic state of the matter significantly changes, varying the proton stopping power where both bound and free electrons contribute. Additionally, collective beam-matter interaction becomes important to the beam transport. We present self-consistent hybrid particle-in-cell (PIC) simulation results of proton beam transport and energy deposition in solid-density matter, where the individual proton stopping and the collective effects are taken into account simultaneously with updates of stopping power in the varying target conditions and kinetic motions of the beam in the driven fields. Broadening of propagation range and self-focusing of the beam led to unexpected target heating by the intense proton beams, with dependence on the beam profiles and target conditions. The behavior is specifically studied for the case of an experimentally measured proton beam from the 1.25 kJ, 10 ps OMEGA EP laser transporting through metal foils. This work was supported by the U.S. DOE under Contracts No. DE-NA0002034 and No. DE-AC52-07NA27344 and by the U.S. AFOSR under Contract FA9550-14-1-0346.
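
    For context, the cold-matter baseline against which warm-dense-matter stopping-power models are compared is the Bethe formula for energy loss to bound electrons. The sketch below evaluates that textbook expression for protons in solid aluminum; it deliberately omits the shell, density, free-electron and collective-field effects that the paper's hybrid PIC treatment addresses, and the material parameters are standard reference values used here as assumptions.

    ```python
    import math

    ME_C2 = 0.511      # electron rest energy [MeV]
    MP_C2 = 938.272    # proton rest energy [MeV]
    K = 0.307075       # 4*pi*N_A*r_e^2*m_e*c^2 [MeV cm^2 / mol]

    def bethe_dedx(T_MeV, Z=13, A=26.98, rho=2.70, I_eV=166.0, z=1):
        """Bethe stopping power for a proton of kinetic energy T_MeV in cold
        aluminum [MeV/cm]. Shell/density corrections and warm-dense-matter
        (free-electron) contributions are neglected."""
        gamma = 1.0 + T_MeV / MP_C2
        beta2 = 1.0 - 1.0 / gamma**2
        I = I_eV * 1e-6                     # mean excitation energy [MeV]
        arg = 2.0 * ME_C2 * beta2 * gamma**2 / I
        return K * z**2 * (Z / A) * rho / beta2 * (math.log(arg) - beta2)

    # Example: energies typical of laser-accelerated (TNSA) proton beams
    for T in (1.0, 5.0, 10.0, 20.0):
        print(f"{T:5.1f} MeV proton: dE/dx ~ {bethe_dedx(T):6.1f} MeV/cm")
    ```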

  12. Vol. 34 - Optimization of quench protection heater performance in high-field accelerator magnets through computational and experimental analysis

    Salmi, Tiina

    2016-01-01

    Superconducting accelerator magnets with increasingly high magnetic fields are being designed to improve the performance of the Large Hadron Collider (LHC) at CERN. One of the technical challenges is the magnet quench protection, i.e., preventing damage in the case of an unexpected loss of superconductivity and the heat generation related to that. Traditionally this is done by disconnecting the magnet current supply and using so-called protection heaters. The heaters suppress the superconducting state across a large fraction of the winding, thus leading to a uniform dissipation of the stored energy. Preliminary studies suggested that the high-field Nb3Sn magnets under development for the LHC luminosity upgrade (HiLumi) could not be reliably protected using the existing heaters. In this thesis work I analyzed in detail the present state-of-the-art protection heater technology, aiming to optimize its performance and evaluate the prospects in high-field magnet protection. The heater efficiency analyses ...

  13. Strategically Stable Technological Alliance

    Nikolai V. Kolabutin; Zenkevich, Nikolay A. (Eds.)

    2011-01-01

    Two conditions are important when investigating the stability problem of long-term cooperative agreements: dynamic stability (time consistency) and strategic stability. This paper presents results based on the profit distribution procedure (PRP), which implements a model of stable cooperation. The paper also shows the relationship between the dynamic and strategic stability of a cooperative agreement, and gives numerical results showing the influence of parameters...

  14. 2015 Enterprise Strategic Vision

    None

    2015-08-01

    This document aligns with the Department of Energy Strategic Plan for 2014-2018 and provides a framework for integrating our missions and direction for pursuing DOE’s strategic goals. The vision is a guide to advancing world-class science and engineering, supporting our people, modernizing our infrastructure, and developing a management culture that operates a safe and secure enterprise in an efficient manner.

  15. International Strategic Alliance

    Arif, Mohd.

    2008-01-01

    An international strategic alliance is the combination of the future objectives of two or more firms, achieved through the joint practices of MNCs. The term "strategic alliance" can mean many things. In its broadest sense, it can apply to virtually any form of collaboration between two or more firms, including one or more of the following activities: design contracts; technology transfer agreements; joint product development; distribution agreements; marketing and promotional collaboration; intellectual property...

  16. Computer simulation of rocket/missile safing and arming mechanism (containing pin pallet runaway escapement, three-pass involute gear train and acceleration driven rotor)

    Gorman, P. T.; Tepper, F. R.

    1986-03-01

    A complete simulation of missile and rocket safing and arming (S&A) mechanisms containing an acceleration-driven rotor, a three-pass involute gear train, and a pin pallet runaway escapement was developed. In addition, a modification to this simulation was formulated for the special case of the PATRIOT M143 S&A mechanism which has a pair of driving gears in addition to the three-pass gear train. The three motion regimes involved in escapement operation - coupled motion, free motion, and impact - are considered in the computer simulation. The simulation determines both the arming time of the device and the non-impact contact forces of all interacting components. The program permits parametric studies to be made, and is capable of analyzing pallets with arbitrarily located centers of mass. A sample simulation of the PATRIOT M143 S&A in an 11.9 g constant acceleration arming test was run. The results were in good agreement with laboratory test data.
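
    As a toy illustration of the acceleration-driven-rotor part of such an S&A model (ignoring the gear train and the pin pallet escapement that the actual simulation resolves, and which dominate the real arming time), the following sketch integrates a rigid rotor spun up by an offset inertial load under a constant setback acceleration and reports the time to reach an assumed arming angle; all dimensions and the arming angle are made-up values.

    ```python
    import math

    def arming_time(accel_g, m=0.002, r=0.004, J=5.0e-8,
                    theta_arm=math.radians(70.0), dt=1.0e-6):
        """Time for an unbalanced rotor to swing to the arming angle under a
        constant acceleration (toy model: no gear train, no escapement,
        no friction). All parameters are illustrative.

        accel_g   : applied acceleration [g]
        m, r      : unbalance mass [kg] and its offset from the pivot [m]
        J         : rotor moment of inertia [kg m^2]
        theta_arm : rotation angle considered 'armed' [rad]
        """
        a = accel_g * 9.81
        theta, omega, t = 0.0, 0.0, 0.0
        while theta < theta_arm:
            torque = m * a * r * math.cos(theta)  # inertial torque on the unbalance
            omega += (torque / J) * dt            # semi-implicit Euler step
            theta += omega * dt
            t += dt
        return t

    # Example: the 11.9 g constant-acceleration test mentioned in the abstract
    print(f"toy arming time ~ {arming_time(11.9) * 1000:.2f} ms")
    ```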

  17. THE STRATEGIC OPTIONS IN INVESTMENT PROJECTS VALUATION

    VIOLETA SĂCUI

    2012-11-01

    The topic of real options applies option valuation techniques to capital budgeting exercises in which a project is coupled with a put or call option. In many project valuation settings, the firm has one or more options to make strategic changes to the project during its life. These strategic options, known as real options, are typically ignored in standard discounted cash-flow analysis, where a single expected present value is computed. This paper presents the types of real options encountered in economic activity.
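
    As a simple illustration of valuing such a strategic option rather than ignoring it, the sketch below prices an option to expand a project as a European call using the Black-Scholes formula. Treating an expansion option this way, and the example numbers, are assumptions made for illustration only; real-option analyses often require lattice or simulation methods.

    ```python
    from math import log, sqrt, exp, erf

    def norm_cdf(x: float) -> float:
        """Standard normal cumulative distribution function."""
        return 0.5 * (1.0 + erf(x / sqrt(2.0)))

    def real_option_value(pv_cash_flows, investment, sigma, T, r):
        """Value an option to expand as a European call (Black-Scholes).

        pv_cash_flows : present value of the cash flows the expansion would add
        investment    : cost of expanding (the 'strike')
        sigma         : volatility of the project value (per year)
        T             : time until the expansion decision must be made [years]
        r             : continuously compounded risk-free rate
        """
        d1 = (log(pv_cash_flows / investment) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
        d2 = d1 - sigma * sqrt(T)
        return pv_cash_flows * norm_cdf(d1) - investment * exp(-r * T) * norm_cdf(d2)

    # Example: an expansion worth 90 today, costing 100, decidable within 3 years;
    # despite a negative static NPV, the flexibility itself has positive value
    print(f"option value ~ {real_option_value(90.0, 100.0, 0.35, 3.0, 0.04):.2f}")
    ```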

  18. Can Accelerators Accelerate Learning?

    The 'Young Talented' education program developed by the Brazilian State Funding Agency (FAPERJ)[1] makes it possible for students from public high schools to perform activities in scientific laboratories. In the Atomic and Molecular Physics Laboratory at the Federal University of Rio de Janeiro (UFRJ), the students are confronted with modern research tools such as the 1.7 MV ion accelerator. Being a user-friendly machine, the accelerator is easily managed by the students, who can perform simple hands-on activities, stimulating interest in physics and bringing them close to modern laboratory techniques.

  19. Cognitive ability and the effect of strategic uncertainty

    Hanaki, Nobuyuki; Jacquemet, Nicolas; Luchini, Stéphane; Zylbersztejn, Adam

    2014-01-01

    How is one's cognitive ability related to the way one responds to strategic uncertainty? We address this question by conducting a set of experiments in simple 2 × 2 dominance solvable coordination games. Our experiments involve two main treatments: one in which two human subjects interact, and another in which one human subject interacts with a computer program whose behavior is known. By making the behavior of the computer perfectly predictable, the latter treatment eliminates strategic uncertainty...
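
    To make "dominance solvable" concrete in this 2 × 2 setting, here is a small sketch that applies iterated elimination of strictly dominated strategies to a bimatrix game; the example payoffs are invented for illustration and are not the game used in these experiments.

    ```python
    def iterated_strict_dominance(A, B):
        """Iteratively remove strictly dominated pure strategies.
        A[i][j], B[i][j]: row / column player payoffs. Returns the surviving
        (row_indices, col_indices); a dominance-solvable game ends 1 x 1."""
        rows = list(range(len(A)))
        cols = list(range(len(A[0])))
        changed = True
        while changed:
            changed = False
            for r in rows[:]:
                if any(all(A[o][c] > A[r][c] for c in cols) for o in rows if o != r):
                    rows.remove(r)
                    changed = True
            for c in cols[:]:
                if any(all(B[r][o] > B[r][c] for r in rows) for o in cols if o != c):
                    cols.remove(c)
                    changed = True
        return rows, cols

    # Invented example: row 0 is a 'safe' action, row 1 is 'risky'; the column
    # player's first strategy is strictly dominated, so elimination leaves the
    # single outcome (risky, second column) -- the game is dominance solvable.
    A = [[5, 5],   # row player's payoffs
         [0, 9]]
    B = [[4, 5],   # column player's payoffs
         [0, 9]]
    print(iterated_strict_dominance(A, B))   # -> ([1], [1])
    ```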

  20. Cognitive ability and the effect of strategic uncertainty

    Hanaki, Nobuyuki; Jacquemet, Nicolas; Luchini, Stéphane; Zylbersztejn, Adam

    2015-01-01

    How is one's cognitive ability related to the way one responds to strategic uncertainty? We address this question by conducting a set of experiments in simple 2 x 2 dominance solvable coordination games. Our experiments involve two main treatments: one in which two human subjects interact, and another in which one human subject interacts with a computer program whose behavior is known. By making the behavior of the computer perfectly predictable, the latter treatment eliminates strategic uncertainty...

  1. Cognitive Ability and the Effect of Strategic Uncertainty

    Nobuyuki Hanaki; Nicolas Jacquemet; Stéphane Luchini; Adam Zylberstejn

    2014-01-01

    How is one’s cognitive ability related to the way one responds to strategic uncertainty? We address this question by conducting a set of experiments in simple 2 x 2 dominance solvable coordination games. Our experiments involve two main treatments: one in which two human subjects interact, and another in which one human subject interacts with a computer program whose behavior is known. By making the behavior of the computer perfectly predictable, the latter treatment eliminates strategic uncertainty...

  2. Strategic analysis of the company

    Matoušková, Irena

    2012-01-01

    In my thesis I developed a strategic analysis of the company Pacovské strojírny a.s. In the theoretical part I describe the various methods of internal and external strategic analysis; these methods are then applied in the practical part. In the internal strategic analysis, I focused on the identification of internal resources and capabilities, the financial analysis and the value chain. The external strategic analysis includes PEST analysis, Porter's five forces...

  3. Restriction of the use of hazardous substances (RoHS) in the personal computer segment: analysis of the strategic adoption by the manufacturers settled in Brazil

    Ademir Brescansin

    2015-09-01

    The enactment of the RoHS Directive (Restriction of Hazardous Substances) in 2003, limiting the use of certain hazardous substances in electronic equipment, has forced companies to adjust their products to comply with this legislation. Even in the absence of similar legislation in Brazil, personal computer manufacturers located in the country have been seen to adopt RoHS for products sold in the domestic market and abroad. The purpose of this study is to analyze whether these manufacturers have really adopted RoHS, focusing on their motivations, concerns, and benefits. This is an exploratory study based on a literature review and interviews with HP, Dell, Sony, Lenovo, Samsung, LG, Itautec, and Positivo, using summative content analysis. The results showed that global companies initially adopted RoHS to market products in Europe, and later expanded this practice to all products. Brazilian companies, however, adopted RoHS to participate in the government’s sustainable procurement bidding processes. It is expected that this study can assist manufacturers in developing strategies for reducing or eliminating hazardous substances in their products and processes, as well as help the government formulate public policies on reducing the risks of environmental contamination.

  4. Strategic forces briefing

    Bing, G.; Chrzanowski, P.; May, M.; Nordyke, M.

    1989-04-06

    The 'Strategic Forces Briefing' is our attempt, accomplished over the past several months, to outline and highlight the more significant strategic force issues that must be addressed in the near future. Some issues are recurrent: the need for an effective modernized Triad and a constant concern for force survivability. Some issues derive from arms control: the Strategic Arms Reduction Talks (START) are sufficiently advanced to set broad numerical limits on forces, but not so constraining as to preclude choices among weapon systems and deployment modes. Finally, a new administration faced with serious budgetary problems must strive for the most effective strategic forces limited dollars can buy and support. A review of strategic forces logically begins with consideration of the missions the forces are charged with. We begin the briefing with a short review of targeting policy and implementation within the constraints of available unclassified information. We then review each element of the Triad with sections on SLBMs, ICBMs, and air-breathing (bomber and cruise missile) systems. A short section at the end deals with the potential impact of strategic defense on offensive force planning. We consider ABM, ASAT, and air defense, but we do not attempt to address the technical issues of strategic defense per se. The final section gives a brief overview of the tritium supply problem. We conclude with a summary of recommendations that emerge from our review. The calculations of the effectiveness of various weapon systems as a function of cost presented in the briefing are by Paul Chrzanowski.

  5. Linear Accelerators

    Vretenar, M

    2014-01-01

    The main features of radio-frequency linear accelerators are introduced, reviewing the different types of accelerating structures and presenting the main characteristic aspects of linac beam dynamics.

  6. The Relationship between Firms’ Strategic Orientations and Strategic Planning Process

    Hasnanywati Hassan

    2010-10-01

    The study examines quantity surveying (QS) firms’ strategic orientation and its relation to the strategic planning process. The strategic orientations based on the Miles and Snow typology were used to identify the strategic orientation of QS firms. The strategic planning process, including the effort put into strategic planning, the degree of involvement in strategic planning and formality, was also determined. The declining period in the Malaysian construction industry, from 2001 to 2005, has been identified. The research aims to establish the strategic orientations of QS firms and the strategic planning process carried out by QS firms in terms of processes, degree of involvement and formality. The strategic planning process is examined using qualitative and quantitative data from thirty-four QS firms in Malaysia. Spearman’s rank correlation was used to test the hypotheses. The research is part of a doctoral study. The study concludes that there are significant correlations between the QS firms’ strategic orientation (Prospector and Defender) and efforts in the strategic planning process during the declining period. The QS firms’ strategic orientation also correlated with the degree of involvement of top management and senior quantity surveyors in all three stages of strategic planning. In addition, formalized strategic planning depends on the Defender strategic orientation.

  7. COMPUTING

    M. Kasemann

    Overview In autumn the main focus was to process and handle CRAFT data and to perform the Summer08 MC production. The operational aspects were well covered by regular Computing Shifts, experts on duty and Computing Run Coordination. At the Computing Resource Board (CRB) in October a model to account for service work at Tier 2s was approved. The computing resources for 2009 were reviewed for presentation at the C-RRB. The quarterly resource monitoring is continuing. Facilities/Infrastructure operations Operations during CRAFT data taking ran fine. This proved to be a very valuable experience for T0 workflows and operations. The transfers of custodial data to most T1s went smoothly. A first round of reprocessing started at the Tier-1 centers end of November; it will take about two weeks. The Computing Shifts procedure was tested full scale during this period and proved to be very efficient: 30 Computing Shifts Persons (CSP) and 10 Computing Resources Coordinators (CRC). The shift program for the shut down w...

  8. Strategic self-ignorance

    Thunström, Linda; Nordström, Leif Jonas; Shogren, Jason F.;

    2016-01-01

    We examine strategic self-ignorance, the use of ignorance as an excuse to over-indulge in pleasurable activities that may be harmful to one’s future self. Our model shows that guilt aversion provides a behavioral rationale for present-biased agents to avoid information about negative future impacts of such activities. We then confront our model with data from an experiment using prepared, restaurant-style meals, a good that is transparent in immediate pleasure (taste) but non-transparent in future harm (calories). Our results support the notion that strategic self-ignorance matters: nearly three of five subjects (58%) chose to ignore free information on calorie content, leading at-risk subjects to consume significantly more calories. We also find evidence consistent with our model on the determinants of strategic self-ignorance.

  9. Strategic Self-Ignorance

    Thunström, Linda; Nordström, Leif Jonas; Shogren, Jason F.;

    We examine strategic self-ignorance, the use of ignorance as an excuse to overindulge in pleasurable activities that may be harmful to one’s future self. Our model shows that guilt aversion provides a behavioral rationale for present-biased agents to avoid information about negative future impacts of such activities. We then confront our model with data from an experiment using prepared, restaurant-style meals, a good that is transparent in immediate pleasure (taste) but non-transparent in future harm (calories). Our results support the notion that strategic self-ignorance matters: nearly three of five subjects (58 percent) chose to ignore free information on calorie content, leading at-risk subjects to consume significantly more calories. We also find evidence consistent with our model on the determinants of strategic self-ignorance.

  10. Strategic Communication Institutionalized

    Kjeldsen, Anna Karina

    2013-01-01

    The aim of this article is to discuss the strength of Scandinavian neo-institutionalism in general and the virus metaphor in particular as an analytical lens when studying processes of institutionalization. The article addresses the following two questions: (1) How do we detect the very early stages of institutionalization, when strategic communication is not yet visible as organizational practice, and how can such detections provide explanation for the later outcome of the process? (2) How can studies of strategic communication benefit from an institutional perspective? How can the virus metaphor generate a deeper understanding of the mechanisms that interact from the time an organization is exposed to a new organizational idea such as strategic communication until it surfaces in the form of symptoms such as mission and vision statements, communication manuals and communication positions...