WorldWideScience

Sample records for accelerated strategic computing

  1. Delivering Insight: The History of the Accelerated Strategic Computing Initiative

    Energy Technology Data Exchange (ETDEWEB)

    Larzelere II, A R

    2007-01-03

    The history of the Accelerated Strategic Computing Initiative (ASCI) tells of the development of computational simulation into a third fundamental piece of the scientific method, on a par with theory and experiment. ASCI did not invent the idea, nor was it alone in bringing it to fruition. But ASCI provided the wherewithal - hardware, software, environment, funding, and, most of all, the urgency - that made it happen. On October 1, 2005, the Initiative completed its tenth year of funding. The advances made by ASCI over its first decade are truly incredible. Lawrence Livermore, Los Alamos, and Sandia National Laboratories, along with leadership provided by the Department of Energy's Defense Programs Headquarters, fundamentally changed computational simulation and how it is used to enable scientific insight. To do this, astounding advances were made in simulation applications, computing platforms, and user environments. ASCI dramatically changed existing - and forged new - relationships, both among the Laboratories and with outside partners. By its tenth anniversary, despite daunting challenges, ASCI had accomplished all of the major goals set at its beginning. The history of ASCI is about the vision, leadership, endurance, and partnerships that made these advances possible.

  2. The DOE Accelerated Strategic Computing Initiative: Challenges and opportunities for predictive materials simulation capabilities

    Science.gov (United States)

    Mailhiot, Christian

    1998-05-01

    In response to the unprecedented national security challenges emerging from the end of nuclear testing, the Defense Programs of the Department of Energy has developed a long-term strategic plan based on a vigorous Science-Based Stockpile Stewardship (SBSS) program. The main objective of the SBSS program is to ensure confidence in the performance, safety, and reliability of the stockpile on the basis of a fundamental science-based approach. A central element of this approach is the development of predictive, ‘full-physics’, full-scale computer simulation tools. As a critical component of the SBSS program, the Accelerated Strategic Computing Initiative (ASCI) was established to provide the required advances in computer platforms and to enable predictive, physics-based simulation capabilities. In order to achieve the ASCI goals, fundamental problems in the fields of computer and physical sciences of great significance to the entire scientific community must be successfully solved. Foremost among the key elements needed to develop predictive simulation capabilities, the development of improved physics-based materials models is a cornerstone. We indicate some of the materials theory, modeling, and simulation challenges and illustrate how the ASCI program will enable both the hardware and the software tools necessary to advance the state-of-the-art in the field of computational condensed matter and materials physics.

  3. Accelerating Strategic Change Through Action Learning

    DEFF Research Database (Denmark)

    Younger, Jon; Sørensen, René; Cleemann, Christine

    2013-01-01

    Purpose – The purpose of this paper is to describe how a leading global company used action-learning based leadership development to accelerate strategic culture change. Design/methodology/approach – It describes the need for change, and the methodology and approach by which the initiative, Impac...

  4. National Strategic Computing Initiative Strategic Plan

    Science.gov (United States)

    2016-07-01

    ... that define today's international HPC landscape. The inclusive nature of this public-private collaboration endeavors to benefit the public and ... that supports its science and national security missions and may also benefit the Nation from an economic and educational standpoint. To ensure that ... collaboration to ensure shared benefit across government, academia, and industry. Objective 1: The NSCI seeks to accelerate the development of HPC systems ...

  5. Applications of the Strategic Defense Initiative's compact accelerators

    Science.gov (United States)

    Montanarelli, Nick; Lynch, Ted

    1991-12-01

    The Strategic Defense Initiative's (SDI) investment in particle accelerator technology for its directed energy weapons program has produced breakthroughs in the size and power of new accelerators. These accelerators, in turn, have produced spinoffs in several areas: the radio frequency quadrupole linear accelerator (RFQ linac) was recently incorporated into the design of a cancer therapy unit at the Loma Linda University Medical Center, an SDI-sponsored compact induction linear accelerator may replace Cobalt-60 radiation and hazardous ethylene-oxide as a method for sterilizing medical products, and other SDIO-funded accelerators may be used to produce the radioactive isotopes oxygen-15, nitrogen-13, carbon-11, and fluorine-18 for positron emission tomography (PET). Other applications of these accelerators include bomb detection, non-destructive inspection, decomposing toxic substances in contaminated ground water, and eliminating nuclear waste.

  6. Computational Biology: A Strategic Initiative LDRD

    Energy Technology Data Exchange (ETDEWEB)

    Barksy, D; Colvin, M

    2002-02-07

    The goal of this Strategic Initiative LDRD project was to establish at LLNL a new core capability in computational biology, combining laboratory strengths in high performance computing, molecular biology, and computational chemistry and physics. As described in this report, this project has been very successful in achieving this goal. This success is demonstrated by the large number of refereed publications, invited talks, and follow-on research grants that have resulted from this project. Additionally, this project has helped build connections to internal and external collaborators and funding agencies that will be critical to the long-term vitality of LLNL programs in computational biology. Most importantly, this project has helped establish on-going research groups in the Biology and Biotechnology Research Program, the Physics and Applied Technology Directorate, and the Computation Directorate. These groups include three laboratory staff members originally hired as post-doctoral researchers for this strategic initiative.

  7. Accelerating Clean Energy Commercialization: A Strategic Partnership Approach

    Energy Technology Data Exchange (ETDEWEB)

    Adams, Richard [National Renewable Energy Lab. (NREL), Golden, CO (United States)]; Pless, Jacquelyn [Joint Institute for Strategic Energy Analysis, Golden, CO (United States)]; Arent, Douglas J. [Joint Institute for Strategic Energy Analysis, Golden, CO (United States)]; Locklin, Ken [Impax Asset Management Group (United Kingdom)]

    2016-04-01

    Technology development in the clean energy and broader clean tech space has proven to be challenging. Long-standing methods for advancing clean energy technologies from science to commercialization are best known for relatively slow, linear progression through research and development, demonstration, and deployment (RDD&D); and characterized by well-known valleys of death for financing. Investment returns expected by traditional venture capital investors have been difficult to achieve, particularly for hardware-centric innovations, and companies that are subject to project finance risks. Commercialization support from incubators and accelerators has helped address these challenges by offering more support services to start-ups; however, more effort is needed to fulfill the desired clean energy future. The emergence of new strategic investors and partners in recent years has opened up innovative opportunities for clean tech entrepreneurs, and novel commercialization models are emerging that involve new alliances among clean energy companies, RDD&D, support systems, and strategic customers. For instance, Wells Fargo and Company (WFC) and the National Renewable Energy Laboratory (NREL) have launched a new technology incubator that supports faster commercialization through a focus on technology development. The incubator combines strategic financing, technology and technical assistance, strategic customer site validation, and ongoing financial support.

  8. Snowmass 2013 Computing Frontier: Accelerator Science

    CERN Document Server

    Spentzouris, P; Joshi, C; Amundson, J; An, W; Bruhwiler, D L; Cary, J R; Cowan, B; Decyk, V K; Esarey, E; Fonseca, R A; Friedman, A; Geddes, C G R; Grote, D P; Kourbanis, I; Leemans, W P; Lu, W; Mori, W B; Ng, C; Qiang, Ji; Roberts, T; Ryne, R D; Schroeder, C B; Silva, L O; Tsung, F S; Vay, J -L; Vieira, J

    2013-01-01

    This is the working summary of the Accelerator Science working group of the Computing Frontier of the Snowmass meeting 2013. It summarizes the computing requirements to support accelerator technology in both Energy and Intensity Frontiers.

  9. JGI Computing 5-Year Strategic Plan

    Energy Technology Data Exchange (ETDEWEB)

    Bader, D A; Brettin, T S; Cottingham, R W; Folta, P A; Golder, Y; Gregurick, S K; Himmel, M E; Mann, R C; Remington, K A; Slezak, T R

    2008-10-01

    A broad range of scientific goals and a similarly diverse set of consumers drive the informatics requirements and computing needs of the JGI. The scope of work in this area encompasses not only the informatics and analysis pipelines in support of PGF sequence production, but also the integration of data from a variety of sources and sophisticated large-scale analyses led by investigators within JGI and driven by the user science community. In laying out a forward-looking strategy, the full range of these activities needs to be examined together to build a comprehensive program that will serve as a catalyst for the DOE research community. The science landscape envisioned in the overall strategic plan calls for significantly increasing the throughput of microbial genomes sequenced to cover their phylogenetic space and building a set of finished reference plant genomes to enable DOE-relevant science. Additionally, the established impact of microbial communities on global energy cycles and their potential in remediation endeavors warrant building upon JGI's established expertise in metagenomic analysis. Not only is each of these program areas relevant and exciting in its own right, but they also can and should be undertaken in a way that allows synthesis across domains (e.g. utilizing knowledge from the sequence of plants and the soil from which they are grown). Both dramatic increases in the scale of genomic data collection and the synergistic potential of integrating data across domains will demand new strategies in the informatics pipeline within the JGI and in the facility's approach to computational analysis and user access to the data in aggregated form. In addition to a robust and scalable informatics infrastructure, fulfilling the strategic science goals of the JGI will require ongoing investment in the usability of the data, to ensure that the data collected will be used to maximal effect. It must be recognized that 'usability' will have a

  10. Accelerating Climate Simulations Through Hybrid Computing

    Science.gov (United States)

    Zhou, Shujia; Sinno, Scott; Cruz, Carlos; Purcell, Mark

    2009-01-01

    Unconventional multi-core processors (e.g., IBM Cell B/E and NVIDIA GPU) have emerged as accelerators in climate simulation. However, climate models typically run on parallel computers with conventional processors (e.g., Intel and AMD) using MPI. Connecting accelerators to this architecture efficiently and easily becomes a critical issue. When using MPI for connection, we identified two challenges: (1) an identical MPI implementation is required in both systems; and (2) existing MPI code must be modified to accommodate the accelerators. In response, we have extended and deployed IBM Dynamic Application Virtualization (DAV) in a hybrid computing prototype system (one blade with two Intel quad-core processors, two IBM QS22 Cell blades, connected with InfiniBand), allowing for seamlessly offloading compute-intensive functions to remote, heterogeneous accelerators in a scalable, load-balanced manner. Currently, a climate solar radiation model running with multiple MPI processes has been offloaded to multiple Cell blades with approx. 10% network overhead.

  11. FPGA-accelerated simulation of computer systems

    CERN Document Server

    Angepat, Hari; Chung, Eric S.; Hoe, James C.

    2014-01-01

    To date, the most common form of computer-system simulator is software-based, running on standard computers. One promising approach to improve simulation performance is to apply hardware, specifically reconfigurable hardware in the form of field programmable gate arrays (FPGAs). This manuscript describes various approaches of using FPGAs to accelerate software-implemented simulation of computer systems and selected simulators that incorporate those techniques. More precisely, we describe a simulation architecture taxonomy that incorporates a simulation architecture specifically designed f

  12. Terascale Computing in Accelerator Science and Technology

    Energy Technology Data Exchange (ETDEWEB)

    Ko, Kwok

    2002-08-21

    We have entered the age of "terascale" scientific computing. Processors and system architecture both continue to evolve; hundred-teraFLOP computers are expected in the next few years, and petaFLOP computers toward the end of this decade are conceivable. This ever-increasing power to solve previously intractable numerical problems benefits almost every field of science and engineering and is revolutionizing some of them, notably including accelerator physics and technology. At existing accelerators, it will help us optimize performance, expand operational parameter envelopes, and increase reliability. Design decisions for next-generation machines will be informed by unprecedented comprehensive and accurate modeling, as well as computer-aided engineering; all this will increase the likelihood that even their most advanced subsystems can be commissioned on time, within budget, and up to specifications. Advanced computing is also vital to developing new means of acceleration and exploring the behavior of beams under extreme conditions. With continued progress it will someday become reasonable to speak of a complete numerical model of all phenomena important to a particular accelerator.

  13. Cloud computing strategic framework (FY13 - FY15).

    Energy Technology Data Exchange (ETDEWEB)

    Arellano, Lawrence R.; Arroyo, Steven C.; Giese, Gerald J.; Cox, Philip M.; Rogers, G. Kelly

    2012-11-01

    This document presents an architectural framework (plan) and roadmap for the implementation of a robust Cloud Computing capability at Sandia National Laboratories. It is intended to be a living document and serve as the basis for detailed implementation plans, project proposals and strategic investment requests.

  14. Detonation Type Ram Accelerator: A Computational Investigation

    Directory of Open Access Journals (Sweden)

    Sunil Bhat

    2000-01-01

    An analytical model explaining the functional characteristics of a detonation type ram accelerator is presented. Major flow processes, namely (i) supersonic flow over the cone of the projectile, (ii) initiation of the conical shock wave and its reflection from the tube wall, (iii) supersonic combustion, and (iv) the expansion wave and its reflection, are modelled. The Taylor-Maccoll approach is adopted for modelling the flow over the cone of the projectile. Shock reflection is treated in accordance with wave angle theory for flows over a wedge. Prandtl-Meyer analysis is used to model the expansion wave and its reflection. Steady one-dimensional flow with heat transfer, along with the Rayleigh line equation for perfect gases, is used to model supersonic combustion. A computer code is developed to compute the thrust produced by combustion of gases. Ballistic parameters like the thrust-pressure ratio and ballistic efficiency of the accelerator are evaluated; their maximum values are 0.032 and 0.068, respectively. The code indicates the possibility of achieving a high velocity of 7 km/s on utilising a gaseous mixture of 2H2+O2 in the operation. The velocity range suitable for operation of the accelerator lies between 3.8 and 7.0 km/s. The maximum thrust value is 33721 N, which corresponds to a projectile velocity of 5 km/s.
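
    To make the Rayleigh-line step concrete, here is a minimal Python sketch of the standard perfect-gas Rayleigh-flow relation for heat addition in a constant-area duct. It is a generic illustration under assumed inlet conditions, not the authors' code, whose inputs and formulation this record does not give.

```python
# Generic Rayleigh-flow relations (perfect gas, constant-area duct with heat
# addition), the 1-D model the abstract uses for supersonic combustion.
# gamma, Mach number, and inlet conditions are illustrative assumptions.

def rayleigh_t0_ratio(mach, gamma=1.4):
    """Stagnation-temperature ratio T0/T0* relative to the thermally choked state."""
    return ((gamma + 1.0) * mach**2 * (2.0 + (gamma - 1.0) * mach**2)
            / (1.0 + gamma * mach**2) ** 2)

def heat_to_choking(mach, t0, cp, gamma=1.4):
    """Heat per unit mass (J/kg) that drives the flow from Mach `mach` to choking."""
    return cp * t0 * (1.0 / rayleigh_t0_ratio(mach, gamma) - 1.0)

# Example: combustor entry at Mach 3 with T0 = 1200 K and cp = 1005 J/(kg K)
print(heat_to_choking(3.0, 1200.0, 1005.0))  # ~6.4e5 J/kg
```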

  15. GPU-accelerated computation of electron transfer.

    Science.gov (United States)

    Höfinger, Siegfried; Acocella, Angela; Pop, Sergiu C; Narumi, Tetsu; Yasuoka, Kenji; Beu, Titus; Zerbetto, Francesco

    2012-11-05

    Electron transfer is a fundamental process that can be studied with the help of computer simulation. The underlying quantum mechanical description renders the problem a computationally intensive application. In this study, we probe the graphics processing unit (GPU) for suitability to this type of problem. Time-critical components are identified via profiling of an existing implementation and several different variants are tested involving the GPU at increasing levels of abstraction. A publicly available library supporting basic linear algebra operations on the GPU turns out to accelerate the computation approximately 50-fold with minor dependence on actual problem size. The performance gain does not compromise numerical accuracy and is of significant value for practical purposes. Copyright © 2012 Wiley Periodicals, Inc.
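
    The pattern the abstract describes, moving the profiled linear algebra hot spots onto a GPU BLAS, can be sketched as below. CuPy is used as a convenient modern stand-in; the study predates it, and this record does not name the library actually used.

```python
# Offload a dense matrix product to the GPU BLAS and verify the result on the
# CPU, mirroring the abstract's point that the ~50x speedup does not
# compromise numerical accuracy. Assumes a CUDA-capable GPU and CuPy.
import numpy as np
import cupy as cp

n = 4096
h_a = np.random.rand(n, n)   # host-side matrices standing in for the
h_b = np.random.rand(n, n)   # time-critical electron-transfer linear algebra

d_a, d_b = cp.asarray(h_a), cp.asarray(h_b)  # host -> device transfer
d_c = d_a @ d_b                              # GEMM executed on the GPU
h_c = cp.asnumpy(d_c)                        # device -> host transfer

assert np.allclose(h_c, h_a @ h_b)           # accuracy check against the CPU
```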

  16. Computers and Strategic Advantage: III. Games, Computer Technology, and a Strategic Power Ratio

    Science.gov (United States)

    1975-05-01

    ... decisionmaker were given the (r, P) quadrant as a tabula rasa and expressed the same opinion about the gamble based on a rectangle, then the ... role of technology, permitting the force sizes though not costs to stay constant. We take the position that each player is trying to maximize his ... nonaggregated costing must play its role in strategic modeling, as it must in actual posture decisions. Costing must be done at least by classes of weapon ...

  17. Strategic engineering for cloud computing and big data analytics

    CERN Document Server

    Ramachandran, Muthu; Sarwar, Dilshad

    2017-01-01

    This book demonstrates the use of a wide range of strategic engineering concepts, theories and applied case studies to improve the safety, security and sustainability of complex and large-scale engineering and computer systems. It first details the concepts of system design, life cycle, impact assessment and security to show how these ideas can be brought to bear on the modeling, analysis and design of information systems with a focused view on cloud-computing systems and big data analytics. This informative book is a valuable resource for graduate students, researchers and industry-based practitioners working in engineering, information and business systems as well as strategy.

  18. Strategic Plan for a Scientific Cloud Computing infrastructure for Europe

    CERN Document Server

    Lengert, Maryline

    2011-01-01

    Here we present the vision, concept and direction for forming a European Industrial Strategy for a Scientific Cloud Computing Infrastructure to be implemented by 2020. This will be the framework for decisions and for securing support and approval in establishing, initially, an R&D European Cloud Computing Infrastructure that serves the needs of the European Research Area (ERA) and Space Agencies. This Cloud Infrastructure will have the potential beyond this initial user base to evolve to provide similar services to a broad range of customers including government and SMEs. We explain how this plan aims to support the broader strategic goals of our organisations and identify the benefits to be realised by adopting an industrial Cloud Computing model. We also outline the prerequisites and commitment needed to achieve these objectives.

  19. Strategic Implications of Cloud Computing for Modeling and Simulation (Briefing)

    Science.gov (United States)

    2016-04-01

    ... of promises with cloud: • Cost efficiency • Unlimited storage • Backup and recovery • Automatic software integration • Easy access to information ... discovered, at an abstract level, any advantage or disadvantage to M&S employed in a cloud infrastructure that would not be true of any typical ... Amy E. Henninger, Institute for Defense Analyses.

  20. Strategic Cognitive Sequencing: A Computational Cognitive Neuroscience Approach

    Directory of Open Access Journals (Sweden)

    Seth A. Herd

    2013-01-01

    We address strategic cognitive sequencing, the “outer loop” of human cognition: how the brain decides what cognitive process to apply at a given moment to solve complex, multistep cognitive tasks. We argue that this topic has been neglected relative to its importance for systematic reasons, but that recent work on how individual brain systems accomplish their computations has set the stage for productively addressing how brain regions coordinate over time to accomplish our most impressive thinking. We present four preliminary neural network models. The first addresses how the prefrontal cortex (PFC) and basal ganglia (BG) cooperate to perform trial-and-error learning of short sequences; the next, how several areas of PFC learn to make predictions of likely reward, and how this contributes to the BG making decisions at the level of strategies. The third model addresses how PFC, BG, parietal cortex, and hippocampus can work together to memorize sequences of cognitive actions from instruction (or “self-instruction”). The last shows how a constraint satisfaction process can find useful plans: the PFC maintains current and goal states and associates from both of these to find a “bridging” state, an abstract plan. We discuss how these processes could work together to produce strategic cognitive sequencing and discuss future directions in this area.

  21. Advanced Computing Tools and Models for Accelerator Physics

    Energy Technology Data Exchange (ETDEWEB)

    Ryne, Robert D.

    2008-06-11

    This paper is based on a transcript of my EPAC'08 presentation on advanced computing tools for accelerator physics. Following an introduction I present several examples, provide a history of the development of beam dynamics capabilities, and conclude with thoughts on the future of large scale computing in accelerator physics.

  22. Molecular dynamics-based virtual screening: accelerating the drug discovery process by high-performance computing.

    Science.gov (United States)

    Ge, Hu; Wang, Yu; Li, Chanjuan; Chen, Nanhao; Xie, Yufang; Xu, Mengyan; He, Yingyan; Gu, Xinchun; Wu, Ruibo; Gu, Qiong; Zeng, Liang; Xu, Jun

    2013-10-28

    High-performance computing (HPC) has become a state strategic technology in a number of countries. One hypothesis is that HPC can accelerate biopharmaceutical innovation. Our experimental data demonstrate that HPC can significantly accelerate biopharmaceutical innovation by employing molecular dynamics-based virtual screening (MDVS). Without HPC, MDVS for a 10K compound library with tens of nanoseconds of MD simulations requires years of computer time. In contrast, a state-of-the-art HPC system can be 600 times faster than an eight-core PC server in screening a typical drug target (which contains about 40K atoms). Also, careful design of the GPU/CPU architecture can reduce the HPC costs. However, the communication cost of parallel computing is a bottleneck that acts as the main limit on further virtual screening improvements for drug innovations.
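
    The scale of the claim is easy to check with back-of-envelope arithmetic. The per-day MD throughput below is an assumed figure (the record gives none for the eight-core server); it merely shows how a 10K-compound campaign reaches years of serial computer time and what a 600-fold speedup buys.

```python
# Rough cost model for MD-based virtual screening. The 1 ns/day server
# throughput for a ~40K-atom system is a hypothetical, illustrative number.
ns_per_compound = 20       # "tens of nanoseconds" of MD per compound
library_size = 10_000      # 10K compound library
server_ns_per_day = 1.0    # assumed throughput of the eight-core PC server

server_days = library_size * ns_per_compound / server_ns_per_day
print(f"PC server: {server_days / 365:.0f} years")        # ~548 years
print(f"600x HPC:  {server_days / 600 / 365:.2f} years")  # ~0.91 years
```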

  23. Computational Examination of Parameters Influencing Practicability of Ram Accelerator

    Directory of Open Access Journals (Sweden)

    Sunil Bhat

    2004-07-01

    The problems concerning the practicability of a ram accelerator, such as intense in-bore projectile ablation, the large accelerator tube length needed to achieve high projectile muzzle velocity, and the high entry velocity of the projectile required for starting the accelerator, have been examined. Computational models of processes like the phenomenon of projectile ablation and the flow in the aero-window used as the accelerator tube-end closure device in the case of high drive gas filling pressure in the ram accelerator tube have been presented. A new projectile design to minimise the starting velocity of the ram accelerator is discussed. The possibility of deploying the ram accelerator in a defence-oriented role has been investigated, to utilise its high velocity potential.

  24. Community Petascale Project for Accelerator Science and Simulation: Advancing Computational Science for Future Accelerators and Accelerator Technologies

    Energy Technology Data Exchange (ETDEWEB)

    Spentzouris, P. (Fermilab); Cary, J. (Tech-X, Boulder); McInnes, L.C. (Argonne); Mori, W. (UCLA); Ng, C. (SLAC); Ng, E.; Ryne, R. (LBL, Berkeley)

    2011-11-14

    The design and performance optimization of particle accelerators are essential for the success of the DOE scientific program in the next decade. Particle accelerators are very complex systems whose accurate description involves a large number of degrees of freedom and requires the inclusion of many physics processes. Building on the success of the SciDAC-1 Accelerator Science and Technology project, the SciDAC-2 Community Petascale Project for Accelerator Science and Simulation (ComPASS) is developing a comprehensive set of interoperable components for beam dynamics, electromagnetics, electron cooling, and laser/plasma acceleration modelling. ComPASS is providing accelerator scientists the tools required to enable the necessary accelerator simulation paradigm shift from high-fidelity single physics process modeling (covered under SciDAC1) to high-fidelity multiphysics modeling. Our computational frameworks have been used to model the behavior of a large number of accelerators and accelerator R&D experiments, assisting both their design and performance optimization. As parallel computational applications, the ComPASS codes have been shown to make effective use of thousands of processors. ComPASS is in the first year of executing its plan to develop the next-generation HPC accelerator modeling tools. ComPASS aims to develop an integrated simulation environment that will utilize existing and new accelerator physics modules with petascale capabilities, by employing modern computing and solver technologies. The ComPASS vision is to deliver to accelerator scientists a virtual accelerator and virtual prototyping modeling environment, with the necessary multiphysics, multiscale capabilities. The plan for this development includes delivering accelerator modeling applications appropriate for each stage of the ComPASS software evolution. Such applications are already being used to address challenging problems in accelerator design and optimization. The ComPASS organization

  25. Berkeley Lab Computing Sciences: Accelerating Scientific Discovery

    OpenAIRE

    Hules, John A.

    2009-01-01

    Scientists today rely on advances in computer science, mathematics, and computational science, as well as large-scale computing and networking facilities, to increase our understanding of ourselves, our planet, and our universe. Berkeley Lab's Computing Sciences organization researches, develops, and deploys new tools and technologies to meet these needs and to advance research in such areas as global climate change, combustion, fusion energy, nanotechnology, biology, and astrophysics.

  26. Community Petascale Project for Accelerator Science and Simulation: Advancing Computational Science for Future Accelerators and Accelerator Technologies

    Energy Technology Data Exchange (ETDEWEB)

    Spentzouris, Panagiotis (Fermilab); Cary, John (Tech-X, Boulder); McInnes, Lois Curfman (Argonne); Mori, Warren (UCLA); Ng, Cho (SLAC); Ng, Esmond; Ryne, Robert (LBL, Berkeley)

    2011-10-21

    The design and performance optimization of particle accelerators are essential for the success of the DOE scientific program in the next decade. Particle accelerators are very complex systems whose accurate description involves a large number of degrees of freedom and requires the inclusion of many physics processes. Building on the success of the SciDAC-1 Accelerator Science and Technology project, the SciDAC-2 Community Petascale Project for Accelerator Science and Simulation (ComPASS) is developing a comprehensive set of interoperable components for beam dynamics, electromagnetics, electron cooling, and laser/plasma acceleration modelling. ComPASS is providing accelerator scientists the tools required to enable the necessary accelerator simulation paradigm shift from high-fidelity single physics process modeling (covered under SciDAC1) to high-fidelity multiphysics modeling. Our computational frameworks have been used to model the behavior of a large number of accelerators and accelerator R&D experiments, assisting both their design and performance optimization. As parallel computational applications, the ComPASS codes have been shown to make effective use of thousands of processors.

  27. Community Petascale Project for Accelerator Science and Simulation: Advancing Computational Science for Future Accelerators and Accelerator Technologies

    Energy Technology Data Exchange (ETDEWEB)

    Spentzouris, Panagiotis (Fermilab); Cary, John (Tech-X, Boulder); McInnes, Lois Curfman (Argonne); Mori, Warren (UCLA); Ng, Cho (SLAC); Ng, Esmond; Ryne, Robert (LBL, Berkeley)

    2008-07-01

    The design and performance optimization of particle accelerators is essential for the success of the DOE scientific program in the next decade. Particle accelerators are very complex systems whose accurate description involves a large number of degrees of freedom and requires the inclusion of many physics processes. Building on the success of the SciDAC1 Accelerator Science and Technology project, the SciDAC2 Community Petascale Project for Accelerator Science and Simulation (ComPASS) is developing a comprehensive set of interoperable components for beam dynamics, electromagnetics, electron cooling, and laser/plasma acceleration modeling. ComPASS is providing accelerator scientists the tools required to enable the necessary accelerator simulation paradigm shift from high-fidelity single physics process modeling (covered under SciDAC1) to high-fidelity multi-physics modeling. Our computational frameworks have been used to model the behavior of a large number of accelerators and accelerator R&D experiments, assisting both their design and performance optimization. As parallel computational applications, the ComPASS codes have been shown to make effective use of thousands of processors.

  28. Software Accelerates Computing Time for Complex Math

    Science.gov (United States)

    2014-01-01

    Ames Research Center awarded Newark, Delaware-based EM Photonics Inc. SBIR funding to utilize graphics processing unit (GPU) technology, traditionally used for computer video games, to develop high-performance computing software called CULA. The software gives users the ability to run complex algorithms on personal computers with greater speed. As a result of the NASA collaboration, the number of employees at the company has increased 10 percent.

  29. Scientific computing with multicore and accelerators

    CERN Document Server

    Kurzak, Jakub; Dongarra, Jack

    2010-01-01

    Dense Linear Algebra: Implementing Matrix Multiplication on the Cell B.E. (Wesley Alvaro, Jakub Kurzak, and Jack Dongarra); Implementing Matrix Factorizations on the Cell BE (Jakub Kurzak and Jack Dongarra); Dense Linear Algebra for Hybrid GPU-Based Systems (Stanimire Tomov and Jack Dongarra); BLAS for GPUs (Rajib Nath, Stanimire Tomov, and Jack Dongarra). Sparse Linear Algebra: Sparse Matrix-Vector Multiplication on Multicore and Accelerators (Samuel Williams, Nathan B

  30. Quality Function Deployment (QFD) House of Quality for Strategic Planning of Computer Security of SMEs

    Directory of Open Access Journals (Sweden)

    Jorge A. Ruiz-Vanoye

    2013-01-01

    This article proposes to implement the Quality Function Deployment (QFD) House of Quality for strategic planning of computer security for Small and Medium Enterprises (SMEs). The House of Quality (HoQ) applied to the computer security of SMEs is a framework to convert the security needs of corporate computing into a set of specifications to improve computer security.

  31. Accelerating Iterative Big Data Computing Through MPI

    Institute of Scientific and Technical Information of China (English)

    梁帆; 鲁小亿

    2015-01-01

    Current popular systems, Hadoop and Spark, cannot achieve satisfactory performance because of the inefficient overlapping of computation and communication when running iterative big data applications. The pipeline of computing, data movement, and data management plays a key role in current distributed data computing systems. In this paper, we first analyze the overhead of the shuffle operation in Hadoop and Spark when running a PageRank workload, and then propose an event-driven pipeline and in-memory shuffle design with better overlapping of computation and communication as DataMPI-Iteration, an MPI-based library for iterative big data computing. Our performance evaluation shows DataMPI-Iteration can achieve 9X∼21X speedup over Apache Hadoop, and 2X∼3X speedup over Apache Spark for PageRank and K-means.
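
    The overlap idea can be sketched with a few lines of mpi4py (DataMPI-Iteration's own interface is not shown in this record): post the exchange with nonblocking calls, compute while the data is in flight, then wait before the next iteration.

```python
# Nonblocking ring exchange overlapping communication with computation,
# the core idea behind an event-driven, pipelined iterative shuffle.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

send_buf = np.random.rand(1_000_000)
recv_buf = np.empty_like(send_buf)

for iteration in range(10):
    reqs = [comm.Isend(send_buf, dest=(rank + 1) % size),
            comm.Irecv(recv_buf, source=(rank - 1) % size)]
    partial = np.sqrt(send_buf).sum()   # stand-in for PageRank/K-means work
    MPI.Request.Waitall(reqs)           # shuffle finished; data is ready
    send_buf, recv_buf = recv_buf, send_buf
```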

  32. Automated and Assistive Tools for Accelerated Code migration of Scientific Computing on to Heterogeneous MultiCore Systems

    Science.gov (United States)

    2017-04-13

    Report AFRL-AFOSR-UK-TR-2017-0029 (contract FA8655-12-1-2021; grant 12-2021; program element 61102F). ... code for heterogeneous multicore systems. The approach was based on the OmpSs programming model and the performance tools that constitute two strategic ...

  33. GPU-accelerated micromagnetic simulations using cloud computing

    Science.gov (United States)

    Jermain, C. L.; Rowlands, G. E.; Buhrman, R. A.; Ralph, D. C.

    2016-03-01

    Highly parallel graphics processing units (GPUs) can improve the speed of micromagnetic simulations significantly as compared to conventional computing using central processing units (CPUs). We present a strategy for performing GPU-accelerated micromagnetic simulations by utilizing cost-effective GPU access offered by cloud computing services with an open-source Python-based program for running the MuMax3 micromagnetics code remotely. We analyze the scaling and cost benefits of using cloud computing for micromagnetics.
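
    The remote-execution pattern can be sketched as below, assuming SSH access to a cloud GPU instance with MuMax3 installed. Host, user, key, and file names are hypothetical, and the authors' published program may be organized differently.

```python
# Drive a MuMax3 run on a rented cloud GPU from a local Python session.
import paramiko  # third-party SSH library, assumed available

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("gpu.example.com", username="ubuntu",
               key_filename="cloud_key.pem")           # hypothetical instance

sftp = client.open_sftp()
sftp.put("problem.mx3", "problem.mx3")                 # upload input script
_, stdout, _ = client.exec_command("mumax3 problem.mx3")
print(stdout.read().decode())                          # solver log
sftp.get("problem.out/table.txt", "table.txt")         # fetch results table
client.close()
```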

  34. GPU-accelerated micromagnetic simulations using cloud computing

    CERN Document Server

    Jermain, C L; Buhrman, R A; Ralph, D C

    2015-01-01

    Highly-parallel graphics processing units (GPUs) can improve the speed of micromagnetic simulations significantly as compared to conventional computing using central processing units (CPUs). We present a strategy for performing GPU-accelerated micromagnetic simulations by utilizing cost-effective GPU access offered by cloud computing services with an open-source Python-based program for running the MuMax3 micromagnetics code remotely. We analyze the scaling and cost benefits of using cloud computing for micromagnetics.

  35. Computational Tools to Accelerate Commercial Development

    Energy Technology Data Exchange (ETDEWEB)

    Miller, David C

    2013-01-01

    The goals of the work reported are: to develop new computational tools and models to enable industry to more rapidly develop and deploy new advanced energy technologies; to demonstrate the capabilities of the CCSI Toolset on non-proprietary case studies; and to deploy the CCSI Toolset to industry. Challenges of simulating carbon capture (and other) processes include: dealing with multiple scales (particle, device, and whole process scales); integration across scales; verification, validation, and uncertainty; and decision support. The tools cover: risk analysis and decision making; validated, high-fidelity CFD; high-resolution filtered sub-models; process design and optimization tools; advanced process control and dynamics; process models; basic data sub-models; and cross-cutting integration tools.

  36. Accelerating scientific computations with mixed precision algorithms

    Science.gov (United States)

    Baboulin, Marc; Buttari, Alfredo; Dongarra, Jack; Kurzak, Jakub; Langou, Julie; Langou, Julien; Luszczek, Piotr; Tomov, Stanimire

    2009-12-01

    On modern architectures, the performance of 32-bit operations is often at least twice as fast as the performance of 64-bit operations. By using a combination of 32-bit and 64-bit floating point arithmetic, the performance of many dense and sparse linear algebra algorithms can be significantly enhanced while maintaining the 64-bit accuracy of the resulting solution. The approach presented here can apply not only to conventional processors but also to other technologies such as Field Programmable Gate Arrays (FPGA), Graphical Processing Units (GPU), and the STI Cell BE processor. Results on modern processor architectures and the STI Cell BE are presented. Program summary: Program title: ITER-REF. Catalogue identifier: AECO_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AECO_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 7211. No. of bytes in distributed program, including test data, etc.: 41 862. Distribution format: tar.gz. Programming language: FORTRAN 77. Computer: desktop, server. Operating system: Unix/Linux. RAM: 512 Mbytes. Classification: 4.8. External routines: BLAS (optional). Nature of problem: On modern architectures, the performance of 32-bit operations is often at least twice as fast as the performance of 64-bit operations. By using a combination of 32-bit and 64-bit floating point arithmetic, the performance of many dense and sparse linear algebra algorithms can be significantly enhanced while maintaining the 64-bit accuracy of the resulting solution. Solution method: Mixed precision algorithms stem from the observation that, in many cases, a single precision solution of a problem can be refined to the point where double precision accuracy is achieved. A common approach to the solution of linear systems, either dense or sparse, is to perform the LU
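
    The solution method can be condensed into a small NumPy/SciPy sketch (not the ITER-REF code itself): do the O(n^3) LU factorization in 32-bit arithmetic, then recover 64-bit accuracy with cheap O(n^2) residual corrections.

```python
# Mixed-precision iterative refinement: factor in single, refine in double.
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(0)
n = 500
a = rng.random((n, n)) + n * np.eye(n)      # well-conditioned test system
b = rng.random(n)

lu32 = lu_factor(a.astype(np.float32))      # expensive step, in float32
x = lu_solve(lu32, b.astype(np.float32)).astype(np.float64)

for _ in range(5):                          # cheap refinement sweeps
    r = b - a @ x                           # residual in float64
    x += lu_solve(lu32, r.astype(np.float32))

print(np.linalg.norm(b - a @ x) / np.linalg.norm(b))   # ~1e-16
```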

  37. Accelerated Matrix Element Method with Parallel Computing

    CERN Document Server

    Schouten, Doug; Stelzer, Bernd

    2014-01-01

    The matrix element method utilizes ab initio calculations of probability densities as powerful discriminants for processes of interest in experimental particle physics. The method has already been used successfully at previous and current collider experiments. However, the computational complexity of this method for final states with many particles and degrees of freedom sets it at a disadvantage compared to supervised classification methods such as decision trees, k nearest-neighbour, or neural networks. This note presents a concrete implementation of the matrix element technique using graphics processing units. Due to the intrinsic parallelizability of multidimensional integration, dramatic speedups can be readily achieved, which makes the matrix element technique viable for general usage at collider experiments.
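
    The "intrinsic parallelizability" is the independence of the Monte Carlo samples of the multidimensional integral. The vectorized NumPy sketch below shows that structure with a toy integrand standing in for the squared matrix element; on a GPU, each sample maps naturally to a thread.

```python
# Monte Carlo estimate of a multidimensional integral; every sample is
# independent, which is what makes GPU execution so effective.
import numpy as np

def integrand(x):                      # toy stand-in for |M|^2 x PDFs
    return np.exp(-np.sum(x**2, axis=1))

rng = np.random.default_rng(42)
dim, n_samples = 4, 1_000_000          # 4-dim toy "phase space"
points = rng.uniform(-4.0, 4.0, size=(n_samples, dim))
volume = 8.0 ** dim

estimate = volume * integrand(points).mean()
print(estimate, "vs exact", np.pi ** (dim / 2))   # integral of exp(-|x|^2)
```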

  38. Large rate accelerations in antibody catalysis by strategic use of haptenic charge.

    Science.gov (United States)

    Thorn, S N; Daniels, R G; Auditor, M T; Hilvert, D

    1995-01-19

    General acid-base catalysis contributes substantially to the efficacy of many enzymes, enabling an impressive array of eliminations, isomerizations, racemizations, hydrolyses and carbon-carbon bond-forming reactions to be carried out with high rates and selectivities. The fundamental challenge of exploiting similar effects in designed catalysts such as catalytic antibodies is that of correctly positioning the catalytic groups in an appropriate active-site microenvironment. Charge complementarity between antibody and hapten (the template used to induce an antibody) has been used successfully in a number of instances to elicit acids and bases within immunoglobulin combining sites, but the activities of the catalysts obtained by this strategy are generally considerably lower than those of natural enzymes. Here we report that by optimizing hapten design and efficiently screening the immune response, antibodies can be obtained that act effectively as general base catalysts. Thus a cationic hapten correctly mimicking the transition-state geometry of all reacting bonds and bearing little resemblance to the reaction product has yielded carboxylate-containing antibodies that catalyse an E2 elimination with more than 10^3 turnovers per active site and rate accelerations of greater than 10^8. These results demonstrate that very large effects can be achieved by strategic use of haptenic charge.

  39. Neural computation and particle accelerators: research, technology and applications

    CERN Document Server

    D'Arras, Horace

    2010-01-01

    This book discusses neural computation, a network or circuit of biological neurons, and, relatedly, particle accelerators, scientific instruments which accelerate charged particles such as protons, electrons and deuterons. Accelerators have a very broad range of applications in many industrial fields, from high energy physics to medical isotope production. Nuclear technology is one of the fields discussed in this book. The development that has been reached by particle accelerators in energy and particle intensity has opened the possibility of a wide number of new applications in nuclear technology. This book reviews the applications in the nuclear energy field, and the design features of high power neutron sources are explained. Surface treatments of niobium flat samples and superconducting radio frequency cavities by a new technique called gas cluster ion beam are also studied in detail, as well as the process of electropolishing. Furthermore, magnetic devices such as solenoids, dipoles and undulators, which ...

  40. A computer-based aid for the design of a strategic organizational culture

    OpenAIRE

    1998-01-01

    This paper presents a theoretical framework for the alignment of organizational culture and strategy by integrating knowledge from diverse areas of organizational studies including strategic human resource management, organizational culture, and the specific design of human resource practices. It then describes a computer-based aid which offers practitioners a step by step guide for improving their competitive position through the development of a "strategic" culture. It is proposed that orga...

  41. Computational Tools for Accelerating Carbon Capture Process Development

    Energy Technology Data Exchange (ETDEWEB)

    Miller, David; Sahinidis, N V; Cozad, A; Lee, A; Kim, H; Morinelly, J; Eslick, J; Yuan, Z

    2013-06-04

    This presentation reports the development of advanced computational tools to accelerate next-generation technology development. These tools are intended to help industry more rapidly develop and deploy an optimized process using rigorous models. They include: Process Models; Simulation-Based Optimization; Optimized Process; Uncertainty Quantification; Algebraic Surrogate Models; and Superstructure Optimization (Determine Configuration).

  42. The computer-based control system of the NAC accelerator

    Science.gov (United States)

    Burdzik, G. F.; Bouckaert, R. F. A.; Cloete, I.; Dutoit, J. S.; Kohler, I. H.; Truter, J. N. J.; Visser, K.; Wikner, V. C. S. J.

    The National Accelerator Center (NAC) of the CSIR is building a two-stage accelerator which will provide charged-particle beams for use in medical and research applications. The control system for this accelerator is based on three mini-computers and a CAMAC interfacing network. Closed-loop control is being relegated to the various subsystems of the accelerators, and the computers and CAMAC network will be used in the first instance for data transfer, monitoring and servicing of the control consoles. The processing power of the computers will be utilized for automating start-up and beam-change procedures, for providing flexible and convenient information at the control consoles, for fault diagnosis and for beam-optimizing procedures. Tasks of a localized or dedicated nature are being off-loaded onto microcomputers, which are being used either in front-end devices or as slaves to the mini-computers. On the control consoles only a few instruments for setting and monitoring variables are being provided, but these instruments are universally-linkable to any appropriate machine variable.

  43. Collaborative Strategic Board Games as a Site for Distributed Computational Thinking

    Science.gov (United States)

    Berland, Matthew; Lee, Victor R.

    2011-01-01

    This paper examines the idea that contemporary strategic board games represent an informal, interactional context in which complex computational thinking takes place. When games are collaborative--that is, a game requires that players work in joint pursuit of a shared goal--the computational thinking is easily observed as distributed across…

  44. Accelerating Climate and Weather Simulations through Hybrid Computing

    Science.gov (United States)

    Zhou, Shujia; Cruz, Carlos; Duffy, Daniel; Tucker, Robert; Purcell, Mark

    2011-01-01

    Unconventional multi- and many-core processors (e.g., IBM Cell B.E. and NVIDIA GPU) have emerged as effective accelerators in trial climate and weather simulations. Yet these climate and weather models typically run on parallel computers with conventional processors (e.g., Intel, AMD, and IBM) using the Message Passing Interface. To address challenges involved in efficiently and easily connecting accelerators to parallel computers, we investigated using IBM's Dynamic Application Virtualization (IBM DAV) software in a prototype hybrid computing system with representative climate and weather model components. The hybrid system comprises two Intel blades and two IBM QS22 Cell B.E. blades, connected with both InfiniBand (IB) and 1-Gigabit Ethernet. The system significantly accelerates a solar radiation model component by offloading compute-intensive calculations to the Cell blades. Systematic tests show that IBM DAV can seamlessly offload compute-intensive calculations from Intel blades to Cell B.E. blades in a scalable, load-balanced manner. However, noticeable communication overhead was observed, mainly due to IP over the IB protocol. Full utilization of the IB Sockets Direct Protocol and the lower latency production version of IBM DAV will reduce this overhead.

  45. Accelerating Neuroimage Registration through Parallel Computation of Similarity Metric.

    Directory of Open Access Journals (Sweden)

    Yun-Gang Luo

    Neuroimage registration is crucial for brain morphometric analysis and treatment efficacy evaluation. However, existing advanced registration algorithms such as FLIRT and ANTs are not efficient enough for clinical use. In this paper, a GPU implementation of FLIRT with the correlation ratio (CR) as the similarity metric and a GPU-accelerated correlation coefficient (CC) calculation for the symmetric diffeomorphic registration of ANTs have been developed. The comparison with their corresponding original tools shows that our accelerated algorithms can greatly outperform the originals in terms of computational efficiency. This paper demonstrates the great potential of applying these registration tools in clinical applications.
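
    For reference, the correlation coefficient (CC), one of the two similarity metrics the paper accelerates, reduces to a few NumPy lines; the GPU version computes the same quantity, once per candidate transform, across many threads.

```python
# Correlation coefficient between a fixed and a moving image; registration
# searches for the transform that maximizes it.
import numpy as np

def correlation_coefficient(fixed, moving):
    f = fixed.ravel() - fixed.mean()
    m = moving.ravel() - moving.mean()
    return (f @ m) / np.sqrt((f @ f) * (m @ m))

# Sanity check: a volume correlates perfectly with a rescaled copy of itself.
img = np.random.rand(64, 64, 64)
print(correlation_coefficient(img, 2.0 * img + 1.0))   # -> 1.0
```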

  46. Quantum computing accelerator I/O: LDRD 52750 final report.

    Energy Technology Data Exchange (ETDEWEB)

    Schroeppel, Richard Crabtree; Modine, Normand Arthur; Ganti, Anand; Pierson, Lyndon George; Tigges, Christopher P.

    2003-12-01

    In a superposition of quantum states, a bit can be in both the states '0' and '1' at the same time. This feature of the quantum bit or qubit has no parallel in classical systems. Currently, quantum computers consisting of 4 to 7 qubits in a 'quantum computing register' have been built. Innovative algorithms suited to quantum computing are now beginning to emerge, applicable to sorting and cryptanalysis, and other applications. A framework for overcoming slightly inaccurate quantum gate interactions and for causing quantum states to survive interactions with surrounding environment is emerging, called quantum error correction. Thus there is the potential for rapid advances in this field. Although quantum information processing can be applied to secure communication links (quantum cryptography) and to crack conventional cryptosystems, the first few computing applications will likely involve a 'quantum computing accelerator' similar to a 'floating point arithmetic accelerator' interfaced to a conventional Von Neumann computer architecture. This research is to develop a roadmap for applying Sandia's capabilities to the solution of some of the problems associated with maintaining quantum information, and with getting data into and out of such a 'quantum computing accelerator'. We propose to focus this work on 'quantum I/O technologies' by applying quantum optics on semiconductor nanostructures to leverage Sandia's expertise in semiconductor microelectronic/photonic fabrication techniques, as well as its expertise in information theory, processing, and algorithms. The work will be guided by understanding of practical requirements of computing and communication architectures. This effort will incorporate ongoing collaboration between 9000, 6000 and 1000 and between junior and senior personnel. Follow-on work to fabricate and evaluate appropriate experimental nano/microstructures will be
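
    The superposition described in the opening sentences can be illustrated with a tiny state-vector computation (purely pedagogical, unrelated to the hardware roadmap the report develops): a Hadamard gate places one qubit in an equal superposition of '0' and '1'.

```python
# One qubit in superposition: amplitudes after a Hadamard gate, and the
# resulting measurement probabilities.
import numpy as np

ket0 = np.array([1.0, 0.0])                  # the qubit prepared in |0>
hadamard = np.array([[1.0,  1.0],
                     [1.0, -1.0]]) / np.sqrt(2.0)

psi = hadamard @ ket0                        # (|0> + |1>) / sqrt(2)
print(np.abs(psi) ** 2)                      # -> [0.5, 0.5]
```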

  47. Scientific Computing Strategic Plan for the Idaho National Laboratory

    Energy Technology Data Exchange (ETDEWEB)

    Whiting, Eric Todd [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-09-01

    Scientific computing is a critical foundation of modern science. Without innovations in the field of computational science, the essential missions of the Department of Energy (DOE) would go unrealized. Taking a leadership role in such innovations is Idaho National Laboratory’s (INL’s) challenge and charge, and is central to INL’s ongoing success. Computing is an essential part of INL’s future. DOE science and technology missions rely firmly on computing capabilities in various forms. Modeling and simulation, fueled by innovations in computational science and validated through experiment, are a critical foundation of science and engineering. Big data analytics from an increasing number of widely varied sources is opening new windows of insight and discovery. Computing is a critical tool in education, science, engineering, and experiments. Advanced computing capabilities in the form of people, tools, computers, and facilities will position INL competitively to deliver results and solutions on important national science and engineering challenges. A computing strategy must include much more than simply computers. The foundational enabling component of computing at many DOE national laboratories is the combination of a showcase-like data center facility coupled with a very capable supercomputer. In addition, network connectivity, disk storage systems, and visualization hardware are critical and generally tightly coupled to the computer system and co-located in the same facility. The existence of these resources in a single data center facility opens the doors to many opportunities that would not otherwise be possible.

  48. Pennsylvania's Transition to Enterprise Computing as a Study in Strategic Alignment

    Science.gov (United States)

    Sawyer, Steve; Hinnant, Charles C.; Rizzuto, Tracey

    2008-01-01

    We theorize about the strategic alignment of computing with organizational mission, using the Commonwealth of Pennsylvania's efforts to pursue digital government initiatives as evidence. To do this we draw on a decade (1995-2004) of changes in Pennsylvania to characterize how a state government shifts from an organizational to an enterprise…

  49. A Quantitative Study of the Relationship between Leadership Practice and Strategic Intentions to Use Cloud Computing

    Science.gov (United States)

    Castillo, Alan F.

    2014-01-01

    The purpose of this quantitative correlational cross-sectional research study was to examine a theoretical model consisting of leadership practice, attitudes of business process outsourcing, and strategic intentions of leaders to use cloud computing and to examine the relationships between each of the variables respectively. This study…

  50. Computing at DESY — current setup, trends and strategic directions

    Science.gov (United States)

    Ernst, Michael

    1998-05-01

    Since the HERA experiments H1 and ZEUS started data taking in '92, the computing environment at DESY has changed dramatically. Having run mainframe-centred computing for more than 20 years, DESY switched to a heterogeneous, fully distributed computing environment within only about two years, in almost every corner where computing has its applications. The computing strategy was highly influenced by the needs of the user community. The collaborations are usually limited by current technology, and their ever-increasing demands are the driving force for central computing to always move close to the technology edge. While DESY's central computing has multidecade experience in running Central Data Recording/Central Data Processing for HEP experiments, the most challenging task today is to provide clear and homogeneous concepts in the desktop area. Given that lowest-level commodity hardware draws more and more attention, combined with the financial constraints we are already facing today, we quickly need concepts for integrated support of a versatile device which has the potential to move into basically any computing area in HEP. Though commercial solutions, especially those addressing the PC management/support issues, are expected to come to market in the next 2-3 years, we need to provide suitable solutions now. Buying PCs at DESY currently at a rate of about 30/month will otherwise absorb any available manpower in central computing and will still leave hundreds of people unhappy. Though certainly not the only area, the desktop issue is one of the most important ones where we need HEP-wide collaboration to a large extent, and right now. Taking into account that there is traditionally no room for R&D at DESY, collaboration, meaning sharing experience and development resources within the HEP community, is a predominant factor for us.

  13. Computational modeling of high pressure combustion mechanism in scram accelerator

    Energy Technology Data Exchange (ETDEWEB)

    Choi, J.Y. [Pusan Nat. Univ. (Korea); Lee, B.J. [Pusan Nat. Univ. (Korea); Agency for Defense Development, Taejon (Korea); Jeung, I.S. [Pusan Nat. Univ. (Korea); Seoul National Univ. (Korea). Dept. of Aerospace Engineering

    2000-11-01

    A computational study was carried out to analyze high-pressure combustion in a scram accelerator. Fluid dynamic modeling was based on the RANS equations for reactive flows, which were solved in a fully coupled manner using a fully implicit-upwind TVD scheme. For accurate simulation of high-pressure combustion in a ram accelerator, a 9-species, 25-step fully detailed reaction mechanism was incorporated into the existing CFD code previously used for ram accelerator studies. The mechanism is based on GRI-Mech 2.11, which includes the pressure-dependent reaction rate formulations indispensable for correct prediction of induction time in a high-pressure environment. A real-gas equation of state was also included to account for molecular interactions and real-gas effects of high-pressure gases. The present combustion modeling is compared with previous 8-step and 19-step mechanisms under the ideal-gas assumption. The results show that mixture ignition characteristics are very sensitive to the combustion mechanism, and that different mechanisms result in different reactive flow-field characteristics that have significant relevance to the operation mode and performance of the scram accelerator. (orig.)
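
    The abstract notes that a real-gas equation of state was added but does not name it. As a hedged illustration of what such a correction does, the sketch below compares the ideal-gas law with the van der Waals equation, one common real-gas model; the species constants are for H2, and the state values are invented, not the paper's.

      /* Illustrative only: ideal gas vs. van der Waals at high pressure.
         Constants a, b below are for H2; the state values are invented. */
      #include <stdio.h>

      #define R 8.314462618               /* gas constant, J/(mol*K) */

      static double p_ideal(double n, double V, double T)
      {
          return n * R * T / V;
      }

      static double p_vdw(double n, double V, double T, double a, double b)
      {
          /* Excluded volume (b) raises pressure; attraction (a) lowers it. */
          return n * R * T / (V - n * b) - a * n * n / (V * V);
      }

      int main(void)
      {
          double n = 100.0, V = 0.01, T = 1500.0;  /* 100 mol in 10 L at 1500 K */
          printf("ideal: %.3e Pa   van der Waals: %.3e Pa\n",
                 p_ideal(n, V, T), p_vdw(n, V, T, 0.02476, 2.661e-5));
          return 0;
      }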

  14. Fast acceleration of 2D wave propagation simulations using modern computational accelerators.

    Science.gov (United States)

    Wang, Wei; Xu, Lifan; Cavazos, John; Huang, Howie H; Kay, Matthew

    2014-01-01

    Recent developments in modern computational accelerators like Graphics Processing Units (GPUs) and coprocessors provide great opportunities for making scientific applications run faster than ever before. However, efficient parallelization of scientific code using new programming tools like CUDA requires a high level of expertise that is not available to many scientists. This, plus the fact that parallelized code is usually not portable to different architectures, creates major challenges for exploiting the full capabilities of modern computational accelerators. In this work, we sought to overcome these challenges by studying how to achieve both automated parallelization using OpenACC and enhanced portability using OpenCL. We applied our parallelization schemes using GPUs as well as the Intel Many Integrated Core (MIC) coprocessor to reduce the run time of wave propagation simulations. We used a well-established 2D cardiac action potential model as a specific case study. To the best of our knowledge, we are the first to study auto-parallelization of 2D cardiac wave propagation simulations using OpenACC. Our results identify several approaches that provide substantial speedups. The OpenACC-generated GPU code achieved more than 150x speedup above the sequential implementation and required the addition of only a few OpenACC pragmas to the code. An OpenCL implementation provided speedups on GPUs of at least 200x relative to the sequential implementation and 30x relative to a parallelized OpenMP implementation. An implementation of OpenMP on the Intel MIC coprocessor provided speedups of 120x with only a few code changes to the sequential implementation. We highlight that OpenACC provides an automatic, efficient, and portable approach to achieve parallelization of 2D cardiac wave simulations on GPUs. Our approach of using OpenACC, OpenCL, and OpenMP to parallelize this particular model on modern computational accelerators should be applicable to other computational models of…
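
    To make the abstract's central point concrete, the hedged sketch below (not the paper's cardiac model; grid size, coefficient, and names are invented) shows the style of OpenACC annotation involved: a single pragma over a 2D stencil loop nest is enough for an OpenACC compiler to generate a GPU kernel, while other compilers simply ignore it and run the code serially.

      /* Hedged sketch: a 2D diffusion-style stencil update with OpenACC. */
      #include <stdio.h>
      #include <stdlib.h>

      #define NX 512
      #define NY 512

      static void step(const float *restrict u, float *restrict u_new, float c)
      {
          /* The pragma turns this loop nest into a GPU kernel and handles
             the host-device copies named in the data clauses. */
          #pragma acc parallel loop collapse(2) copyin(u[0:NX*NY]) copy(u_new[0:NX*NY])
          for (int i = 1; i < NX - 1; i++)
              for (int j = 1; j < NY - 1; j++)
                  u_new[i*NY + j] = u[i*NY + j]
                      + c * (u[(i+1)*NY + j] + u[(i-1)*NY + j]
                           + u[i*NY + j + 1] + u[i*NY + j - 1]
                           - 4.0f * u[i*NY + j]);
      }

      int main(void)
      {
          float *u = calloc(NX * NY, sizeof *u);
          float *u_new = calloc(NX * NY, sizeof *u_new);
          u[(NX/2)*NY + NY/2] = 1.0f;                   /* point stimulus */
          step(u, u_new, 0.2f);
          printf("%g\n", u_new[(NX/2)*NY + NY/2 + 1]);  /* prints 0.2 */
          free(u); free(u_new);
          return 0;
      }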

  15. Fast acceleration of 2D wave propagation simulations using modern computational accelerators.

    Directory of Open Access Journals (Sweden)

    Wei Wang

    Full Text Available Recent developments in modern computational accelerators like Graphics Processing Units (GPUs) and coprocessors provide great opportunities for making scientific applications run faster than ever before. However, efficient parallelization of scientific code using new programming tools like CUDA requires a high level of expertise that is not available to many scientists. This, plus the fact that parallelized code is usually not portable to different architectures, creates major challenges for exploiting the full capabilities of modern computational accelerators. In this work, we sought to overcome these challenges by studying how to achieve both automated parallelization using OpenACC and enhanced portability using OpenCL. We applied our parallelization schemes using GPUs as well as the Intel Many Integrated Core (MIC) coprocessor to reduce the run time of wave propagation simulations. We used a well-established 2D cardiac action potential model as a specific case study. To the best of our knowledge, we are the first to study auto-parallelization of 2D cardiac wave propagation simulations using OpenACC. Our results identify several approaches that provide substantial speedups. The OpenACC-generated GPU code achieved more than 150x speedup above the sequential implementation and required the addition of only a few OpenACC pragmas to the code. An OpenCL implementation provided speedups on GPUs of at least 200x relative to the sequential implementation and 30x relative to a parallelized OpenMP implementation. An implementation of OpenMP on the Intel MIC coprocessor provided speedups of 120x with only a few code changes to the sequential implementation. We highlight that OpenACC provides an automatic, efficient, and portable approach to achieve parallelization of 2D cardiac wave simulations on GPUs. Our approach of using OpenACC, OpenCL, and OpenMP to parallelize this particular model on modern computational accelerators should be applicable to other…

  16. Accelerating MATLAB with GPU computing a primer with examples

    CERN Document Server

    Suh, Jung W

    2013-01-01

    Beyond simulation and algorithm development, many developers increasingly use MATLAB even for product deployment in computationally heavy fields. This often demands that MATLAB codes run faster by leveraging the distributed parallelism of Graphics Processing Units (GPUs). While MATLAB successfully provides high-level functions as a simulation tool for rapid prototyping, the underlying details and knowledge needed for utilizing GPUs make MATLAB users hesitate to step into it. Accelerating MATLAB with GPUs offers a primer on bridging this gap. Starting with the basics, setting up MATLAB for

  17. A Study on Strategic Provisioning of Cloud Computing Services

    Directory of Open Access Journals (Sweden)

    Md Whaiduzzaman

    2014-01-01

    Full Text Available Cloud computing is currently emerging as an ever-changing, growing paradigm that models “everything-as-a-service.” Virtualised physical resources, infrastructure, and applications are supplied by service provisioning in the cloud. The evolution in the adoption of cloud computing is driven by clear and distinct promising features for both cloud users and cloud providers. However, the increasing number of cloud providers and the variety of service offerings have made it difficult for customers to choose the best services. By employing successful service provisioning, the essential services required by customers, such as agility and availability, pricing, security and trust, and user metrics, can be guaranteed. Hence, continuous service provisioning that satisfies the user requirements is a mandatory feature for the cloud user and vitally important in cloud computing service offerings. Therefore, we aim to review the state-of-the-art service provisioning objectives, essential services, topologies, user requirements, necessary metrics, and pricing mechanisms. We synthesize and summarize different provisioning techniques, approaches, and models through a comprehensive literature review. A thematic taxonomy of cloud service provisioning is presented after the systematic review. Finally, future research directions and open research issues are identified.

  18. A Study on Strategic Provisioning of Cloud Computing Services

    Science.gov (United States)

    Rejaul Karim Chowdhury, Md

    2014-01-01

    Cloud computing is currently emerging as an ever-changing, growing paradigm that models “everything-as-a-service.” Virtualised physical resources, infrastructure, and applications are supplied by service provisioning in the cloud. The evolution in the adoption of cloud computing is driven by clear and distinct promising features for both cloud users and cloud providers. However, the increasing number of cloud providers and the variety of service offerings have made it difficult for customers to choose the best services. By employing successful service provisioning, the essential services required by customers, such as agility and availability, pricing, security and trust, and user metrics, can be guaranteed. Hence, continuous service provisioning that satisfies the user requirements is a mandatory feature for the cloud user and vitally important in cloud computing service offerings. Therefore, we aim to review the state-of-the-art service provisioning objectives, essential services, topologies, user requirements, necessary metrics, and pricing mechanisms. We synthesize and summarize different provisioning techniques, approaches, and models through a comprehensive literature review. A thematic taxonomy of cloud service provisioning is presented after the systematic review. Finally, future research directions and open research issues are identified. PMID:25032243

  19. A study on strategic provisioning of cloud computing services.

    Science.gov (United States)

    Whaiduzzaman, Md; Haque, Mohammad Nazmul; Rejaul Karim Chowdhury, Md; Gani, Abdullah

    2014-01-01

    Cloud computing is currently emerging as an ever-changing, growing paradigm that models "everything-as-a-service." Virtualised physical resources, infrastructure, and applications are supplied by service provisioning in the cloud. The evolution in the adoption of cloud computing is driven by clear and distinct promising features for both cloud users and cloud providers. However, the increasing number of cloud providers and the variety of service offerings have made it difficult for customers to choose the best services. By employing successful service provisioning, the essential services required by customers, such as agility and availability, pricing, security and trust, and user metrics, can be guaranteed. Hence, continuous service provisioning that satisfies the user requirements is a mandatory feature for the cloud user and vitally important in cloud computing service offerings. Therefore, we aim to review the state-of-the-art service provisioning objectives, essential services, topologies, user requirements, necessary metrics, and pricing mechanisms. We synthesize and summarize different provisioning techniques, approaches, and models through a comprehensive literature review. A thematic taxonomy of cloud service provisioning is presented after the systematic review. Finally, future research directions and open research issues are identified.

  1. On-Chip Reconfigurable Hardware Accelerators for Popcount Computations

    Directory of Open Access Journals (Sweden)

    Valery Sklyarov

    2016-01-01

    Full Text Available Popcount computations are widely used in such areas as combinatorial search, data processing, statistical analysis, and bio- and chemical informatics. In many practical problems the size of the initial data is very large and an increase in throughput is important. The paper suggests two types of hardware accelerators that are (1) designed in FPGAs and (2) implemented in Zynq-7000 all programmable systems-on-chip, with partitioning of algorithms that use popcounts between software on the ARM Cortex-A9 processing system and advanced programmable logic. A three-level system architecture that includes a general-purpose computer, the problem-specific ARM, and reconfigurable hardware is then proposed. The results of experiments and comparisons with existing benchmarks demonstrate that although the throughput of popcount computations is increased in FPGA-based designs interacting with general-purpose computers, communication overheads (in experiments with PCI Express) are significant, and actual advantages can be gained if not only popcount but also other types of relevant computations are implemented in hardware. The comparison of software/hardware designs for Zynq-7000 all programmable systems-on-chip with pure software implementations in the same Zynq-7000 devices demonstrates an increase in performance by a factor ranging from 5 to 19 (taking into account all the involved communication overheads between the programmable logic and the processing systems).
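
    For readers unfamiliar with the operation being accelerated: a popcount simply counts the set bits in a word. The classic SWAR software version below (a textbook baseline, not the paper's FPGA design) is the kind of computation the proposed hardware moves into programmable logic.

      /* Textbook SWAR population count; illustrative baseline only. */
      #include <stdint.h>
      #include <stdio.h>

      static uint32_t popcount32(uint32_t x)
      {
          x = x - ((x >> 1) & 0x55555555u);                 /* pairs of bits */
          x = (x & 0x33333333u) + ((x >> 2) & 0x33333333u); /* nibbles       */
          x = (x + (x >> 4)) & 0x0F0F0F0Fu;                 /* bytes         */
          return (x * 0x01010101u) >> 24;                   /* sum the bytes */
      }

      int main(void)
      {
          printf("%u\n", popcount32(0xF0F0F0F0u));          /* prints 16 */
          return 0;
      }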

  2. Distance Computation Between Non-Holonomic Motions with Constant Accelerations

    Directory of Open Access Journals (Sweden)

    Enrique J. Bernabeu

    2013-09-01

    Full Text Available A method for computing the distance between two moving robots or between a mobile robot and a dynamic obstacle with linear or arc-like motions and with constant accelerations is presented in this paper. This distance is obtained without stepping or discretizing the motions of the robots or obstacles. The robots and obstacles are modelled by convex hulls. This technique obtains the future instant in time when two moving objects will be at their minimum translational distance, i.e., at their minimum separation or maximum penetration (if they will collide). This distance and the future instant in time are computed in parallel. This method is intended to be run each time new information from the world is received and, consequently, it can be used for generating collision-free trajectories for non-holonomic mobile robots.
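
    The paper's method is analytic and avoids discretization; the deliberately naive sketch below (all vectors invented) only illustrates the underlying formulation: with constant accelerations the squared relative distance is a quartic in t, so candidate minima are the interval endpoints and the real roots of its cubic derivative.

      /* Naive numeric sketch, not the paper's method: minimize
         f(t) = |dp + dv*t + 0.5*da*t^2|^2 on [0, T] by bracketing roots
         of the cubic f'(t)/2 and refining them with bisection. */
      #include <stdio.h>
      #include <math.h>

      typedef struct { double x, y; } Vec2;

      static double dot(Vec2 a, Vec2 b) { return a.x * b.x + a.y * b.y; }

      /* Coefficients of f'(t)/2 = c0 + c1 t + c2 t^2 + c3 t^3. */
      static void cubic(Vec2 dp, Vec2 dv, Vec2 da, double c[4])
      {
          c[0] = dot(dp, dv);
          c[1] = dot(dp, da) + dot(dv, dv);
          c[2] = 1.5 * dot(dv, da);
          c[3] = 0.5 * dot(da, da);
      }

      static double eval(const double c[4], double t)
      {
          return ((c[3] * t + c[2]) * t + c[1]) * t + c[0];
      }

      static double fsq(Vec2 dp, Vec2 dv, Vec2 da, double t)
      {
          Vec2 d = { dp.x + dv.x * t + 0.5 * da.x * t * t,
                     dp.y + dv.y * t + 0.5 * da.y * t * t };
          return dot(d, d);
      }

      int main(void)
      {
          /* Hypothetical relative position, velocity, acceleration. */
          Vec2 dp = { 5.0, 2.0 }, dv = { -2.0, 0.0 }, da = { 0.0, -0.5 };
          double c[4], T = 10.0, best_t = 0.0, best = fsq(dp, dv, da, 0.0);

          cubic(dp, dv, da, c);
          if (fsq(dp, dv, da, T) < best) { best = fsq(dp, dv, da, T); best_t = T; }

          for (int i = 0; i < 100; i++) {      /* scan for sign changes */
              double a = T * i / 100.0, b = T * (i + 1) / 100.0;
              if (eval(c, a) * eval(c, b) <= 0.0) {
                  for (int it = 0; it < 60; it++) {
                      double m = 0.5 * (a + b);
                      if (eval(c, a) * eval(c, m) <= 0.0) b = m; else a = m;
                  }
                  double t = 0.5 * (a + b), f = fsq(dp, dv, da, t);
                  if (f < best) { best = f; best_t = t; }
              }
          }
          printf("min distance %.4f at t = %.4f\n", sqrt(best), best_t);
          return 0;
      }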

  3. Creating a strategic plan for configuration management using computer aided software engineering (CASE) tools

    Energy Technology Data Exchange (ETDEWEB)

    Smith, P.R.; Sarfaty, R.

    1993-05-01

    This paper provides guidance in the definition, documentation, measurement, and enhancement of processes, and in the validation of a strategic plan for configuration management (CM). The approach and methodology used in establishing a strategic plan are the same for any enterprise, including the Department of Energy (DOE), commercial nuclear plants, the Department of Defense (DOD), or large industrial complexes. The principles and techniques presented are used worldwide by some of the largest corporations. The authors used industry knowledge and the areas of their current employment to illustrate and provide examples. Developing a strategic configuration and information management plan for DOE Idaho Field Office (DOE-ID) facilities is discussed in this paper. A good knowledge of CM principles is the key to successful strategic planning. This paper describes and defines CM elements, and discusses how CM integrates a facility's physical configuration, design basis, and documentation. The strategic plan does not need the support of a computer-aided software engineering (CASE) tool. However, the use of a CASE tool provides a methodology for consistency in approach, graphics, and database capability, combined to form an encyclopedia and a method of presentation that is easily understood and aids the process of reengineering. CASE tools have much more capability than stated above. One example is supporting a joint application development group (JAD) in preparing a software functional specification document and, if necessary, providing the capability to automatically generate software application code. This paper briefly discusses the characteristics and capabilities of two CASE tools that use different methodologies to generate similar deliverables.

  4. Cloud Computing (SaaS) Adoption as a Strategic Technology: Results of an Empirical Study

    Directory of Open Access Journals (Sweden)

    Pedro R. Palos-Sanchez

    2017-01-01

    Full Text Available The present study empirically analyzes the factors that determine the adoption of the cloud computing (SaaS) model in firms where this strategy is considered strategic for executing their activity. A research model has been developed to evaluate the factors that influence the intention to use cloud computing, combining the variables found in the technology acceptance model (TAM) with other external variables such as top management support, training, communication, organization size, and technological complexity. Data compiled from 150 companies in Andalusia (Spain) are used to test the formulated hypotheses. The results of this study reflect which critical factors should be considered and how they are interrelated. They also show the organizational demands that must be considered by those companies wishing to implement a real management model adapted to the digital economy, especially those related to cloud computing.

  5. Computation of Normal Conducting and Superconducting Linear Accelerator (LINAC) Availabilities

    Energy Technology Data Exchange (ETDEWEB)

    Haire, M.J.

    2000-07-11

    A brief study was conducted to roughly estimate the availability of a superconducting (SC) linear accelerator (LINAC) as compared to a normal conducting (NC) one. Potentially, SC radio frequency cavities have substantial reserve capability, which allows them to compensate for failed cavities, thus increasing the availability of the overall LINAC. In the initial SC design, there is a klystron and associated equipment (e.g., power supply) for every cavity of an SC LINAC. On the other hand, a single klystron may service eight cavities in the NC LINAC. This study modeled that portion of the Spallation Neutron Source LINAC (between 200 and 1,000 MeV) that is initially proposed for conversion from NC to SC technology. Equipment common to both designs was not evaluated. Tabular fault-tree calculations and event-driven simulation (EDS) computer computations were performed. The estimated gain in availability when using the SC option ranges from 3 to 13%, depending on equipment conditions and spatial separation requirements. The availability of an NC LINAC is estimated to be 83%. Tabular fault-tree calculations and EDS computer modeling gave the same 83% answer to within one-tenth of a percent for the NC case. Tabular fault-tree calculations of the availability of the SC LINAC (where a klystron and associated equipment drive a single cavity) give 97%, whereas EDS computer calculations give 96%, a disagreement of only 1%. This result may be somewhat fortuitous because of the limitations of tabular fault-tree calculations. For example, tabular fault-tree calculations cannot handle spatial effects (separation distance between failures), equipment network configurations, and some failure combinations. EDS computer modeling of various equipment configurations was examined. When there is a klystron and associated equipment for every cavity, adjacent-cavity failure can be tolerated, and the SC availability was estimated to be 96%. SC availability decreased as…
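
    The fault-tree figures quoted above rest on standard availability arithmetic. A hedged sketch of that arithmetic (the unit availability and count below are invented, not the study's data): a series system requires every unit to be up, while klystron-per-cavity redundancy lets the system ride through a single failure.

      /* Textbook availability arithmetic; numbers are illustrative. */
      #include <stdio.h>
      #include <math.h>

      int main(void)
      {
          double a = 0.999;   /* hypothetical availability of one cavity unit */
          int n = 96;         /* hypothetical number of units in series       */

          /* Series system: every unit must work. */
          double a_series = pow(a, n);

          /* One spare tolerated: all up, or exactly one of n down. */
          double a_spare = pow(a, n) + n * pow(a, n - 1) * (1.0 - a);

          printf("series: %.4f  one-failure-tolerant: %.4f\n",
                 a_series, a_spare);
          return 0;
      }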

  6. Computer modeling of test particle acceleration at oblique shocks

    Science.gov (United States)

    Decker, Robert B.

    1988-01-01

    This evaluation of the basic techniques and illustrative results of numerical codes for modeling charged-particle acceleration at oblique, fast-mode collisionless shocks emphasizes the treatment of ions as test particles, with particle dynamics calculated by numerical integration along exact phase-space orbits. Attention is given to the acceleration of particles at planar, infinitesimally thin shocks, as well as to plasma simulations in which low-energy ions are injected and accelerated at quasi-perpendicular shocks with internal structure.

  7. Accelerating Computation of the Unit Commitment Problem (Presentation)

    Energy Technology Data Exchange (ETDEWEB)

    Hummon, M.; Barrows, C.; Jones, W.

    2013-10-01

    Production cost models (PCMs) simulate power system operation at hourly (or higher) resolution. While computation times often extend into multiple days, the sequential nature of PCMs makes parallelism difficult. We exploit the persistence of unit commitment decisions to select partition boundaries for simulation horizon decomposition and parallel computation. Partitioned simulations are benchmarked against sequential solutions for optimality and computation time.

  8. Cloud Computing and Validated Learning for Accelerating Innovation in IoT

    Science.gov (United States)

    Suciu, George; Todoran, Gyorgy; Vulpe, Alexandru; Suciu, Victor; Bulca, Cristina; Cheveresan, Romulus

    2015-01-01

    Innovation in the Internet of Things (IoT) requires more than just the creation of technology and the use of cloud computing or big data platforms. It requires accelerated commercialization, or what are aptly called go-to-market processes. To accelerate successfully, companies need a new type of product development, the so-called validated learning process.…

  9. GpuCV : a GPU-accelerated framework for image processing and computer vision

    OpenAIRE

    ALLUSSE, Yannick; Horain, Patrick; Agarwal, Ankit; Saipriyadarshan, Cindula

    2008-01-01

    International audience; This paper briefly describes the state of the art of accelerating image processing with graphics hardware (GPU) and discusses some of its caveats. It then describes GpuCV, an open-source multi-platform library for GPU-accelerated image processing and Computer Vision operators and applications. It is meant for computer vision scientists not familiar with GPU technologies. GpuCV is designed to be compatible with the popular OpenCV library by offering GPU-accelera...

  10. Computational model of sustained acceleration effects on human cognitive performance.

    Science.gov (United States)

    McKinley, Richard A; Gallimore, Jennie J

    2013-08-01

    Extreme acceleration maneuvers encountered in modern agile fighter aircraft can wreak havoc on human physiology, thereby significantly influencing cognitive task performance. As oxygen content declines under acceleration stress, the activity of high order cortical tissue reduces to ensure sufficient metabolic resources are available for critical life-sustaining autonomic functions. Consequently, cognitive abilities reliant on these affected areas suffer significant performance degradations. The goal was to develop and validate a model capable of predicting human cognitive performance under acceleration stress. Development began with creation of a proportional control cardiovascular model that produced predictions of several hemodynamic parameters, including eye-level blood pressure and regional cerebral oxygen saturation (rSo2). An algorithm was derived to relate changes in rSo2 within specific brain structures to performance on cognitive tasks that require engagement of different brain areas. Data from the "precision timing" experiment were then used to validate the model predicting cognitive performance as a function of G(z) profile. The following are value ranges. Results showed high agreement between the measured and predicted values for the rSo2 (correlation coefficient: 0.7483-0.8687; linear best-fit slope: 0.5760-0.9484; mean percent error: 0.75-3.33) and cognitive performance models (motion inference task--correlation coefficient: 0.7103-0.9451; linear best-fit slope: 0.7416-0.9144; mean percent error: 6.35-38.21; precision timing task--correlation coefficient: 0.6856-0.9726; linear best-fit slope: 0.5795-1.027; mean percent error: 6.30-17.28). The evidence suggests that the model is capable of accurately predicting cognitive performance of simplistic tasks under high acceleration stress.

  11. Cloud computing approaches to accelerate drug discovery value chain.

    Science.gov (United States)

    Garg, Vibhav; Arora, Suchir; Gupta, Chitra

    2011-12-01

    Continued advancements in technology have helped high-throughput screening (HTS) evolve from a linear to a parallel approach by performing system-level screening. Advanced experimental methods used for HTS at various steps of drug discovery (i.e., target identification, target validation, lead identification and lead validation) can generate data of the order of terabytes. As a consequence, there is a pressing need to store, manage, mine and analyze this data to identify informational tags. This need in turn challenges computer scientists to offer matching hardware and software infrastructure, while managing the varying degree of desired computational power. Therefore, the potential of "On-Demand Hardware" and "Software as a Service (SAAS)" delivery mechanisms cannot be denied. This on-demand computing, largely referred to as Cloud Computing, is now transforming drug discovery research. Also, integration of Cloud computing with parallel computing is certainly expanding its footprint in the life sciences community. The speed, efficiency and cost effectiveness have made cloud computing a 'good-to-have' tool for researchers, providing them significant flexibility and allowing them to focus on the 'what' of science and not the 'how'. Once it reaches maturity, the Discovery-Cloud would be best placed to manage drug discovery and clinical development data generated using advanced HTS techniques, thereby supporting the vision of personalized medicine.

  12. Accelerating patch-based directional wavelets with multicore parallel computing in compressed sensing MRI.

    Science.gov (United States)

    Li, Qiyue; Qu, Xiaobo; Liu, Yunsong; Guo, Di; Lai, Zongying; Ye, Jing; Chen, Zhong

    2015-06-01

    Compressed sensing MRI (CS-MRI) is a promising technology for accelerating magnetic resonance imaging. Both improving image quality and reducing computation time are important for this technology. Recently, a patch-based directional wavelet (PBDW) has been applied in CS-MRI to improve edge reconstruction. However, this method is time consuming since it involves extensive computations, including geometric direction estimation and numerous iterations of the wavelet transform. To accelerate the computations of PBDW, we propose a general parallelization of patch-based processing that takes advantage of multicore processors. Additionally, two pertinent optimizations, excluding smooth patches and pre-arranged insertion sort, that make use of sparsity in MR images are also proposed. Simulation results demonstrate that the acceleration factor with the parallel architecture of PBDW approaches the number of central processing unit cores, and that the pertinent optimizations are also effective in achieving further acceleration. The proposed approaches allow compressed sensing MRI reconstruction to be accomplished within several seconds.
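
    The general parallelization the abstract describes, independent patches distributed across cores, maps naturally onto an OpenMP loop. A minimal sketch, assuming hypothetical patch dimensions and a placeholder for the per-patch work (this is not the authors' code):

      /* Hedged sketch of multicore patch-parallel processing with OpenMP. */
      #include <stdio.h>
      #include <omp.h>

      #define N_PATCHES 4096
      #define PATCH_LEN 64                 /* e.g., an 8x8 patch, flattened */

      static float patches[N_PATCHES][PATCH_LEN];

      /* Placeholder for direction estimation + wavelet transform of a patch. */
      static void process_patch(float *patch)
      {
          for (int i = 0; i < PATCH_LEN; i++)
              patch[i] *= 0.5f;            /* stand-in for real per-patch work */
      }

      int main(void)
      {
          /* Patches are independent, so one parallel-for distributes them;
             dynamic scheduling absorbs uneven per-patch cost (e.g., when
             smooth patches are skipped early). */
          #pragma omp parallel for schedule(dynamic)
          for (int p = 0; p < N_PATCHES; p++)
              process_patch(patches[p]);

          printf("processed %d patches on up to %d threads\n",
                 N_PATCHES, omp_get_max_threads());
          return 0;
      }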

  13. Acceleration of matrix element computations for precision measurements

    CERN Document Server

    Brandt, Oleg; Wang, Michael H L S; Ye, Zhenyu

    2014-01-01

    The matrix element technique provides a superior statistical sensitivity for precision measurements of important parameters at hadron colliders, such as the mass of the top quark or the cross section for the production of Higgs bosons. The main practical limitation of the technique is its high computational demand. Using the concrete example of the top quark mass, we present two approaches to reduce the computation time of the technique by two orders of magnitude. First, we utilize low-discrepancy sequences for numerical Monte Carlo integration in conjunction with a dedicated estimator of numerical uncertainty, a novelty in the context of the matrix element technique. Second, we utilize a new approach that factorizes the overall jet energy scale from the matrix element computation, a novelty in the context of top quark mass measurements. The utilization of low-discrepancy sequences is of particular general interest, as it is universally applicable to Monte Carlo integration, and independent of the computing e...
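
    The first of the two approaches, low-discrepancy sequences for Monte Carlo integration, can be illustrated in a few lines. The sketch below (the integrand and sample count are arbitrary, not the authors' matrix element integrator) uses the base-2 van der Corput sequence:

      /* Quasi-Monte Carlo integration with a low-discrepancy sequence. */
      #include <stdio.h>
      #include <math.h>

      static const double PI = 3.14159265358979323846;

      /* Radical inverse of n in base 2: the van der Corput sequence. */
      static double van_der_corput(unsigned n)
      {
          double x = 0.0, f = 0.5;
          while (n) {
              x += f * (n & 1u);
              n >>= 1;
              f *= 0.5;
          }
          return x;
      }

      int main(void)
      {
          const unsigned N = 1u << 16;
          double sum = 0.0;
          for (unsigned i = 1; i <= N; i++)
              sum += sin(PI * van_der_corput(i));
          /* Exact value of the integral over [0,1] is 2/pi = 0.63662... */
          printf("QMC estimate: %.5f\n", sum / N);
          return 0;
      }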

  14. Lua(Jit) for computing accelerator beam physics

    CERN Document Server

    CERN. Geneva

    2016-01-01

    As mentioned in the 2nd developers meeting, I would like to open the debate with a special presentation on another language - Lua, and a tremendous technology - LuaJit. Lua is much less known at CERN, but it is very simple, much smaller than Python, and its JIT is extremely performant. The language is a dynamic scripting language, easy to learn and easy to embed in applications. I will show how we use it in HPC for accelerator beam physics as a replacement for C, C++, Fortran and Python, with some benchmarks versus Python, PyPy4 and C/C++.

  15. Accelerating Scientific Discovery Through Computation and Visualization III. Tight-Binding Wave Functions for Quantum Dots.

    Science.gov (United States)

    Sims, James S; George, William L; Griffin, Terence J; Hagedorn, John G; Hung, Howard K; Kelso, John T; Olano, Marc; Peskin, Adele P; Satterfield, Steven G; Terrill, Judith Devaney; Bryant, Garnett W; Diaz, Jose G

    2008-01-01

    This is the third in a series of articles that describe, through examples, how the Scientific Applications and Visualization Group (SAVG) at NIST has utilized high performance parallel computing, visualization, and machine learning to accelerate scientific discovery. In this article we focus on the use of high performance computing and visualization for simulations of nanotechnology.

  16. Accelerating Scientific Discovery Through Computation and Visualization III. Tight-Binding Wave Functions for Quantum Dots

    OpenAIRE

    2008-01-01

    This is the third in a series of articles that describe, through examples, how the Scientific Applications and Visualization Group (SAVG) at NIST has utilized high performance parallel computing, visualization, and machine learning to accelerate scientific discovery. In this article we focus on the use of high performance computing and visualization for simulations of nanotechnology.

  17. Accelerating Computation of DNA Sequence Alignment in Distributed Environment

    Science.gov (United States)

    Guo, Tao; Li, Guiyang; Deaton, Russel

    Sequence similarity and alignment are among the most important operations in computational biology. However, analyzing large sets of DNA sequences seems to be impractical on a regular PC. Using multiple threads with the JavaParty mechanism, this project successfully extended the capabilities of regular Java to a distributed environment for the simulation of DNA computation. With the aid of JavaParty and a multi-threaded design, the results of this study demonstrated that the modified regular Java program could perform parallel computing without using RMI or socket communication. In this paper, an efficient method for modeling and comparing DNA sequences with dynamic programming and JavaParty is first proposed. Additionally, results of this method in a distributed environment are discussed.

  18. Computational algorithms for multiphase magnetohydrodynamics and applications to accelerator targets

    Directory of Open Access Journals (Sweden)

    R.V. Samulyak

    2010-01-01

    Full Text Available An interface-tracking numerical algorithm for the simulation of magnetohydrodynamic multiphase/free surface flows in the low-magnetic-Reynolds-number approximation of (Samulyak R., Du J., Glimm J., Xu Z., J. Comp. Phys., 2007, 226, 1532) is described. The algorithm has been implemented in the multi-physics code FronTier and used for the simulation of MHD processes in liquids and weakly ionized plasmas. In this paper, numerical simulations of a liquid mercury jet entering a strong, nonuniform magnetic field and interacting with a powerful proton pulse have been performed and compared with experiments. Such a mercury jet is a prototype for the proposed Muon Collider/Neutrino Factory, a future particle accelerator. Simulations demonstrate the elliptic distortion of the mercury jet as it enters the magnetic solenoid at a small angle to the magnetic axis, jet-surface instabilities (filamentation) induced by the interaction with proton pulses, and the stabilizing effect of the magnetic field.

  19. Accelerating Missile Threat Engagement Simulations Using Personal Computer Graphics Cards

    Science.gov (United States)

    2005-03-01

    …personal computer on the market today, have reached a level of power and programmability that enables them to be used as high-performance stream… expected to continue at this rate for another five years, perhaps achieving tera-FLOP performance by 2005 [Mac03].

  20. Unified Compression-Based Acceleration of Edit-Distance Computation

    CERN Document Server

    Hermelin, Danny; Landau, Shir; Weimann, Oren

    2010-01-01

    The edit distance problem is a classical fundamental problem in computer science in general, and in combinatorial pattern matching in particular. The standard dynamic programming solution for this problem computes the edit distance between a pair of strings of total length O(N) in O(N^2) time. To date, this quadratic upper bound has never been substantially improved for general strings. However, there are known techniques for breaking this bound in case the strings are known to compress well under a particular compression scheme. The basic idea is to first compress the strings, and then to compute the edit distance between the compressed strings. As it turns out, practically all known o(N^2) edit-distance algorithms work, in some sense, under the same paradigm described above. It is therefore natural to ask whether there is a single edit-distance algorithm that works for strings which are compressed under any compression scheme. A rephrasing of this question is to ask whether a single algorithm can explo...
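
    For reference, the standard O(N^2) dynamic-programming baseline the abstract starts from, in a plain C form (the length limit and test strings are illustrative):

      /* Classic Wagner-Fischer edit distance; reference implementation. */
      #include <string.h>
      #include <stdio.h>

      #define MAXLEN 1024   /* assumed maximum string length for this sketch */

      static int min3(int a, int b, int c)
      {
          int m = a < b ? a : b;
          return m < c ? m : c;
      }

      int edit_distance(const char *s, const char *t)
      {
          int n = (int)strlen(s), m = (int)strlen(t);
          static int d[MAXLEN + 1][MAXLEN + 1];

          for (int i = 0; i <= n; i++) d[i][0] = i;   /* deletions  */
          for (int j = 0; j <= m; j++) d[0][j] = j;   /* insertions */

          for (int i = 1; i <= n; i++)
              for (int j = 1; j <= m; j++)
                  d[i][j] = min3(d[i-1][j] + 1,                     /* delete     */
                                 d[i][j-1] + 1,                     /* insert     */
                                 d[i-1][j-1] + (s[i-1] != t[j-1])); /* substitute */
          return d[n][m];
      }

      int main(void)
      {
          printf("%d\n", edit_distance("kitten", "sitting"));  /* prints 3 */
          return 0;
      }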

  1. The computer simulation of laser proton acceleration for hadron therapy

    Science.gov (United States)

    Lykov, Vladimir; Baydin, Grigory

    2008-11-01

    Ion acceleration by intense ultra-short laser pulses is of interest in view of its possible applications to proton radiography, production of medical isotopes, and hadron therapy. The 3D relativistic PIC code LegoLPI has been developed at RFNC-VNIITF for modeling the interaction of intense laser pulses with plasma. LegoLPI simulations were carried out to find the optimal conditions for generating proton beams with the parameters necessary for hadron therapy. The simulations show that a two-layer foil of aluminum and polyethylene, with thicknesses of 100 nm and 50 nm respectively, may be optimal. The maximum efficiency of laser energy conversion into 200 MeV protons is achieved by irradiating these foils with a 30 fs laser pulse of intensity about 2×10^22 W/cm^2. It is concluded that lasers with a peak power of about 0.5-1 PW and an average power of 0.5-1 kW are needed to generate proton beams with the parameters necessary for proton therapy.

  2. Computer control of large accelerators, design concepts and methods

    Science.gov (United States)

    Beck, F.; Gormley, M.

    1985-03-01

    Unlike most of the specialities treated in this volume, control system design is still an art, not a science. This presentation is an attempt to produce a primer for prospective practitioners of this art. A large modern accelerator requires a comprehensive control system for commissioning, machine studies, and day-to-day operation. Faced with the requirement to design a control system for such a machine, the control system architect has a bewildering array of technical devices and techniques at his disposal, and it is our aim in the following chapters to lead him through the characteristics of the problems he will have to face and the practical alternatives available for solving them. We emphasize good system architecture using commercially available hardware and software components, but in addition we discuss the actual control strategies which are to be implemented, since it is at the point of deciding what facilities shall be available that the complexity of the control system and its cost are implicitly decided.

  3. Modern hardware architectures accelerate porous media flow computations

    Science.gov (United States)

    Kulczewski, Michal; Kurowski, Krzysztof; Kierzynka, Michal; Dohnalik, Marek; Kaczmarczyk, Jan; Borujeni, Ali Takbiri

    2012-05-01

    Investigation of rock properties, particularly porosity and permeability, which determine transport characteristics, is crucial to reservoir engineering. Nowadays, micro-tomography (micro-CT) methods allow a wealth of petro-physical properties to be obtained. The micro-CT method facilitates visualization of pore structures and acquisition of the total porosity factor, determined by stitching together 2D slices of scanned rock and applying a proper absorption cut-off point. Proper segmentation of the pore representation in 3D is important for solving the permeability of porous media. This factor has recently been determined by means of Computational Fluid Dynamics (CFD), a popular method for analyzing problems related to fluid flows that takes advantage of numerical methods and constantly growing computing power. The recent advent of novel multi-core, many-core and graphics processing unit (GPU) hardware architectures allows scientists to benefit even more from parallel processing and new built-in features. The high level of parallel scalability offers both a decrease in time-to-solution and greater accuracy, top factors in reservoir engineering. This paper aims to present research results related to fluid flow simulations, particularly solving the total porosity and permeability of porous media, taking advantage of modern hardware architectures. In our approach total porosity is calculated by means of general-purpose computing on multiple GPUs. This application stitches together 2D slices of scanned rock and, by means of a marching tetrahedra algorithm, creates a 3D representation of the pores and calculates the total porosity. Experimental results are compared with data obtained via other popular methods, including Nuclear Magnetic Resonance (NMR), helium porosity and nitrogen permeability tests. CFD simulations are then performed on a large-scale high-performance hardware architecture to solve the flow and permeability of porous media. In our experiments we used Lattice Boltzmann…

  4. Computational Tools for Accelerating Carbon Capture Process Development

    Energy Technology Data Exchange (ETDEWEB)

    Miller, David

    2013-01-01

    The goals of the work reported are: to develop new computational tools and models to enable industry to more rapidly develop and deploy new advanced energy technologies; to demonstrate the capabilities of the CCSI Toolset on non-proprietary case studies; and to deploy the CCSI Toolset to industry. Challenges of simulating carbon capture (and other) processes include: dealing with multiple scales (particle, device, and whole process scales); integration across scales; verification, validation, and uncertainty; and decision support. The tools cover: risk analysis and decision making; validated, high-fidelity CFD; high-resolution filtered sub-models; process design and optimization tools; advanced process control and dynamics; process models; basic data sub-models; and cross-cutting integration tools.

  5. Modeling Strategic Use of Human Computer Interfaces with Novel Hidden Markov Models

    Directory of Open Access Journals (Sweden)

    Laura Jane Mariano

    2015-07-01

    Full Text Available Immersive software tools are virtual environments designed to give their users an augmented view of real-world data and ways of manipulating that data. As virtual environments, every action users make while interacting with these tools can be carefully logged, as can the state of the software and the information it presents to the user, giving these actions context. This data provides a high-resolution lens through which dynamic cognitive and behavioral processes can be viewed. In this report, we describe new methods for the analysis and interpretation of such data, utilizing a novel implementation of the Beta Process Hidden Markov Model (BP-HMM) for analysis of software activity logs. We further report the results of a preliminary study designed to establish the validity of our modeling approach. A group of 20 participants were asked to play a simple computer game, instrumented to log every interaction with the interface. Participants had no previous experience with the game’s functionality or rules, so the activity logs collected during their naïve interactions capture patterns of exploratory behavior and skill acquisition as they attempted to learn the rules of the game. Pre- and post-task questionnaires probed for self-reported styles of problem solving, as well as task engagement, difficulty, and workload. We jointly modeled the activity log sequences collected from all participants using the BP-HMM approach, identifying a global library of activity patterns representative of the collective behavior of all the participants. Analyses show systematic relationships between both pre- and post-task questionnaires, self-reported approaches to analytic problem solving, and metrics extracted from the BP-HMM decomposition. Overall, we find that this novel approach to decomposing unstructured behavioral data within software environments provides a sensible means for understanding how users learn to integrate software functionality for strategic

  6. Accelerate!

    Science.gov (United States)

    Kotter, John P

    2012-11-01

    The old ways of setting and implementing strategy are failing us, writes the author of Leading Change, in part because we can no longer keep up with the pace of change. Organizational leaders are torn between trying to stay ahead of increasingly fierce competition and needing to deliver this year's results. Although traditional hierarchies and managerial processes--the components of a company's "operating system"--can meet the daily demands of running an enterprise, they are rarely equipped to identify important hazards quickly, formulate creative strategic initiatives nimbly, and implement them speedily. The solution Kotter offers is a second system--an agile, networklike structure--that operates in concert with the first to create a dual operating system. In such a system the hierarchy can hand off the pursuit of big strategic initiatives to the strategy network, freeing itself to focus on incremental changes to improve efficiency. The network is populated by employees from all levels of the organization, giving it organizational knowledge, relationships, credibility, and influence. It can liberate information from silos with ease. It has a dynamic structure free of bureaucratic layers, permitting a level of individualism, creativity, and innovation beyond the reach of any hierarchy. The network's core is a guiding coalition that represents each level and department in the hierarchy, with a broad range of skills. Its drivers are members of a "volunteer army" who are energized by and committed to the coalition's vividly formulated, high-stakes vision and strategy. Kotter has helped eight organizations, public and private, build dual operating systems over the past three years. He predicts that such systems will lead to long-term success in the 21st century--for shareholders, customers, employees, and companies themselves.

  7. Accelerating Computation of Large Biological Datasets using MapReduce Framework.

    Science.gov (United States)

    Wang, Chao; Dai, Dong; Li, Xi; Wang, Aili; Zhou, Xuehai

    2016-04-05

    The maximal information coefficient (MIC) has been proposed to discover relationships and associations between pairs of variables. It poses significant challenges for bioinformatics scientists to accelerate the MIC calculation, especially in genome sequencing and biological annotation. In this paper we explore a parallel approach which uses the MapReduce framework to improve the computing efficiency and throughput of the MIC computation. The acceleration system includes biological data storage on HDFS, preprocessing algorithms, a distributed memory cache mechanism, and the partitioning of MapReduce jobs. Based on this acceleration approach, we extend the traditional two-variable algorithm to a multiple-variable algorithm. The experimental results show that our parallel solution provides a linear speedup compared with the original algorithm without affecting correctness or sensitivity.

  8. Adaptation and optimization of basic operations for an unstructured mesh CFD algorithm for computation on massively parallel accelerators

    Science.gov (United States)

    Bogdanov, P. B.; Gorobets, A. V.; Sukov, S. A.

    2013-08-01

    The design of efficient algorithms for large-scale gas dynamics computations with hybrid (heterogeneous) computing systems whose high performance relies on massively parallel accelerators is addressed. A high-order accurate finite volume algorithm with polynomial reconstruction on unstructured hybrid meshes is used to compute compressible gas flows in domains of complex geometry. The basic operations of the algorithm are implemented in detail for massively parallel accelerators, including AMD and NVIDIA graphics processing units (GPUs). Major optimization approaches and a computation transfer technique are covered. The underlying programming tool is the Open Computing Language (OpenCL) standard, which runs on accelerators of various architectures, both existing and emerging.

  9. Improving Quality of Service and Reducing Power Consumption with WAN accelerator in Cloud Computing Environments

    Directory of Open Access Journals (Sweden)

    Shin-ichi Kuribayashi

    2013-02-01

    Full Text Available The widespread use of cloud computing services is expected to degrade Quality of Service and to increase the power consumption of ICT devices, since the distance to a server becomes longer than before. Migration of virtual machines over a wide area can solve many problems, such as load balancing and power saving, in cloud computing environments. This paper proposes to dynamically apply a WAN accelerator within the network when a virtual machine is moved to a distant center, in order to prevent degradation in performance after live migration of virtual machines over a wide area. mSCTP-based data transfer using different TCP connections before and after migration is proposed in order to use a currently available WAN accelerator. This paper does not consider the performance degradation of live migration itself. The paper then proposes to reduce the power consumption of ICT devices by installing WAN accelerators proactively as part of cloud resources and by temporarily increasing the packet transfer rate of the communication link. It is demonstrated that the power consumption with a WAN accelerator could be reduced to one-tenth of that without one.

  10. Ultrasound window-modulated compounding Nakagami imaging: Resolution improvement and computational acceleration for liver characterization.

    Science.gov (United States)

    Ma, Hsiang-Yang; Lin, Ying-Hsiu; Wang, Chiao-Yin; Chen, Chiung-Nien; Ho, Ming-Chih; Tsui, Po-Hsiang

    2016-08-01

    Ultrasound Nakagami imaging is an attractive method for visualizing changes in envelope statistics. Window-modulated compounding (WMC) Nakagami imaging was reported to improve image smoothness. The sliding window technique is typically used for constructing ultrasound parametric and Nakagami images. Using a large window overlap ratio may improve the WMC Nakagami image resolution but reduces computational efficiency. Therefore, the objectives of this study were: (i) to explore the effects of the window overlap ratio on the resolution and smoothness of WMC Nakagami images; (ii) to propose a fast algorithm based on the convolution operator (FACO) to accelerate WMC Nakagami imaging. Computer simulations and preliminary clinical tests on liver fibrosis samples (n=48) were performed to validate the FACO-based WMC Nakagami imaging. The results demonstrated that the width of the autocorrelation function and the parameter distribution of the WMC Nakagami image narrow as the window overlap ratio increases. One-pixel shifting (i.e., sliding the window on the image data in steps of one pixel for parametric imaging) as the maximum overlap ratio significantly improves the WMC Nakagami image quality. Concurrently, the proposed FACO method combined with a computational platform that optimizes the matrix computation can accelerate WMC Nakagami imaging, allowing the detection of liver fibrosis-induced changes in envelope statistics. FACO-accelerated WMC Nakagami imaging is a new-generation Nakagami imaging technique with improved image quality and fast computation.

  11. Experimental, Theoretical and Computational Studies of Plasma-Based Concepts for Future High Energy Accelerators

    Energy Technology Data Exchange (ETDEWEB)

    Joshi, Chan [Univ. of California, Los Angeles, CA (United States); Mori, W. [Univ. of California, Los Angeles, CA (United States)

    2013-10-21

    This is the final report on DOE grant number DE-FG02-92ER40727, titled “Experimental, Theoretical and Computational Studies of Plasma-Based Concepts for Future High Energy Accelerators.” During this grant period the UCLA program on Advanced Plasma Based Accelerators, headed by Professor C. Joshi, made many key scientific advances and trained a generation of students, many of whom have stayed in this research field and even started research programs of their own. In this final report, however, we focus on the last three years of the grant and report on the scientific progress made in each of the four tasks listed under this grant. The four tasks are: Plasma Wakefield Accelerator Research at FACET, SLAC National Accelerator Laboratory; In-House Research at UCLA’s Neptune and 20 TW Laser Laboratories; Laser-Wakefield Acceleration (LWFA) in the Self-Guided Regime: Experiments at the Callisto Laser at LLNL; and Theory and Simulations. Major scientific results have been obtained in each of the four tasks described in this report. These have led to publications in prestigious scientific journals, the graduation and continued training of high-quality Ph.D. students, and have kept the U.S. at the forefront of the plasma-based accelerator research field.

  12. Effects of different computer typing speeds on acceleration and peak contact pressure of the fingertips during computer typing.

    Science.gov (United States)

    Yoo, Won-Gyu

    2015-01-01

    [Purpose] This study examined the effects of different computer typing speeds on acceleration and peak contact pressure of the fingertips during computer typing. [Subjects] Twenty-one male computer workers voluntarily consented to participate in this study. They consisted of 7 workers who could type 200-300 characters/minute, 7 workers who could type 300-400 characters/minute, and 7 workers who could type 400-500 characters/minute. [Methods] The acceleration and peak contact pressure of the fingertips were measured for the different typing-speed groups using an accelerometer and a CONFORMat system. [Results] Fingertip contact pressure was higher in the high typing speed group than in the low and medium typing speed groups. Fingertip acceleration was likewise higher in the high typing speed group than in the low and medium typing speed groups. [Conclusion] The results of the present study indicate that a fast typing speed causes continuous pressure stress to be applied to the fingers, thereby creating pain in the fingers.

  13. Acceleration of Radiance for Lighting Simulation by Using Parallel Computing with OpenCL

    Energy Technology Data Exchange (ETDEWEB)

    Zuo, Wangda; McNeil, Andrew; Wetter, Michael; Lee, Eleanor

    2011-09-06

    We report on the acceleration of annual daylighting simulations for fenestration systems in the Radiance ray-tracing program. The algorithm was optimized to reduce both the redundant data input/output operations and the floating-point operations. To further accelerate the simulation speed, the calculation for matrix multiplications was implemented using parallel computing on a graphics processing unit. We used OpenCL, which is a cross-platform parallel programming language. Numerical experiments show that the combination of the above measures can speed up the annual daylighting simulations 101.7 times or 28.6 times when the sky vector has 146 or 2306 elements, respectively.
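
    As a hedged sketch of the pattern described, offloading a matrix product to a GPU via OpenCL, the minimal host-plus-kernel program below multiplies a small matrix by a vector, one row per work item. It is illustrative only (not Radiance code; sizes and names are invented), with error handling omitted for brevity.

      /* Minimal OpenCL matrix-vector multiply; illustrative sketch only. */
      #include <CL/cl.h>
      #include <stdio.h>

      static const char *src =
      "__kernel void matvec(__global const float *A, __global const float *x,\n"
      "                     __global float *y, const int K)\n"
      "{\n"
      "    int row = get_global_id(0);\n"
      "    float acc = 0.0f;\n"
      "    for (int k = 0; k < K; k++) acc += A[row * K + k] * x[k];\n"
      "    y[row] = acc;\n"
      "}\n";

      int main(void)
      {
          enum { M = 4, K = 3 };
          float A[M * K], x[K], y[M];
          for (int i = 0; i < M * K; i++) A[i] = (float)i;
          for (int k = 0; k < K; k++) x[k] = 1.0f;

          cl_platform_id plat; cl_device_id dev;
          clGetPlatformIDs(1, &plat, NULL);
          clGetDeviceIDs(plat, CL_DEVICE_TYPE_DEFAULT, 1, &dev, NULL);
          cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
          cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, NULL);

          cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
          clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
          cl_kernel kern = clCreateKernel(prog, "matvec", NULL);

          cl_mem dA = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                                     sizeof A, A, NULL);
          cl_mem dx = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                                     sizeof x, x, NULL);
          cl_mem dy = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, sizeof y, NULL, NULL);

          int k = K;
          clSetKernelArg(kern, 0, sizeof dA, &dA);
          clSetKernelArg(kern, 1, sizeof dx, &dx);
          clSetKernelArg(kern, 2, sizeof dy, &dy);
          clSetKernelArg(kern, 3, sizeof k, &k);

          size_t gws = M;                       /* one work item per row */
          clEnqueueNDRangeKernel(q, kern, 1, NULL, &gws, NULL, 0, NULL, NULL);
          clEnqueueReadBuffer(q, dy, CL_TRUE, 0, sizeof y, y, 0, NULL, NULL);

          for (int i = 0; i < M; i++)
              printf("%g\n", y[i]);             /* prints 3, 12, 21, 30 */
          return 0;
      }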

  14. Acceleration of the matrix multiplication of Radiance three phase daylighting simulations with parallel computing on heterogeneous hardware of personal computer

    Energy Technology Data Exchange (ETDEWEB)

    Zuo, Wangda [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); McNeil, Andrew [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Wetter, Michael [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Lee, Eleanor S. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2013-05-23

    Building designers are increasingly relying on complex fenestration systems to reduce the energy consumed for lighting and HVAC in low-energy buildings. Radiance, a lighting simulation program, has been used to conduct daylighting simulations for complex fenestration systems. Depending on the configuration, the simulation can take hours or even days on a personal computer. This paper describes how to accelerate the matrix multiplication portion of a Radiance three-phase daylight simulation by conducting parallel computing on the heterogeneous hardware of a personal computer. The algorithm was optimized and the computational part was implemented in parallel using OpenCL. The speed of the new approach was evaluated using various daylighting simulation cases on a multicore central processing unit and a graphics processing unit. Based on the measurements and analysis of the time usage of the Radiance daylighting simulation, further speedups can be achieved by using fast I/O devices and storing the data in a binary format.

  15. Accelerated multidimensional radiofrequency pulse design for parallel transmission using concurrent computation on multiple graphics processing units.

    Science.gov (United States)

    Deng, Weiran; Yang, Cungeng; Stenger, V Andrew

    2011-02-01

    Multidimensional radiofrequency (RF) pulses are of current interest because of their promise for improving high-field imaging and for optimizing parallel transmission methods. One major drawback is that the computation time of numerically designed multidimensional RF pulses increases rapidly with their resolution and number of transmitters. This is critical because the construction of multidimensional RF pulses often needs to be in real time. The use of graphics processing units for computations is a recent approach for accelerating image reconstruction applications. We propose the use of graphics processing units for the design of multidimensional RF pulses including the utilization of parallel transmitters. Using a desktop computer with four NVIDIA Tesla C1060 computing processors, we found acceleration factors on the order of 20 for standard eight-transmitter two-dimensional spiral RF pulses with a 64 × 64 excitation resolution and a 10-μsec dwell time. We also show that even greater acceleration factors can be achieved for more complex RF pulses. Copyright © 2010 Wiley-Liss, Inc.

  16. Convergence acceleration for vector sequences and applications to computational fluid dynamics

    Science.gov (United States)

    Sidi, Avram; Celestina, Mark L.

    1990-01-01

    Some recent developments in convergence-acceleration methods for vector sequences are reviewed. The methods considered are the minimal polynomial extrapolation, the reduced rank extrapolation, and the modified minimal polynomial extrapolation. The vector sequences to be accelerated are those obtained from the iterative solution of linear or nonlinear systems of equations. The convergence and stability properties of these methods, as well as different ways of numerical implementation, are discussed in detail. Based on the convergence and stability results, strategies that are useful in practical applications are suggested. Two applications to computational fluid mechanics involving the three-dimensional Euler equations for ducted and external flows are considered. The numerical results demonstrate the usefulness of the methods in accelerating the convergence of the time-marching techniques in the solution of steady-state problems.
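
    The reviewed methods (MPE, RRE, MMPE) extrapolate vector sequences; as a compact stand-in for the underlying idea, the sketch below applies the classical scalar Aitken delta-squared transformation, a simpler relative of these methods, to the slowly converging alternating series for ln 2.

      /* Aitken delta-squared acceleration of a slowly converging series. */
      #include <stdio.h>
      #include <math.h>

      int main(void)
      {
          enum { N = 12 };
          double s[N], sum = 0.0;

          /* Partial sums of 1 - 1/2 + 1/3 - ... -> ln 2. */
          for (int n = 0; n < N; n++) {
              sum += ((n % 2) ? -1.0 : 1.0) / (n + 1);
              s[n] = sum;
          }

          /* s'_n = s_n - (s_{n+1} - s_n)^2 / (s_{n+2} - 2 s_{n+1} + s_n). */
          printf("n   partial sum   accelerated\n");
          for (int n = 0; n < N - 2; n++) {
              double d1 = s[n + 1] - s[n];
              double d2 = s[n + 2] - 2.0 * s[n + 1] + s[n];
              printf("%2d  %.8f    %.8f\n", n, s[n], s[n] - d1 * d1 / d2);
          }
          printf("ln 2 = %.8f\n", log(2.0));
          return 0;
      }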

  17. A contribution to the computation of the impedance in acceleration resonators

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Cong

    2016-05-15

    This thesis focuses on the numerical computation of the impedance in acceleration resonators and corresponding components. For this purpose, a dedicated solver based on the Finite Element Method (FEM) has been developed to compute the broadband impedance in accelerating components. In addition, various numerical approaches have been used to calculate the narrow-band impedance in superconducting radio frequency (RF) cavities. From these, an overview of the calculated results as well as comparisons between the applied numerical approaches is provided. During the design phase of superconducting RF accelerating cavities and components, a challenging and difficult task is the determination of the impedance inside the accelerators with the help of proper computer simulations. The impedance describes the electromagnetic interaction between the particle beam and the accelerator, and it can affect the stability of the particle beam. For a superconducting RF accelerating cavity with waveguides (beam pipes and couplers), the narrow-band impedance, also called the shunt impedance, corresponds to the eigenmodes of the cavity. It depends on the eigenfrequencies and the electromagnetic field distributions of the eigenmodes inside the cavity. On the other hand, the broadband impedance describes the interaction of the particle beam in the waveguides with its environment at arbitrary frequency and beam velocity. Together, the narrow-band and broadband impedance give a complete and detailed picture of the impedance of an accelerator. In order to calculate the broadband longitudinal space charge impedance for acceleration components, a three-dimensional (3D) solver based on the FEM in frequency domain has been developed. To calculate the narrow-band impedance for superconducting RF cavities, we used various numerical approaches. Firstly, the eigenmode solver based on Finite Integration Technique (FIT) and a parallel real-valued FEM (CEM3Dr) eigenmode solver based on
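
    For orientation, here is a compact sketch of how the narrow-band (shunt) impedance quantity R/Q follows from an eigenmode's on-axis field profile, frequency, and stored energy. Conventions differ by a factor of 2 between the "linac" and "circuit" definitions; the linac convention is assumed below, and the half-sine field profile is a toy stand-in, not data from the thesis.

```python
import numpy as np

c0 = 299_792_458.0  # speed of light, m/s

def r_over_q(z, Ez, f, U, beta=1.0):
    """Shunt impedance R/Q of one eigenmode from its on-axis field profile.

    z, Ez : axial sample points [m] and complex E_z(z) [V/m] of the mode
    f, U  : eigenfrequency [Hz] and stored energy [J] of the mode
    Uses the linac convention R/Q = |V|^2 / (2 omega U); other codes omit the 2.
    """
    omega = 2 * np.pi * f
    # Transit-time-corrected accelerating voltage for a particle with v = beta*c.
    V = np.trapz(Ez * np.exp(1j * omega * z / (beta * c0)), z)
    return abs(V) ** 2 / (2 * omega * U)

# Toy pillbox-like example: half-sine field profile over a 10 cm gap.
z = np.linspace(0.0, 0.10, 1001)
Ez = 1.0e6 * np.sin(np.pi * z / 0.10)
print(r_over_q(z, Ez, f=1.3e9, U=1.0))
```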

  18. FINAL REPORT DE-FG02-04ER41317 Advanced Computation and Chaotic Dynamics for Beams and Accelerators

    Energy Technology Data Exchange (ETDEWEB)

    Cary, John R [U. Colorado

    2014-09-08

    During the year ending in August 2013, we continued to investigate the potential of photonic crystal (PhC) materials for acceleration purposes. We worked to characterize the acceleration ability of simple PhC accelerator structures, as well as to characterize PhC materials to determine whether current fabrication techniques can meet the needs of future accelerating structures. We have also continued to design and optimize PhC accelerator structures, with the ultimate goal of finding a new kind of accelerator structure that could offer significant advantages over current RF acceleration technology. The design and optimization of these structures require high-performance computation, and we continue to work on methods to make such computation faster and more efficient.

  19. Fast crustal deformation computing method for multiple computations accelerated by a graphics processing unit cluster

    Science.gov (United States)

    Yamaguchi, Takuma; Ichimura, Tsuyoshi; Yagi, Yuji; Agata, Ryoichiro; Hori, Takane; Hori, Muneo

    2017-08-01

    As high-resolution observational data become more common, the demand for numerical simulations of crustal deformation using 3-D high-fidelity modelling is increasing. To increase the efficiency of performing numerical simulations with high computation costs, we developed a fast solver using heterogeneous computing, with graphics processing units (GPUs) and central processing units, and then used the solver in crustal deformation computations. The solver was based on an iterative solver and was devised so that a large proportion of the computation could be performed on GPUs. To confirm the utility of the proposed solver, we demonstrated a numerical simulation of the coseismic slip distribution estimation, which requires 360,000 crustal deformation computations with 82,196,106 degrees of freedom.
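
    The abstract gives no solver details beyond its iterative nature; as a generic illustration only, here is a minimal preconditioned conjugate gradient in numpy. Its matrix-vector products and vector updates are exactly the kind of kernels such solvers offload to GPUs; the actual solver in the paper is considerably more elaborate.

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-8, max_iter=1000):
    """Minimal preconditioned conjugate gradient for a symmetric positive
    definite A. In a GPU solver, A @ p and the vector updates below are the
    operations executed on the accelerator."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv @ r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = M_inv @ r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Small s.p.d. test system with a Jacobi (diagonal) preconditioner.
n = 200
A = np.diag(np.arange(1.0, n + 1)) + 0.1 * np.eye(n, k=1) + 0.1 * np.eye(n, k=-1)
b = np.ones(n)
x = pcg(A, b, np.diag(1.0 / np.diag(A)))
print(np.linalg.norm(A @ x - b))
```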

  20. Proceedings of the conference on computer codes and the linear accelerator community

    Energy Technology Data Exchange (ETDEWEB)

    Cooper, R.K. (comp.)

    1990-07-01

    The conference whose proceedings you are reading was envisioned as the second in a series, the first having been held in San Diego in January 1988. The intended participants were those people who are actively involved in writing and applying computer codes for the solution of problems related to the design and construction of linear accelerators. The first conference reviewed many of the codes both extant and under development. This second conference provided an opportunity to update the status of those codes, and to provide a forum in which emerging new 3D codes could be described and discussed. The afternoon poster session on the second day of the conference provided an opportunity for extended discussion. All in all, this conference was felt to be quite a useful interchange of ideas and developments in the field of 3D calculations, parallel computation, higher-order optics calculations, and code documentation and maintenance for the linear accelerator community. A third conference is planned.

  1. A Low-Power Scalable Stream Compute Accelerator for General Matrix Multiply (GEMM)

    Directory of Open Access Journals (Sweden)

    Antony Savich

    2014-01-01

    General matrix multiply (GEMM) operations play an important role in determining the performance of many compute-intensive applications. This paper proposes a novel, efficient, highly scalable hardware accelerator that is of equivalent performance to a 2 GHz quad core PC but can be used in low-power applications targeting embedded systems requiring high performance computation. Power, performance, and resource consumption are demonstrated on a fully-functional prototype. The proposed hardware accelerator is 36× more energy efficient per unit of computation compared to a state-of-the-art Xeon processor of equal vintage and is 14× more efficient as a stand-alone platform with equivalent performance. An important comparison between simulated system estimates and real system performance is carried out.
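
    For readers unfamiliar with the kernel, GEMM is the operation C ← αAB + βC; the reference triple loop below (plain Python, illustrative only, not the accelerator's implementation) is what such hardware implements. It is O(m·n·k) flops on O(mk + kn + mn) data, which is why streaming architectures with local accumulation suit it so well.

```python
import numpy as np

def gemm(alpha, A, B, beta, C):
    """Reference general matrix multiply: returns alpha*A@B + beta*C."""
    m, k = A.shape
    k2, n = B.shape
    assert k == k2 and C.shape == (m, n)
    out = C.copy()
    for i in range(m):
        for j in range(n):
            acc = 0.0
            for p in range(k):          # inner dot product, the hot loop
                acc += A[i, p] * B[p, j]
            out[i, j] = alpha * acc + beta * out[i, j]
    return out

rng = np.random.default_rng(1)
A, B, C = rng.random((4, 3)), rng.random((3, 5)), rng.random((4, 5))
assert np.allclose(gemm(2.0, A, B, 0.5, C), 2.0 * A @ B + 0.5 * C)
```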

  2. Performance analysis and acceleration of explicit integration for large kinetic networks using batched GPU computations

    Energy Technology Data Exchange (ETDEWEB)

    Shyles, Daniel [University of Tennessee (UT); Dongarra, Jack J. [University of Tennessee, Knoxville (UTK); Guidry, Mike W. [ORNL; Tomov, Stanimire Z. [ORNL; Billings, Jay Jay [ORNL; Brock, Benjamin A. [ORNL; Haidar Ahmad, Azzam A. [ORNL

    2016-09-01

    We demonstrate the systematic implementation of recently developed fast explicit kinetic integration algorithms that efficiently solve N coupled ordinary differential equations (subject to initial conditions) on modern GPUs. We take representative test cases (Type Ia supernova explosions) and demonstrate two or more orders of magnitude increase in efficiency for solving such systems (of realistic thermonuclear networks coupled to fluid dynamics). This implies that important coupled multiphysics problems in various scientific and technical disciplines that were intractable, or could be simulated only with highly schematic kinetic networks, are now computationally feasible. As examples of such applications we present the computational techniques developed for our ongoing deployment of these new methods on modern GPU accelerators. We show that, as in many other scientific applications ranging from national security to medical advances, the computation can be split into many independent computational tasks, each of relatively small size. As the size of each individual task does not provide sufficient parallelism for the underlying hardware, especially for accelerators, these tasks must be computed concurrently in a single routine, which we call a batched routine, in order to saturate the hardware with enough work.
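
    A minimal sketch of the batching pattern, assuming plain explicit Euler on a batch of toy linear decay networks; the paper's algorithms are more sophisticated (algebraically stabilized explicit methods), but stepping many small independent systems together in one vectorized routine is the essential idea.

```python
import numpy as np

def batched_explicit_euler(f, y0, dt, n_steps):
    """Advance a batch of independent small ODE systems dy/dt = f(y) together.

    y0 has shape (batch, n_species); vectorizing over the batch dimension is
    the host-side analogue of a batched GPU routine, where one small network
    alone cannot saturate the hardware.
    """
    y = y0.copy()
    for _ in range(n_steps):
        y += dt * f(y)          # one explicit step for every system at once
    return y

# Toy "network": linear decay with per-system rates, 10,000 systems of size 3.
rng = np.random.default_rng(2)
rates = rng.random((10_000, 3))
y0 = np.ones((10_000, 3))
y = batched_explicit_euler(lambda y: -rates * y, y0, dt=1e-3, n_steps=1000)
print(y.shape, np.allclose(y, np.exp(-rates), atol=1e-3))
```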

  3. Accelerating Spaceborne SAR Imaging Using Multiple CPU/GPU Deep Collaborative Computing.

    Science.gov (United States)

    Zhang, Fan; Li, Guojun; Li, Wei; Hu, Wei; Hu, Yuxin

    2016-04-07

    With the development of synthetic aperture radar (SAR) technologies in recent years, the huge amount of remote sensing data brings challenges for real-time image processing. Therefore, high performance computing (HPC) methods have been presented to accelerate SAR imaging, especially GPU based methods. In the classical GPU based imaging algorithm, the GPU is employed to accelerate image processing by massively parallel computing, and the CPU is only used to perform auxiliary work such as data input/output (IO). However, the computing capability of the CPU is ignored and underestimated. In this work, a new deep collaborative SAR imaging method based on multiple CPUs/GPUs is proposed to achieve real-time SAR imaging. Through the proposed task partitioning and scheduling strategy, the whole image can be generated by deep collaborative multiple CPU/GPU computing. For the CPU parallel imaging part, the advanced vector extension (AVX) method is introduced into the multi-core CPU parallel method for higher efficiency. As for the GPU parallel imaging, not only are the bottlenecks of memory limitation and frequent data transfers overcome, but various optimization strategies, such as streaming and parallel pipelining, are also applied. Experimental results demonstrate that the deep CPU/GPU collaborative imaging method improves the efficiency of SAR imaging by 270 times over a single-core CPU and realizes real-time imaging, in that the imaging rate exceeds the raw data generation rate.

  4. Accelerating Spaceborne SAR Imaging Using Multiple CPU/GPU Deep Collaborative Computing

    Directory of Open Access Journals (Sweden)

    Fan Zhang

    2016-04-01

    With the development of synthetic aperture radar (SAR) technologies in recent years, the huge amount of remote sensing data brings challenges for real-time image processing. Therefore, high performance computing (HPC) methods have been presented to accelerate SAR imaging, especially GPU based methods. In the classical GPU based imaging algorithm, the GPU is employed to accelerate image processing by massively parallel computing, and the CPU is only used to perform auxiliary work such as data input/output (IO). However, the computing capability of the CPU is ignored and underestimated. In this work, a new deep collaborative SAR imaging method based on multiple CPUs/GPUs is proposed to achieve real-time SAR imaging. Through the proposed task partitioning and scheduling strategy, the whole image can be generated by deep collaborative multiple CPU/GPU computing. For the CPU parallel imaging part, the advanced vector extension (AVX) method is introduced into the multi-core CPU parallel method for higher efficiency. As for the GPU parallel imaging, not only are the bottlenecks of memory limitation and frequent data transfers overcome, but various optimization strategies, such as streaming and parallel pipelining, are also applied. Experimental results demonstrate that the deep CPU/GPU collaborative imaging method improves the efficiency of SAR imaging by 270 times over a single-core CPU and realizes real-time imaging, in that the imaging rate exceeds the raw data generation rate.

  5. BioEM: GPU-accelerated computing of Bayesian inference of electron microscopy images

    CERN Document Server

    Cossio, Pilar; Baruffa, Fabio; Rampp, Markus; Lindenstruth, Volker; Hummer, Gerhard

    2016-01-01

    In cryo-electron microscopy (EM), molecular structures are determined from large numbers of projection images of individual particles. To harness the full power of this single-molecule information, we use the Bayesian inference of EM (BioEM) formalism. By ranking structural models using posterior probabilities calculated for individual images, BioEM in principle addresses the challenge of working with highly dynamic or heterogeneous systems not easily handled in traditional EM reconstruction. However, the calculation of these posteriors for large numbers of particles and models is computationally demanding. Here we present highly parallelized, GPU-accelerated computer software that performs this task efficiently. Our flexible formulation employs CUDA, OpenMP, and MPI parallelization combined with both CPU and GPU computing. The resulting BioEM software scales nearly ideally both on pure CPU and on CPU+GPU architectures, thus enabling Bayesian analysis of tens of thousands of images in a reasonable time. The g...

  6. Accelerating Development of EV Batteries Through Computer-Aided Engineering (Presentation)

    Energy Technology Data Exchange (ETDEWEB)

    Pesaran, A.; Kim, G. H.; Smith, K.; Santhanagopalan, S.

    2012-12-01

    The Department of Energy's Vehicle Technology Program has launched the Computer-Aided Engineering for Automotive Batteries (CAEBAT) project to work with national labs, industry, and software vendors to develop sophisticated software. As coordinator, NREL has teamed with a number of companies to help improve and accelerate battery design and production. This presentation provides an overview of CAEBAT, including its predictive computer simulation of Li-ion batteries known as the Multi-Scale Multi-Dimensional (MSMD) model framework. MSMD's modular, flexible architecture connects the physics of battery charge/discharge processes, thermal control, safety, and reliability in a computationally efficient manner. This allows independent development of submodels at the cell and pack levels.

  7. Concepts and techniques: Active electronics and computers in safety-critical accelerator operation

    Energy Technology Data Exchange (ETDEWEB)

    Frankel, R.S.

    1995-12-31

    The Relativistic Heavy Ion Collider (RHIC), under construction at Brookhaven National Laboratory, requires an extensive Access Control System to protect personnel from radiation, oxygen deficiency, and electrical hazards. In addition, the complicated nature of operating the Collider as part of a complex of other accelerators necessitates the use of active electronic measurement circuitry to ensure compliance with established Operational Safety Limits. Solutions were devised that permit the use of modern computer and interconnection technologies for safety-critical applications, while preserving and enhancing tried and proven protection methods. In addition, a set of guidelines regarding required performance for accelerator safety systems and a handbook of design criteria and rules were developed to assist future system designers and to provide a framework for internal review and regulation.

  8. Accelerating Relevance-Vector-Machine-Based Classification of Hyperspectral Image with Parallel Computing

    Directory of Open Access Journals (Sweden)

    Chao Dong

    2012-01-01

    Benefiting from the kernel trick and the sparsity property, the relevance vector machine (RVM) can acquire a sparse solution with generalization ability equivalent to that of the support vector machine. The sparse solution requires much less time in prediction, making the RVM a promising candidate for classifying large-scale hyperspectral images. However, the RVM is not widely used because of its slow training procedure. To solve this problem, the classification of hyperspectral images using the RVM is accelerated by parallel computing techniques in this paper. The parallelization is addressed in terms of the multiclass strategy, the ensemble of multiple weak classifiers, and the matrix operations. The parallel RVMs are implemented using the C language plus the parallel functions of linear algebra packages and the message passing interface library. The proposed methods are evaluated on the AVIRIS Indian Pines data set using a Beowulf cluster and multicore platforms. The results show that the parallel RVMs clearly accelerate the training procedure.

  9. Accelerating image registration of MRI by GPU-based parallel computation.

    Science.gov (United States)

    Huang, Teng-Yi; Tang, Yu-Wei; Ju, Shiun-Ying

    2011-06-01

    Automatic image registration for MRI applications generally requires many iteration loops and is, therefore, a time-consuming task. This drawback prolongs data analysis and delays the workflow of clinical routines. Recent advances in the massively parallel computation of graphics processing units (GPUs) may be a solution to this problem. This study proposes a method to accelerate registration calculations, especially for the popular statistical parametric mapping (SPM) system. This study reimplemented the image registration of the SPM system to achieve an approximately 14-fold increase in speed in registering single-modality intrasubject data sets. The proposed program is fully compatible with SPM, allowing the user to simply replace the original image registration library of SPM to gain the benefit of the computational power provided by commodity graphics processors. In conclusion, the GPU computation method is a practical way to accelerate automatic image registration. This technology promises a broader scope of application in the field of image registration. Copyright © 2011 Elsevier Inc. All rights reserved.

  10. Computational Materials Science and Chemistry: Accelerating Discovery and Innovation through Simulation-Based Engineering and Science

    Energy Technology Data Exchange (ETDEWEB)

    Crabtree, George [Argonne National Lab. (ANL), Argonne, IL (United States); Glotzer, Sharon [University of Michigan; McCurdy, Bill [University of California Davis; Roberto, Jim [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2010-07-26

    This report is based on a SC Workshop on Computational Materials Science and Chemistry for Innovation on July 26-27, 2010, to assess the potential of state-of-the-art computer simulations to accelerate understanding and discovery in materials science and chemistry, with a focus on potential impacts in energy technologies and innovation. The urgent demand for new energy technologies has greatly exceeded the capabilities of today's materials and chemical processes. To convert sunlight to fuel, efficiently store energy, or enable a new generation of energy production and utilization technologies requires the development of new materials and processes of unprecedented functionality and performance. New materials and processes are critical pacing elements for progress in advanced energy systems and virtually all industrial technologies. Over the past two decades, the United States has developed and deployed the world's most powerful collection of tools for the synthesis, processing, characterization, and simulation and modeling of materials and chemical systems at the nanoscale, dimensions of a few atoms to a few hundred atoms across. These tools, which include world-leading x-ray and neutron sources, nanoscale science facilities, and high-performance computers, provide an unprecedented view of the atomic-scale structure and dynamics of materials and the molecular-scale basis of chemical processes. For the first time in history, we are able to synthesize, characterize, and model materials and chemical behavior at the length scale where this behavior is controlled. This ability is transformational for the discovery process and, as a result, confers a significant competitive advantage. Perhaps the most spectacular increase in capability has been demonstrated in high performance computing. Over the past decade, computational power has increased by a factor of a million due to advances in hardware and software. This rate of improvement, which shows no sign of

  11. Proposing a Strategic Framework for Distributed Manufacturing Execution System Using Cloud Computing

    Directory of Open Access Journals (Sweden)

    Shiva Khalili Gheidari

    2013-07-01

    This paper introduces a strategic framework that uses service-oriented architecture to design a distributed MES over the cloud. In this study, the main structure of the framework is defined in terms of a series of modules that communicate with each other through a design pattern called mediator. The framework focuses on the main module, which handles distributed orders together with the other modules, and finally suggests the benefit of using the cloud in comparison with previous architectures. The main structure of the framework (the mediator) and the benefit of focusing on the main module by using the cloud should be elaborated further; likewise, the aim and the results of comparing this method with previous architectures, in terms of both quality and quantity, are not described.
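
    A minimal Python sketch of the mediator pattern named in the abstract: modules never call each other directly but publish messages to the mediator, which routes them. The module names and message topics below are invented for illustration and are not taken from the proposed framework.

```python
from collections import defaultdict
from typing import Callable

class Mediator:
    """Routes messages between modules that are otherwise decoupled."""

    def __init__(self):
        self._handlers = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._handlers[topic].append(handler)

    def publish(self, topic: str, message: dict) -> None:
        for handler in self._handlers[topic]:
            handler(message)

mediator = Mediator()
mediator.subscribe("order.created",
                   lambda m: print(f"scheduling module received order {m['id']}"))
# The order-handling module only talks to the mediator, never to scheduling.
mediator.publish("order.created", {"id": 42, "site": "plant-A"})
```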

  12. Computer simulation of 2-D and 3-D ion beam extraction and acceleration

    Energy Technology Data Exchange (ETDEWEB)

    Ido, Shunji; Nakajima, Yuji [Saitama Univ., Urawa (Japan). Faculty of Engineering

    1997-03-01

    The two-dimensional code and the three-dimensional code have been developed to study the physical features of ion beams in the extraction and acceleration stages. Using the two-dimensional code, the design of the first electrode (plasma grid) is examined with regard to beam divergence. In computational studies using the three-dimensional code, the axis-off model of the ion beam is investigated. It is found that the deflection angle of the ion beam is proportional to the gap displacement of the electrodes. (author)

  13. Computational Materials Science and Chemistry: Accelerating Discovery and Innovation through Simulation-Based Engineering and Science

    Energy Technology Data Exchange (ETDEWEB)

    Crabtree, George [Argonne National Lab. (ANL), Argonne, IL (United States); Glotzer, Sharon [University of Michigan; McCurdy, Bill [University of California Davis; Roberto, Jim [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2010-07-26

    This report is based on a SC Workshop on Computational Materials Science and Chemistry for Innovation on July 26-27, 2010, to assess the potential of state-of-the-art computer simulations to accelerate understanding and discovery in materials science and chemistry, with a focus on potential impacts in energy technologies and innovation. The urgent demand for new energy technologies has greatly exceeded the capabilities of today's materials and chemical processes. To convert sunlight to fuel, efficiently store energy, or enable a new generation of energy production and utilization technologies requires the development of new materials and processes of unprecedented functionality and performance. New materials and processes are critical pacing elements for progress in advanced energy systems and virtually all industrial technologies. Over the past two decades, the United States has developed and deployed the world's most powerful collection of tools for the synthesis, processing, characterization, and simulation and modeling of materials and chemical systems at the nanoscale, dimensions of a few atoms to a few hundred atoms across. These tools, which include world-leading x-ray and neutron sources, nanoscale science facilities, and high-performance computers, provide an unprecedented view of the atomic-scale structure and dynamics of materials and the molecular-scale basis of chemical processes. For the first time in history, we are able to synthesize, characterize, and model materials and chemical behavior at the length scale where this behavior is controlled. This ability is transformational for the discovery process and, as a result, confers a significant competitive advantage. Perhaps the most spectacular increase in capability has been demonstrated in high performance computing. Over the past decade, computational power has increased by a factor of a million due to advances in hardware and software. This rate of improvement, which shows no sign of

  14. Semiempirical Quantum Chemical Calculations Accelerated on a Hybrid Multicore CPU-GPU Computing Platform.

    Science.gov (United States)

    Wu, Xin; Koslowski, Axel; Thiel, Walter

    2012-07-10

    In this work, we demonstrate that semiempirical quantum chemical calculations can be accelerated significantly by leveraging the graphics processing unit (GPU) as a coprocessor on a hybrid multicore CPU-GPU computing platform. Semiempirical calculations using the MNDO, AM1, PM3, OM1, OM2, and OM3 model Hamiltonians were systematically profiled for three types of test systems (fullerenes, water clusters, and solvated crambin) to identify the most time-consuming sections of the code. The corresponding routines were ported to the GPU and optimized employing both existing library functions and a GPU kernel that carries out a sequence of noniterative Jacobi transformations during pseudodiagonalization. The overall computation times for single-point energy calculations and geometry optimizations of large molecules were reduced by one order of magnitude for all methods, as compared to runs on a single CPU core.
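
    For illustration, here is a generic cyclic Jacobi diagonalization of a symmetric matrix. Pseudodiagonalization in semiempirical codes applies the same kind of 2×2 rotations but only between occupied and virtual orbitals, so this sketch shows the flavor of the rotation sequence in the GPU kernel rather than its exact algorithm.

```python
import numpy as np

def jacobi_eigen(A, sweeps=10, tol=1e-12):
    """Cyclic Jacobi diagonalization of a symmetric matrix via 2x2 rotations."""
    A = A.copy()
    n = A.shape[0]
    V = np.eye(n)
    for _ in range(sweeps):
        if np.sqrt(np.sum(np.tril(A, -1) ** 2)) < tol:
            break
        for p in range(n - 1):
            for q in range(p + 1, n):
                if abs(A[p, q]) < tol:
                    continue
                # Rotation angle chosen so that the updated A[p, q] becomes 0.
                theta = 0.5 * np.arctan2(2 * A[p, q], A[q, q] - A[p, p])
                c, s = np.cos(theta), np.sin(theta)
                G = np.eye(n)
                G[p, p] = G[q, q] = c
                G[p, q], G[q, p] = s, -s
                A = G.T @ A @ G
                V = V @ G
    return np.diag(A), V

M = np.array([[4.0, 1.0, 0.5], [1.0, 3.0, 0.2], [0.5, 0.2, 1.0]])
w, V = jacobi_eigen(M)
print(np.allclose(V @ np.diag(w) @ V.T, M))   # eigendecomposition recovered
```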

  15. A Unified Algorithm for Accelerating Edit-Distance Computation via Text-Compression

    CERN Document Server

    Hermelin, Danny; Landau, Shir; Weimann, Oren

    2009-01-01

    We present a unified framework for accelerating edit-distance computation between two compressible strings using straight-line programs. For two strings of total length $N$ having straight-line program representations of total size $n$, we provide an algorithm running in $O(n^{1.4}N^{1.2})$ time for computing the edit-distance of these two strings under any rational scoring function, and an $O(n^{1.34}N^{1.34})$ time algorithm for arbitrary scoring functions. This improves on a recent algorithm of Tiskin that runs in $O(nN^{1.5})$ time, and works only for rational scoring functions. Also, in the last part of the paper, we show how the classical Four-Russians technique can be incorporated into our SLP edit-distance scheme, giving us a simple $\Omega(\lg N)$ speed-up in the case of arbitrary scoring functions, for any pair of strings.
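
    For context, here is the classical quadratic dynamic program that such compression-based algorithms improve upon, shown with unit costs (a rational scoring function); this baseline is what SLP-based methods beat when the strings compress well (n much smaller than N).

```python
def edit_distance(a: str, b: str) -> int:
    """Classic O(|a|*|b|) dynamic program (unit-cost Levenshtein distance)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        cur = [i]
        for j, cb in enumerate(b, start=1):
            cur.append(min(prev[j] + 1,                # delete from a
                           cur[j - 1] + 1,             # insert into a
                           prev[j - 1] + (ca != cb)))  # substitute / match
        prev = cur
    return prev[-1]

assert edit_distance("kitten", "sitting") == 3
```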

  16. Computationally efficient methods for modelling laser wakefield acceleration in the blowout regime

    CERN Document Server

    Cowan, B M; Beck, A; Davoine, X; Bunkers, K; Lifschitz, A F; Lefebvre, E; Bruhwiler, D L; Shadwick, B A; Umstadter, D P

    2012-01-01

    Electron self-injection and acceleration until dephasing in the blowout regime is studied for a set of initial conditions typical of recent experiments with 100 terawatt-class lasers. Two different approaches to computationally efficient, fully explicit, three-dimensional particle-in-cell modelling are examined. First, the Cartesian code VORPAL using a perfect-dispersion electromagnetic solver precisely describes the laser pulse and bubble dynamics, taking advantage of coarser resolution in the propagation direction, with a proportionally larger time step. Using third-order splines for macroparticles helps suppress the sampling noise while keeping the usage of computational resources modest. The second way to reduce the simulation load is using reduced-geometry codes. In our case, the quasi-cylindrical code CALDER-CIRC uses decomposition of fields and currents into a set of poloidal modes, while the macroparticles move in the Cartesian 3D space. Cylindrical symmetry of the interaction allows using just two mo...

  17. Historic Seismicity, Computed Peak Ground Accelerations, and Seismic Site Conditions for Northeast Mexico

    Science.gov (United States)

    Montalvo-Arriet, J. C.; Galván-Ramírez, I. N.; Ramos-Zuñiga, L. G.; Navarro de León, I.; Ramírez-Fernández, J. A.; Quintanilla-López, Y.; Cavazos-Tovar, N. P.

    2007-05-01

    In this study we present the historic seismicity, computed peak ground accelerations, and mapping of seismic site conditions for northeast Mexico. We start with a compilation of the regional seismicity in northeast Mexico (24-31°N, 87-106°W) for the 1787-2006 period. Our study area lies within three morphotectonic provinces: Basin and Range-Rio Grande rift, Sierra Madre Oriental, and Gulf Coastal Plain. Peak ground acceleration (PGA) maps were computed for three different scenarios: 1928 Parral, Chihuahua (MW = 6.5); 1931 Valentine, Texas (MW = 6.4); and a hypothetical earthquake located in central Coahuila (MW = 6.5). Ground acceleration values were computed using attenuation relations developed for central and eastern North America and the Basin and Range province. The hypothetical earthquake in central Coahuila is considered a critical scenario for the main cities of northeast Mexico. The damage associated with this hypothetical earthquake could be severe because the majority of the buildings were constructed without allowance for seismic accelerations. The expected PGA values in Monterrey, Saltillo, and Monclova range from 30 to 70 cm/s² (0.03 to 0.07 g). This earthquake might also produce or trigger significant landslides and rock falls in the Sierra Madre Oriental, where several cities are located (e.g. suburbs of Monterrey). Additionally, the Vs30 distributions for the state of Nuevo Leon and the cities of Linares and Monterrey are presented. The Vs30 data were obtained using seismic refraction profiling correlated with borehole information. According to the NEHRP soil classification, site classes A, B, and C are dominant. Sites of class D occupy minor areas in both cities. Due to the semi-arid conditions in northeast Mexico, we obtained the highest values of Vs30 in Quaternary deposits (alluvium) cemented by caliche. Similar values of Vs30 were obtained in Reno and Las Vegas, Nevada. This work constitutes the first attempt at understanding and

  18. Hierarchical Acceleration of Multilevel Monte Carlo Methods for Computationally Expensive Simulations in Reservoir Modeling

    Science.gov (United States)

    Zhang, G.; Lu, D.; Webster, C.

    2014-12-01

    The rational management of oil and gas reservoirs requires an understanding of their response to existing and planned schemes of exploitation and operation. Such understanding requires analyzing and quantifying the influence of subsurface uncertainties on predictions of oil and gas production. As subsurface properties are typically heterogeneous, giving rise to a large number of model parameters, the dimension-independent Monte Carlo (MC) method is usually used for uncertainty quantification (UQ). Recently, multilevel Monte Carlo (MLMC) methods were proposed as a variance reduction technique to improve the computational efficiency of MC methods in UQ. In this effort, we propose a new acceleration approach for the MLMC method to further reduce the total computational cost by exploiting model hierarchies. Specifically, for each model simulation on a newly added level of MLMC, we take advantage of an approximation of the model outputs constructed from simulations on previous levels to provide better initial states for the new simulations, which helps improve efficiency by, e.g., reducing the number of iterations in linear system solving or the number of needed time steps. This is achieved by using mesh-free interpolation methods, such as Shepard interpolation and radial basis approximation. Our approach is applied to a highly heterogeneous reservoir model from the tenth SPE project. The results indicate that the accelerated MLMC can achieve the same accuracy as standard MLMC at a significantly reduced cost.
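
    A minimal sketch of the underlying (standard, unaccelerated) MLMC telescoping estimator, using a toy geometric-Brownian-motion "simulator" in place of a reservoir model; the paper's contribution, reusing coarse-level results as initial states for fine-level runs, is not shown here.

```python
import numpy as np

def gbm_pair(level, rng, m0=2):
    """Coupled fine/coarse Euler estimates of X_1 for dX = X dW, X_0 = 1.

    The same Brownian increments drive both discretizations: the coarse path
    uses pairwise sums of the fine increments. Returns (P_fine, P_coarse),
    with P_coarse = 0 on level 0 so the telescoping sum starts at E[P_0].
    """
    n_fine = m0 * 2 ** level
    dw = rng.normal(0.0, np.sqrt(1.0 / n_fine), n_fine)
    x_fine = 1.0
    for d in dw:
        x_fine += x_fine * d
    if level == 0:
        return x_fine, 0.0
    x_coarse = 1.0
    for d in dw.reshape(-1, 2).sum(axis=1):
        x_coarse += x_coarse * d
    return x_fine, x_coarse

def mlmc(max_level, n_per_level, seed=0):
    """E[P_L] ~= sum_l mean(P_l - P_{l-1}): many cheap coarse samples, few fine."""
    rng = np.random.default_rng(seed)
    return sum(
        np.mean([np.subtract(*gbm_pair(level, rng)) for _ in range(n)])
        for level, n in zip(range(max_level + 1), n_per_level)
    )

# The exact answer for this toy model is E[X_1] = 1.
print(mlmc(4, [20_000, 8_000, 3_000, 1_000, 400]))
```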

  19. GeauxDock: Accelerating Structure-Based Virtual Screening with Heterogeneous Computing.

    Directory of Open Access Journals (Sweden)

    Ye Fang

    Computational modeling of drug binding to proteins is an integral component of direct drug design. Particularly, structure-based virtual screening is often used to perform large-scale modeling of putative associations between small organic molecules and their pharmacologically relevant protein targets. Because of a large number of drug candidates to be evaluated, an accurate and fast docking engine is a critical element of virtual screening. Consequently, highly optimized docking codes are of paramount importance for the effectiveness of virtual screening methods. In this communication, we describe the implementation, tuning and performance characteristics of GeauxDock, a recently developed molecular docking program. GeauxDock is built upon the Monte Carlo algorithm and features a novel scoring function combining physics-based energy terms with statistical and knowledge-based potentials. Developed specifically for heterogeneous computing platforms, the current version of GeauxDock can be deployed on modern, multi-core Central Processing Units (CPUs) as well as massively parallel accelerators, Intel Xeon Phi and NVIDIA Graphics Processing Unit (GPU). First, we carried out a thorough performance tuning of the high-level framework and the docking kernel to produce a fast serial code, which was then ported to shared-memory multi-core CPUs yielding a near-ideal scaling. Further, using Xeon Phi gives 1.9× performance improvement over a dual 10-core Xeon CPU, whereas the best GPU accelerator, GeForce GTX 980, achieves a speedup as high as 3.5×. On that account, GeauxDock can take advantage of modern heterogeneous architectures to considerably accelerate structure-based virtual screening applications. GeauxDock is open-sourced and publicly available at www.brylinski.org/geauxdock and https://figshare.com/articles/geauxdock_tar_gz/3205249.
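
    As background, here is a generic Metropolis Monte Carlo loop of the kind such docking searches are built on; the `energy` function and pose representation below are placeholders for illustration, not GeauxDock's actual scoring function or interface.

```python
import numpy as np

def metropolis_docking(energy, pose0, n_steps=10_000, step=0.3, beta=1.0, seed=0):
    """Generic Metropolis Monte Carlo over a vector of pose parameters."""
    rng = np.random.default_rng(seed)
    pose, e = pose0.copy(), energy(pose0)
    best_pose, best_e = pose.copy(), e
    for _ in range(n_steps):
        trial = pose + rng.normal(0.0, step, pose.shape)   # random perturbation
        e_trial = energy(trial)
        # Accept downhill moves always, uphill moves with Boltzmann probability.
        if e_trial <= e or rng.random() < np.exp(-beta * (e_trial - e)):
            pose, e = trial, e_trial
            if e < best_e:
                best_pose, best_e = pose.copy(), e
    return best_pose, best_e

# Toy quadratic "scoring function" with its minimum at (1, 2, 3).
target = np.array([1.0, 2.0, 3.0])
pose, score = metropolis_docking(lambda p: np.sum((p - target) ** 2), np.zeros(3))
print(pose.round(2), round(score, 4))
```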

  20. GeauxDock: Accelerating Structure-Based Virtual Screening with Heterogeneous Computing

    Science.gov (United States)

    Fang, Ye; Ding, Yun; Feinstein, Wei P.; Koppelman, David M.; Moreno, Juana; Jarrell, Mark; Ramanujam, J.; Brylinski, Michal

    2016-01-01

    Computational modeling of drug binding to proteins is an integral component of direct drug design. Particularly, structure-based virtual screening is often used to perform large-scale modeling of putative associations between small organic molecules and their pharmacologically relevant protein targets. Because of a large number of drug candidates to be evaluated, an accurate and fast docking engine is a critical element of virtual screening. Consequently, highly optimized docking codes are of paramount importance for the effectiveness of virtual screening methods. In this communication, we describe the implementation, tuning and performance characteristics of GeauxDock, a recently developed molecular docking program. GeauxDock is built upon the Monte Carlo algorithm and features a novel scoring function combining physics-based energy terms with statistical and knowledge-based potentials. Developed specifically for heterogeneous computing platforms, the current version of GeauxDock can be deployed on modern, multi-core Central Processing Units (CPUs) as well as massively parallel accelerators, Intel Xeon Phi and NVIDIA Graphics Processing Unit (GPU). First, we carried out a thorough performance tuning of the high-level framework and the docking kernel to produce a fast serial code, which was then ported to shared-memory multi-core CPUs yielding a near-ideal scaling. Further, using Xeon Phi gives 1.9× performance improvement over a dual 10-core Xeon CPU, whereas the best GPU accelerator, GeForce GTX 980, achieves a speedup as high as 3.5×. On that account, GeauxDock can take advantage of modern heterogeneous architectures to considerably accelerate structure-based virtual screening applications. GeauxDock is open-sourced and publicly available at www.brylinski.org/geauxdock and https://figshare.com/articles/geauxdock_tar_gz/3205249. PMID:27420300

  1. GeauxDock: Accelerating Structure-Based Virtual Screening with Heterogeneous Computing.

    Science.gov (United States)

    Fang, Ye; Ding, Yun; Feinstein, Wei P; Koppelman, David M; Moreno, Juana; Jarrell, Mark; Ramanujam, J; Brylinski, Michal

    2016-01-01

    Computational modeling of drug binding to proteins is an integral component of direct drug design. Particularly, structure-based virtual screening is often used to perform large-scale modeling of putative associations between small organic molecules and their pharmacologically relevant protein targets. Because of a large number of drug candidates to be evaluated, an accurate and fast docking engine is a critical element of virtual screening. Consequently, highly optimized docking codes are of paramount importance for the effectiveness of virtual screening methods. In this communication, we describe the implementation, tuning and performance characteristics of GeauxDock, a recently developed molecular docking program. GeauxDock is built upon the Monte Carlo algorithm and features a novel scoring function combining physics-based energy terms with statistical and knowledge-based potentials. Developed specifically for heterogeneous computing platforms, the current version of GeauxDock can be deployed on modern, multi-core Central Processing Units (CPUs) as well as massively parallel accelerators, Intel Xeon Phi and NVIDIA Graphics Processing Unit (GPU). First, we carried out a thorough performance tuning of the high-level framework and the docking kernel to produce a fast serial code, which was then ported to shared-memory multi-core CPUs yielding a near-ideal scaling. Further, using Xeon Phi gives 1.9× performance improvement over a dual 10-core Xeon CPU, whereas the best GPU accelerator, GeForce GTX 980, achieves a speedup as high as 3.5×. On that account, GeauxDock can take advantage of modern heterogeneous architectures to considerably accelerate structure-based virtual screening applications. GeauxDock is open-sourced and publicly available at www.brylinski.org/geauxdock and https://figshare.com/articles/geauxdock_tar_gz/3205249.

  2. Computing Principal Eigenvectors of Large Web Graphs: Algorithms and Accelerations Related to PageRank and HITS

    Science.gov (United States)

    Nagasinghe, Iranga

    2010-01-01

    This thesis investigates and develops a few acceleration techniques for the search engine algorithms used in PageRank and HITS computations. PageRank and HITS methods are two highly successful applications of modern Linear Algebra in computer science and engineering. They constitute the essential technologies that accounted for the immense growth and…
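
    For context, PageRank's core computation is a power iteration for the principal eigenvector of the Google matrix; the minimal numpy sketch below is a textbook version, not code from the thesis.

```python
import numpy as np

def pagerank(adj, damping=0.85, tol=1e-10, max_iter=200):
    """PageRank by power iteration. adj[i, j] = 1 if page i links to page j.

    Dangling pages (no out-links) are treated as linking to every page,
    the usual fix that keeps the matrix row-stochastic.
    """
    n = adj.shape[0]
    out = adj.sum(axis=1, keepdims=True)
    P = np.where(out > 0, adj / np.maximum(out, 1), 1.0 / n)
    r = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        r_new = damping * (r @ P) + (1 - damping) / n
        if np.abs(r_new - r).sum() < tol:
            break
        r = r_new
    return r

# 4-page toy web: 0->1, 0->2, 1->2, 2->0, 3->2.
adj = np.array([[0, 1, 1, 0],
                [0, 0, 1, 0],
                [1, 0, 0, 0],
                [0, 0, 1, 0]], dtype=float)
print(pagerank(adj).round(3))
```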

  3. Computing Principal Eigenvectors of Large Web Graphs: Algorithms and Accelerations Related to PageRank and HITS

    Science.gov (United States)

    Nagasinghe, Iranga

    2010-01-01

    This thesis investigates and develops a few acceleration techniques for the search engine algorithms used in PageRank and HITS computations. PageRank and HITS methods are two highly successful applications of modern Linear Algebra in computer science and engineering. They constitute the essential technologies that accounted for the immense growth and…

  4. Teaching Strategic Text Review by Computer and Interaction with Student Characteristics.

    Science.gov (United States)

    Tobias, Sigmund

    1988-01-01

    Discussion of reading strategies focuses on a study of high school students that used three presentation modes via computer, with and without explanations about the value of text review. Highlights include pretests and posttests, student characteristics, test anxiety, prior knowledge, and implications for aptitude treatment interaction research…

  5. Command, Control, Communication, Computers and Information Technology (C4&IT). Strategic Plan, FY2008 - 2012

    Science.gov (United States)

    2008-01-01

    [Extraction fragment from the plan; recoverable content: an environment that includes migration of Microsite participants and documents to an external Web site via the FatWire Content Management System (CMS), followed by entries from the plan's acronym list: CMS, Content Management System; CND, Computer Network Defense; COBIT, Control Objectives for Information and related Technology.]

  6. Computer-mediated communication as a channel for social resistance : The strategic side of SIDE

    NARCIS (Netherlands)

    Spears, R; Lea, M; Corneliussen, RA; Postmes, T; Ter Haar, W

    2002-01-01

    In two studies, the authors tested predictions derived from the social identity model of deindividuation effects (SIDE) concerning the potential of computer-mediated communication (CMC) to serve as a means to resist powerful out-groups. Earlier research using the SIDE model indicates that the anonym

  7. Seize the "Broadband China" Strategic Opportunity to Accelerate the Comprehensive Development of Radio and Television Networks%抓住宽带中国战略机遇加快广电网络全面发展

    Institute of Scientific and Technical Information of China (English)

    李智勇

    2015-01-01

    In recent years, China has issued a series of policies around the "Broadband China" strategy. China Telecom, China Unicom, China Mobile, and China Radio and Television Network, the four state-owned operators, have each set out their strategies, measures, and development goals. Judging from these strategies and goals, each operator is playing to its particular strengths and advantages, pushing them to the fullest, while sparing no effort to expand into other sectors. The Internet, cloud computing, big data, and "Broadband China" have become key factors in the development of modern enterprises; "Internet plus" has ignited innovation across industries, and innovation through reform has become a basic rule and logic of economic and social development. Radio and television network operators must firmly seize the strategic opportunity of "Broadband China" and, oriented toward network convergence and the long-term evolution of the next-generation broadcast television network, accelerate the development of broadband intelligent networks.

  8. Accelerating Computation of DCM for ERP in MATLAB by External Function Calls to the GPU.

    Directory of Open Access Journals (Sweden)

    Wei-Jen Wang

    This study aims to improve the performance of Dynamic Causal Modelling for Event Related Potentials (DCM for ERP) in MATLAB by using external function calls to a graphics processing unit (GPU). DCM for ERP is an advanced method for studying neuronal effective connectivity. DCM utilizes an iterative procedure, the expectation maximization (EM) algorithm, to find the optimal parameters given a set of observations and the underlying probability model. As the EM algorithm is computationally demanding and the analysis faces possible combinatorial explosion of models to be tested, we propose a parallel computing scheme using the GPU to achieve a fast estimation of DCM for ERP. The computation of DCM for ERP is dynamically partitioned and distributed to threads for parallel processing, according to the DCM model complexity and the hardware constraints. The performance efficiency of this hardware-dependent thread arrangement strategy was evaluated using the synthetic data. The experimental data were used to validate the accuracy of the proposed computing scheme and quantify the time saving in practice. The simulation results show that the proposed scheme can accelerate the computation by a factor of 155 for the parallel part. For experimental data, the speedup factor is about 7 per model on average, depending on the model complexity and the data. This GPU-based implementation of DCM for ERP gives qualitatively the same results as the original MATLAB implementation does at the group level analysis. In conclusion, we believe that the proposed GPU-based implementation is very useful for users as a fast screen tool to select the most likely model and may provide implementation guidance for possible future clinical applications such as online diagnosis.

  9. Accelerating Computation of DCM for ERP in MATLAB by External Function Calls to the GPU.

    Science.gov (United States)

    Wang, Wei-Jen; Hsieh, I-Fan; Chen, Chun-Chuan

    2013-01-01

    This study aims to improve the performance of Dynamic Causal Modelling for Event Related Potentials (DCM for ERP) in MATLAB by using external function calls to a graphics processing unit (GPU). DCM for ERP is an advanced method for studying neuronal effective connectivity. DCM utilizes an iterative procedure, the expectation maximization (EM) algorithm, to find the optimal parameters given a set of observations and the underlying probability model. As the EM algorithm is computationally demanding and the analysis faces possible combinatorial explosion of models to be tested, we propose a parallel computing scheme using the GPU to achieve a fast estimation of DCM for ERP. The computation of DCM for ERP is dynamically partitioned and distributed to threads for parallel processing, according to the DCM model complexity and the hardware constraints. The performance efficiency of this hardware-dependent thread arrangement strategy was evaluated using the synthetic data. The experimental data were used to validate the accuracy of the proposed computing scheme and quantify the time saving in practice. The simulation results show that the proposed scheme can accelerate the computation by a factor of 155 for the parallel part. For experimental data, the speedup factor is about 7 per model on average, depending on the model complexity and the data. This GPU-based implementation of DCM for ERP gives qualitatively the same results as the original MATLAB implementation does at the group level analysis. In conclusion, we believe that the proposed GPU-based implementation is very useful for users as a fast screen tool to select the most likely model and may provide implementation guidance for possible future clinical applications such as online diagnosis.

  10. Accelerating Computation of DCM for ERP in MATLAB by External Function Calls to the GPU

    Science.gov (United States)

    Wang, Wei-Jen; Hsieh, I-Fan; Chen, Chun-Chuan

    2013-01-01

    This study aims to improve the performance of Dynamic Causal Modelling for Event Related Potentials (DCM for ERP) in MATLAB by using external function calls to a graphics processing unit (GPU). DCM for ERP is an advanced method for studying neuronal effective connectivity. DCM utilizes an iterative procedure, the expectation maximization (EM) algorithm, to find the optimal parameters given a set of observations and the underlying probability model. As the EM algorithm is computationally demanding and the analysis faces possible combinatorial explosion of models to be tested, we propose a parallel computing scheme using the GPU to achieve a fast estimation of DCM for ERP. The computation of DCM for ERP is dynamically partitioned and distributed to threads for parallel processing, according to the DCM model complexity and the hardware constraints. The performance efficiency of this hardware-dependent thread arrangement strategy was evaluated using the synthetic data. The experimental data were used to validate the accuracy of the proposed computing scheme and quantify the time saving in practice. The simulation results show that the proposed scheme can accelerate the computation by a factor of 155 for the parallel part. For experimental data, the speedup factor is about 7 per model on average, depending on the model complexity and the data. This GPU-based implementation of DCM for ERP gives qualitatively the same results as the original MATLAB implementation does at the group level analysis. In conclusion, we believe that the proposed GPU-based implementation is very useful for users as a fast screen tool to select the most likely model and may provide implementation guidance for possible future clinical applications such as online diagnosis. PMID:23840507

  11. An Experimental and Computational Study of a Shock-Accelerated Heavy Gas Cylinder

    Science.gov (United States)

    Zoldi, Cindy; Prestridge, Katherine; Tomkins, Christopher; Marr-Lyon, Mark; Rightley, Paul; Benjamin, Robert; Vorobieff, Peter

    2002-11-01

    We present updated results of an experimental and computational study that examines the evolution of a heavy gas (SF6) cylinder surrounded by air when accelerated by a planar Mach 1.2 shock wave. From each shock tube experiment, we obtain one image of the experimental initial conditions and six images of the time evolution of the cylinder. Moreover, the implementation of Particle Image Velocimetry (PIV) also allows us to determine the velocity field at the last experimental time. Simulations incorporating the two-dimensional image of the experimental initial conditions are performed using the adaptive-mesh Eulerian code, RAGE. A computational study shows that agreement between the measured and computed velocities is achieved by decreasing the peak SF6 concentration to 60%, which was measured in the previous "gas curtain" experiments, and diffusing the air/SF6 interface in the experimental initial conditions. These modifications are consistent with the observation that the SF6 gas diffuses faster than the fog particles used to track the gas. Images of the experimental initial conditions, obtained using planar laser Rayleigh scattering, quantify the diffusion lag between the SF6 gas and the fog particles.

  12. Parallelizing Epistasis Detection in GWAS on FPGA and GPU-Accelerated Computing Systems.

    Science.gov (United States)

    González-Domínguez, Jorge; Wienbrandt, Lars; Kässens, Jan Christian; Ellinghaus, David; Schimmler, Manfred; Schmidt, Bertil

    2015-01-01

    High-throughput genotyping technologies (such as SNP-arrays) allow the rapid collection of up to a few million genetic markers of an individual. Detecting epistasis (based on 2-SNP interactions) in Genome-Wide Association Studies is an important but time consuming operation since statistical computations have to be performed for each pair of measured markers. Computational methods to detect epistasis therefore suffer from prohibitively long runtimes; e.g., processing a moderately-sized dataset consisting of about 500,000 SNPs and 5,000 samples requires several days using state-of-the-art tools on a standard 3 GHz CPU. In this paper, we demonstrate how this task can be accelerated using a combination of fine-grained and coarse-grained parallelism on two different computing systems. The first architecture is based on reconfigurable hardware (FPGAs) while the second architecture uses multiple GPUs connected to the same host. We show that both systems can achieve speedups of around four orders-of-magnitude compared to the sequential implementation. This significantly reduces the runtimes for detecting epistasis to only a few minutes for moderately-sized datasets and to a few hours for large-scale datasets.
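
    A brute-force sketch of the pairwise test being accelerated, assuming a chi-squared contingency test of joint genotype versus case/control status (scipy is assumed available; published tools use various test statistics). The FPGA and GPU implementations parallelize exactly this quadratic loop of independent per-pair tests.

```python
import numpy as np
from itertools import combinations
from scipy.stats import chi2_contingency

def pairwise_epistasis(genotypes, phenotype):
    """Brute-force 2-SNP interaction scan over all marker pairs.

    genotypes: (n_samples, n_snps) ints in {0, 1, 2} (minor-allele counts);
    phenotype: (n_samples,) ints in {0, 1} (control/case). Each pair yields a
    (joint genotype) x (case/control) contingency table.
    """
    pvalues = {}
    for i, j in combinations(range(genotypes.shape[1]), 2):
        cell = genotypes[:, i] * 3 + genotypes[:, j]   # joint genotype index 0..8
        table = np.zeros((9, 2))
        np.add.at(table, (cell, phenotype), 1.0)
        table = table[table.sum(axis=1) > 0]           # drop unobserved genotypes
        pvalues[(i, j)] = chi2_contingency(table)[1]
    return pvalues

rng = np.random.default_rng(3)
G = rng.integers(0, 3, size=(500, 6))
y = rng.integers(0, 2, size=500)
pvals = pairwise_epistasis(G, y)
print(min(pvals, key=pvals.get), min(pvals.values()))
```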

  13. X-ray beam hardening correction for measuring density in linear accelerator industrial computed tomography

    Institute of Scientific and Technical Information of China (English)

    ZHOU Ri-Feng; WANG Jue; CHEN Wei-Min

    2009-01-01

    Due to X-ray attenuation being approximately proportional to material density, it is possible to measure the inner density accurately through Industrial Computed Tomography (ICT) images. In practice, however, a number of factors, including the non-linear effects of beam hardening and diffuse scattered radiation, complicate the quantitative measurement of density variations in materials. This paper is based on the linearization method of beam hardening correction, and uses polynomial fitting coefficients obtained from the curvature of polychromatic beam data for iron to fit other materials. Through theoretical deduction, the paper shows that the density measurement error is less than 2% if pre-filters are used to confine the spectrum of the linear accelerator mainly to the range 0.3 MeV to 3 MeV. An experiment was set up on an ICT system with a 9 MeV electron linear accelerator, with satisfactory results. This technique makes the beam hardening correction easy and simple, and it is valuable for measuring density with ICT and for using the CT images to recognize materials.

  14. Strategic Adaptation

    DEFF Research Database (Denmark)

    Andersen, Torben Juul

    2015-01-01

    This article provides an overview of theoretical contributions that have influenced the discourse around strategic adaptation including contingency perspectives, strategic fit reasoning, decision structure, information processing, corporate entrepreneurship, and strategy process. The related concepts of strategic renewal, dynamic managerial capabilities, dynamic capabilities, and strategic response capabilities are discussed and contextualized against strategic responsiveness. The insights derived from this article are used to outline the contours of a dynamic process of strategic adaptation...

  15. A Fast GPU-accelerated Mixed-precision Strategy for Fully Nonlinear Water Wave Computations

    DEFF Research Database (Denmark)

    Glimberg, Stefan Lemvig; Engsig-Karup, Allan Peter; Madsen, Morten G.

    2011-01-01

    We present performance results of a mixed-precision strategy developed to improve a recently developed massively parallel GPU-accelerated tool for fast and scalable simulation of unsteady fully nonlinear free surface water waves over uneven depths (Engsig-Karup et al. 2011). The underlying wave model is based on a potential flow formulation, which requires efficient solution of a Laplace problem at large scales. We report recent results on a new mixed-precision strategy for efficient, iterative, high-order accurate, and scalable solution of the Laplace problem using a multigrid-preconditioned defect correction method. The strategy improves performance by exploiting architectural features of modern GPUs for mixed-precision computations and is tested in a recently developed generic library for fast prototyping of PDE solvers. The new wave tool is applicable to solve and analyze...

  16. Flexusi Interface Builder For Computer Based Accelerator Monitoring And Control System

    CERN Document Server

    Kurakin, V G; Kurakin, P V

    2004-01-01

    We have developed computer code for designing any desired graphical user interface for monitoring and control systems at the executable level. This means that an operator can build up a measurement console consisting of virtual devices before, or even during, a real experiment without recompiling source files. Such functionality brings a number of advantages compared with traditional programming. First of all, the risk of introducing bugs into the source code disappears. Another important point is that program developers and operator staff do not need to interact while developing the ultimate product (the measurement console). Thus, a small team without a detailed project plan can design even a very complicated monitoring and control system. For the reasons mentioned above, the suggested approach is especially helpful for large complexes to be monitored and controlled, accelerators being among them. The program code consists of several modules responsible for data acquisition, control, and representation. Borland C++ Builder technologies based on VCL...

  17. GPU-accelerated computational tool for studying the effectiveness of asteroid disruption techniques

    Science.gov (United States)

    Zimmerman, Ben J.; Wie, Bong

    2016-10-01

    This paper presents the development of a new Graphics Processing Unit (GPU) accelerated computational tool for asteroid disruption techniques. Numerical simulations are completed using the high-order spectral difference (SD) method. Due to the compact nature of the SD method, it is well suited for implementation with the GPU architecture, hence solutions are generated orders of magnitude faster than with the Central Processing Unit (CPU) counterpart. A multiphase model integrated with the SD method is introduced, and several asteroid disruption simulations are conducted, including kinetic-energy impactors, multi-kinetic energy impactor systems, and nuclear options. Results illustrate the benefits of using multi-kinetic energy impactor systems when compared to a single impactor system. In addition, the effectiveness of nuclear options is observed.

  18. An improved coarse-grained parallel algorithm for computational acceleration of ordinary Kriging interpolation

    Science.gov (United States)

    Hu, Hongda; Shu, Hong

    2015-05-01

    Heavy computation limits the use of Kriging interpolation methods in many real-time applications, especially with the ever-increasing problem size. Many researchers have realized that parallel processing techniques are critical to fully exploit computational resources and feasibly solve computation-intensive problems like Kriging. Much research has addressed the parallelization of the traditional approach to Kriging, but this computation-intensive procedure may not be suitable for high-resolution interpolation of spatial data. On the basis of a more effective serial approach, we propose an improved coarse-grained parallel algorithm to accelerate ordinary Kriging interpolation. In particular, the interpolation task of each unobserved point is considered as a basic parallel unit. To reduce time complexity and memory consumption, the large right-hand-side matrix in the Kriging linear system is transformed and fixed at only two columns, so that it is no longer directly dependent on the number of unobserved points. The MPI (Message Passing Interface) model is employed to implement our parallel programs in a homogeneous distributed memory system. Experimentally, the improved parallel algorithm performs better than the traditional one in spatial interpolation of annual average precipitation in Victoria, Australia. For example, when the number of processors is 24, the improved algorithm keeps speed-up at 20.8 while the speed-up of the traditional algorithm only reaches 9.3. Likewise, the weak scaling efficiency of the improved algorithm is nearly 90% while that of the traditional algorithm almost drops to 40% with 16 processors. Experimental results also demonstrate that the performance of the improved algorithm is enhanced by increasing the problem size.
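
    For reference, a compact ordinary Kriging sketch in the covariance formulation, with the Lagrange-multiplier system and one right-hand-side column per prediction point; the exponential covariance model and data are illustrative, and the paper's two-column right-hand-side transformation is not reproduced here.

```python
import numpy as np

def ordinary_kriging(X, y, X_new, cov):
    """Ordinary Kriging predictions at X_new from observations (X, y).

    Solves the standard OK system with a Lagrange multiplier that enforces
    the unbiasedness constraint (weights sum to one).
    """
    n = len(X)
    K = np.empty((n + 1, n + 1))
    K[:n, :n] = cov(np.linalg.norm(X[:, None] - X[None, :], axis=-1))
    K[n, :], K[:, n], K[n, n] = 1.0, 1.0, 0.0
    # One right-hand-side column per prediction point (covariances + constraint).
    rhs = np.empty((n + 1, len(X_new)))
    rhs[:n] = cov(np.linalg.norm(X[:, None] - X_new[None, :], axis=-1))
    rhs[n] = 1.0
    w = np.linalg.solve(K, rhs)          # weights and Lagrange multipliers
    return w[:n].T @ y

cov = lambda h: np.exp(-h / 50.0)        # exponential covariance, range 50
rng = np.random.default_rng(4)
X = rng.uniform(0, 100, size=(30, 2))    # 30 scattered observation points
y = np.sin(X[:, 0] / 20) + 0.1 * rng.normal(size=30)
X_new = rng.uniform(0, 100, size=(5, 2))
print(ordinary_kriging(X, y, X_new).round(3))
```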

  19. Strategic Implications for E-Business Organizations in the Ubiquitous Computing Economy

    Institute of Scientific and Technical Information of China (English)

    YUM Jihwan; KIM Hyoungdo

    2004-01-01

    The ubiquitous economy brings both pros and cons for organizations. The "third space" that has emerged from the development of ubiquitous computing generates a new concept of community, one tightly coupled with people, products, and systems. Organizational strategies need to be reshaped for the changing environment of the third space and its communities, and organizational structures also need to change toward community-serving organizations. A community-serving concept built on standardized technology will be essential. Among the key technologies, RFID services will play a central role in identification and in recognizing the services required. As the need for sensing the environment increases, technologies such as ubiquitous sensor networks (USN) will be critically needed.

  20. Efficient acceleration of mutual information computation for nonrigid registration using CUDA.

    Science.gov (United States)

    Ikeda, Kei; Ino, Fumihiko; Hagihara, Kenichi

    2014-05-01

    In this paper, we propose an efficient acceleration method for the nonrigid registration of multimodal images that uses a graphics processing unit. The key contribution of our method is efficient utilization of on-chip memory for both normalized mutual information (NMI) computation and hierarchical B-spline deformation, which compose a well-known registration algorithm. We implement this registration algorithm as a compute unified device architecture program with an efficient parallel scheme and several optimization techniques such as hierarchical data organization, data reuse, and multiresolution representation. We experimentally evaluate our method with four clinical datasets consisting of up to 512 × 512 × 296 voxels. We find that exploitation of on-chip memory achieves a 12-fold increase in speed over an off-chip memory version and, therefore, it increases the efficiency of parallel execution from 4% to 46%. We also find that our method running on a GeForce GTX 580 card is approximately 14 times faster than a fully optimized CPU-based implementation running on four cores. Some multimodal registration results are also provided to understand the limitation of our method. We believe that our highly efficient method, which completes an alignment task within a few tens of seconds, will be useful to realize rapid nonrigid registration.
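
    For readers unfamiliar with the similarity measure being accelerated, a plain NumPy version of normalized mutual information between two images might look as follows; the bin count and image sizes are arbitrary, and the paper's GPU kernels with on-chip memory optimizations are far more elaborate.

```python
# CPU sketch of normalized mutual information (NMI), the similarity
# measure the paper computes on the GPU (illustrative only).
import numpy as np

def nmi(a, b, bins=64):
    # Joint histogram of intensities -> joint and marginal probabilities.
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)

    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    # NMI = (H(A) + H(B)) / H(A, B); larger when images are better aligned.
    return (entropy(px) + entropy(py)) / entropy(pxy.ravel())

a = np.random.rand(128, 128)
print(nmi(a, a))   # identical images give the maximal value, 2.0
```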

  1. SecureMed: Secure Medical Computation using GPU-Accelerated Homomorphic Encryption Scheme.

    Science.gov (United States)

    Khedr, Alhassan; Gulak, Glenn

    2017-01-23

    Sharing the medical records of individuals among healthcare providers and researchers around the world can accelerate advances in medical research. While the idea seems increasingly practical due to cloud data services, maintaining patient privacy is of paramount importance. Standard encryption algorithms help protect sensitive data from outside attackers, but they cannot be used to compute on this sensitive data while it is encrypted. Homomorphic Encryption (HE) presents a very useful tool that can compute on encrypted data without the need to decrypt it. In this work, we describe an optimized NTRU-based implementation of the GSW homomorphic encryption scheme. Our results show a 58× improvement in CPU performance compared to other recent work on encrypted medical data under the same security settings. Our system is built to be easily portable to GPUs, resulting in an additional speedup of up to 104× (and 410×), for an overall speedup of 6085× (and 24011×) using a single GPU (or four GPUs), respectively.
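
    The homomorphic property being exploited can be illustrated with a much simpler, and at these key sizes completely insecure, textbook scheme. The sketch below uses additive Paillier encryption purely to show computation on encrypted values; it is not the NTRU-based GSW scheme of the paper.

```python
# Toy additively homomorphic encryption (textbook Paillier with tiny,
# insecure parameters). This is NOT the paper's NTRU-based GSW scheme;
# it only demonstrates computing on data that stays encrypted.
import math, random

p, q = 1789, 2003                  # toy primes; real keys use ~1024-bit primes
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = math.lcm(p - 1, q - 1)       # Python 3.9+
mu = pow(lam, -1, n)               # valid decryption constant because g = n + 1

def encrypt(m):
    r = random.randrange(1, n)     # (coprimality with n ignored in this toy)
    return pow(g, m, n2) * pow(r, n, n2) % n2

def decrypt(c):
    return (pow(c, lam, n2) - 1) // n * mu % n

c1, c2 = encrypt(20), encrypt(22)
c_sum = c1 * c2 % n2               # multiplying ciphertexts adds plaintexts
print(decrypt(c_sum))              # -> 42, computed without decrypting c1, c2
```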

  2. Computations of longitudinal electron dynamics in the recirculating cw RF accelerator-recuperator for the high average power FEL

    Science.gov (United States)

    Sokolov, A. S.; Vinokurov, N. A.

    1994-03-01

    The use of optimal longitudinal phase-energy motion conditions for bunched electrons in a recirculating RF accelerator makes it possible to increase the final electron peak current and, correspondingly, the FEL gain. The computer code RECFEL, developed to simulate the longitudinal compression of high-average-current electron bunches, which substantially load the cw RF cavities of the recirculator-recuperator, is briefly described and illustrated with some computational results.

  3. Computation of Material Demand in the Risk Assessment and Mitigation Framework for Strategic Materials (RAMF-SM) Process

    Science.gov (United States)

    2015-08-01

    Eleanor L. Schwartz; James S. Thomason, Project Leader. Institute for Defense Analyses, 4850 Mark Center Drive, Alexandria, VA.

  4. Accelerating groundwater flow simulation in MODFLOW using JASMIN-based parallel computing.

    Science.gov (United States)

    Cheng, Tangpei; Mo, Zeyao; Shao, Jingli

    2014-01-01

    To accelerate the groundwater flow simulation process, this paper reports our work on developing an efficient parallel simulator by rebuilding the well-known software MODFLOW on JASMIN (J Adaptive Structured Meshes applications Infrastructure). The rebuilding is achieved by designing patch-based data structures and parallel algorithms, and by adding slight modifications to the computational flow and subroutines of MODFLOW. Both the memory requirements and the computing effort are distributed among all processors, and, to reduce communication cost, data transfers are batched and conveniently handled by adding ghost nodes to each patch. To further improve performance, constant-head/inactive cells are tagged and neglected during the linear solve, and an efficient load-balancing strategy is presented. The accuracy and efficiency are demonstrated through three scenarios: The first application is a field flow problem located at Yanming Lake in China, used to help design a reasonable quantity of groundwater exploitation. Desirable numerical accuracy and significant performance enhancement are obtained; typically, the tagged program with the load-balancing strategy running on 40 cores is six times faster than the fastest MICCG-based MODFLOW program. The second test simulates flow in a highly heterogeneous aquifer; the AMG-based JASMIN program running on 40 cores is nine times faster than the GMG-based MODFLOW program. The third test is a simplified transient flow problem with on the order of tens of millions of cells, used to examine scalability. Compared to 32 cores, parallel efficiencies of 77% and 68% are obtained on 512 and 1024 cores, respectively, which indicates impressive scalability.
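
    The ghost-node exchange mentioned above is a standard halo pattern. A minimal mpi4py sketch of it, assuming a 1-D strip decomposition of a 2-D head field rather than JASMIN's patch infrastructure, could look like this.

```python
# mpi4py sketch of the ghost-node idea: a 1-D strip decomposition of a
# 2-D grid, where each rank exchanges one boundary row with its
# neighbours before a stencil update (illustrative, not JASMIN itself).
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

nx, ny = 64, 256
h = np.zeros((nx + 2, ny))        # local patch plus one ghost row on each side
h[1:-1, :] = rank                 # dummy head values

up = rank - 1 if rank > 0 else MPI.PROC_NULL
down = rank + 1 if rank < size - 1 else MPI.PROC_NULL

# Batched ghost-row exchange with both neighbours.
comm.Sendrecv(h[1, :].copy(), dest=up, recvbuf=h[-1, :], source=down)
comm.Sendrecv(h[-2, :].copy(), dest=down, recvbuf=h[0, :], source=up)

# A Jacobi-style stencil update can now read the ghost rows safely.
h[1:-1, 1:-1] = 0.25 * (h[:-2, 1:-1] + h[2:, 1:-1] + h[1:-1, :-2] + h[1:-1, 2:])
```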

  5. Users' guide for the Accelerated Leach Test Computer Program

    Energy Technology Data Exchange (ETDEWEB)

    Fuhrmann, M.; Heiser, J.H.; Pietrzak, R.; Franz, Eena-Mai; Colombo, P.

    1990-11-01

    This report is a step-by-step guide for the Accelerated Leach Test (ALT) Computer Program developed to accompany a new leach test for solidified waste forms. The program is designed to be used as a tool for performing the calculations necessary to analyze leach test data, a modeling program to determine if diffusion is the operating leaching mechanism (and, if not, to indicate other possible mechanisms), and a means to make extrapolations using the diffusion models. The ALT program contains four mathematical models that can be used to represent the data. The leaching mechanisms described by these models are: (1) diffusion through a semi-infinite medium (for low fractional releases), (2) diffusion through a finite cylinder (for high fractional releases), (3) diffusion plus partitioning of the source term, (4) solubility limited leaching. Results are presented as a graph containing the experimental data and the best-fit model curve. Results can also be output as LOTUS 1-2-3 files. 2 refs.
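
    For reference, the first of these models, diffusion from a semi-infinite medium, is commonly written as CFL = 2 (S/V) sqrt(De t / pi), where CFL is the cumulative fraction leached, S/V the surface-to-volume ratio and De the effective diffusivity. The sketch below fits that standard expression to synthetic data; it only illustrates the model shape and is not the ALT program itself.

```python
# Fit of the semi-infinite-medium diffusion model (standard sqrt-of-time
# form; the exact form coded in the ALT program may differ in detail).
import numpy as np
from scipy.optimize import curve_fit

S_over_V = 2.4   # specimen surface-to-volume ratio, 1/cm (assumed)

def cfl_semi_infinite(t, De):
    # Cumulative fraction leached; valid while the fraction is small.
    return 2.0 * S_over_V * np.sqrt(De * t / np.pi)

# Synthetic example data (days, fraction leached), not real leach results.
t = np.array([1, 2, 3, 4, 5, 7, 9, 11], float)
cfl = np.array([0.010, 0.014, 0.017, 0.020, 0.022, 0.026, 0.030, 0.033])

(De_fit,), _ = curve_fit(cfl_semi_infinite, t, cfl)
print(f"effective diffusivity De ~ {De_fit:.2e} cm^2/day (S/V assumed)")
```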

  6. Graphics Processing Unit-Accelerated Code for Computing Second-Order Wiener Kernels and Spike-Triggered Covariance.

    Science.gov (United States)

    Mano, Omer; Clark, Damon A

    2017-01-01

    Sensory neuroscience seeks to understand and predict how sensory neurons respond to stimuli. Nonlinear components of neural responses are frequently characterized by the second-order Wiener kernel and the closely-related spike-triggered covariance (STC). Recent advances in data acquisition have made it increasingly common and computationally intensive to compute second-order Wiener kernels/STC matrices. In order to speed up this sort of analysis, we developed a graphics processing unit (GPU)-accelerated module that computes the second-order Wiener kernel of a system's response to a stimulus. The generated kernel can be easily transformed for use in standard STC analyses. Our code speeds up such analyses by factors of over 100 relative to current methods that utilize central processing units (CPUs). It works on any modern GPU and may be integrated into many data analysis workflows. This module accelerates data analysis so that more time can be spent exploring parameter space and interpreting data.
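
    A bare-bones CPU version of the spike-triggered covariance computation (synthetic data, no GPU) helps make the accelerated quantity concrete.

```python
# Plain NumPy spike-triggered covariance (STC) on synthetic data; the
# paper's module computes the same quantity on the GPU, much faster.
import numpy as np

gen = np.random.default_rng(1)
T, D = 100_000, 40                  # time bins, stimulus history length
stim = gen.normal(size=(T, D))      # white-noise stimulus snippets
spikes = gen.poisson(0.05, size=T)  # synthetic spike counts per bin

n = spikes.sum()
sta = spikes @ stim / n             # spike-triggered average
centered = stim - sta
# STC: covariance of the spike-triggered stimulus ensemble.
stc = (centered * spikes[:, None]).T @ centered / (n - 1)

eigvals = np.linalg.eigvalsh(stc)   # spectrum reveals informative stimulus axes
print(stc.shape, eigvals[:3])
```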

  7. Computer simulations for a deceleration and radio frequency quadrupole instrument for accelerator ion beams

    Energy Technology Data Exchange (ETDEWEB)

    Eliades, J.A., E-mail: j.eliades@alum.utoronto.ca; Kim, J.K.; Song, J.H.; Yu, B.Y.

    2015-10-15

    Radio-frequency quadrupole (RFQ) technology incorporated into the low-energy ion beam line of an accelerator system can greatly broaden the range of applications and facilitate unique experimental capabilities. However, tens-of-keV kinetic energy negative ion beams with large emittances and energy spreads must first be decelerated down to <100 eV for ion–gas interactions, placing special demands on the deceleration optics and RFQ design. A system with large analyte transmission in the presence of gas has so far proven challenging. Presented are computer simulations using SIMION 8.1 for an ion deceleration and RFQ ion guide instrument design. The code included user-defined gas pressure gradients and threshold energies for ion–gas collisional losses. Results suggest a 3 mm diameter, 35 keV {sup 36}Cl{sup −} ion beam with an 8 eV full-width half-maximum Gaussian energy spread and 35 mrad angular divergence can be efficiently decelerated and then cooled in He gas, with a maximum pressure of 7 mTorr, to 2 eV within 450 mm in the RFQs. Vacuum transmissions were 100%. Ion energy distributions at initial RFQ capture are shown to be much larger than the average value expected from the deceleration potential, and this appears to be a general result arising from kinetic energy gain in the RFQ field. In these simulations, a potential for deceleration to 25 eV resulted in a 30 eV average energy distribution with a small fraction of ions >70 eV.

  8. Thinking Strategically.

    Science.gov (United States)

    Jeffress, Conway

    2000-01-01

    Asserts that community college leaders must think strategically and understand the difference between what is important and immediate, and what is strategic and essential to the long-term survival of a college. States that thinking strategically aligns decision-making and actions with the core purpose of the college; produces core competencies in…

  9. Strategic Priming with Multiple Antigens can Yield Memory Cell Phenotypes Optimized for Infection with Mycobacterium tuberculosis: A Computational Study

    Science.gov (United States)

    Ziraldo, Cordelia; Gong, Chang; Kirschner, Denise E.; Linderman, Jennifer J.

    2016-01-01

    Lack of an effective vaccine results in 9 million new cases of tuberculosis (TB) every year and 1.8 million deaths worldwide. Although many infants are vaccinated at birth with BCG (an attenuated M. bovis), this does not prevent infection or development of TB after childhood. Immune responses necessary for prevention of infection or disease are still unknown, making development of effective vaccines against TB challenging. Several new vaccines are ready for human clinical trials, but these trials are difficult and expensive; especially challenging is determining the appropriate cellular response necessary for protection. The magnitude of an immune response is likely key to generating a successful vaccine. Characteristics such as numbers of central memory (CM) and effector memory (EM) T cells responsive to a diverse set of epitopes are also correlated with protection. Promising vaccines against TB contain mycobacterial subunit antigens (Ag) present during both active and latent infection. We hypothesize that protection against different key immunodominant antigens could require a vaccine that produces different levels of EM and CM for each Ag-specific memory population. We created a computational model to explore EM and CM values, and their ratio, within what we term Memory Design Space. Our model captures events involved in T cell priming within lymph nodes and tracks their circulation through blood to peripheral tissues. We used the model to test whether multiple Ag-specific memory cell populations could be generated with distinct locations within Memory Design Space at a specific time point post vaccination. Boosting can further shift memory populations to memory cell ratios unreachable by initial priming events. By strategically varying antigen load, properties of cellular interactions within the LN, and delivery parameters (e.g., number of boosts) of multi-subunit vaccines, we can generate multiple Ag-specific memory populations that cover a wide range of

  10. FDTD Acceleration for Cylindrical Resonator Design Based on the Hybrid of Single and Double Precision Floating-Point Computation

    Directory of Open Access Journals (Sweden)

    Hasitha Muthumala Waidyasooriya

    2014-01-01

    Full Text Available Acceleration of FDTD (finite-difference time-domain) methods is very important for fields such as computational electromagnetic simulation. We consider an FDTD simulation model for cylindrical resonator design that requires double-precision floating-point arithmetic and cannot be done in single precision. Conventional FDTD acceleration methods share a common problem of memory-bandwidth limitation due to the large amount of parallel data access. To overcome this problem, we propose a hybrid single- and double-precision floating-point computation method that reduces the amount of data transferred. We analyze the characteristics of the FDTD simulation to find out where single precision can be used instead of double precision. According to the experimental results, we achieved a speed-up of over 15 times compared to a single-core CPU implementation and over 1.52 times compared to a conventional GPU-based implementation.
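
    The hybrid-precision idea, storing the large field arrays in single precision to cut memory traffic while doing the sensitive arithmetic in double, can be sketched on a toy 1-D FDTD update; the paper's partitioning of the cylindrical-resonator solver is more selective than this.

```python
# Sketch of hybrid precision on a toy 1-D FDTD: fields are stored as
# float32 (less memory traffic), updates are computed in float64.
import numpy as np

nx, nt = 2000, 500
ez = np.zeros(nx, dtype=np.float32)       # stored in single precision
hy = np.zeros(nx - 1, dtype=np.float32)
c = 0.5                                   # Courant number

for t in range(nt):
    # Promote to float64 for the update arithmetic, demote on store.
    hy += (c * (ez[1:].astype(np.float64)
                - ez[:-1].astype(np.float64))).astype(np.float32)
    ez[1:-1] += (c * (hy[1:].astype(np.float64)
                      - hy[:-1].astype(np.float64))).astype(np.float32)
    ez[nx // 2] += np.float32(np.exp(-((t - 30) / 10.0) ** 2))  # soft source

print(ez[:5])
```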

  11. Strategic Entrepreneurship

    DEFF Research Database (Denmark)

    Klein, Peter G.; Barney, Jay B.; Foss, Nicolai Juul

    Strategic entrepreneurship is a newly recognized field that draws, not surprisingly, from the fields of strategic management and entrepreneurship. The field emerged officially with the 2001 special issue of the Strategic Management Journal on “strategic entrepreneurship”; the first dedicated periodical, the Strategic Entrepreneurship Journal, appeared in 2007. Strategic entrepreneurship is built around two core ideas. (1) Strategy formulation and execution involves attributes that are fundamentally entrepreneurial, such as alertness, creativity, and judgment, and entrepreneurs try to create and capture value through resource acquisition and competitive positioning. (2) Opportunity-seeking and advantage-seeking—the former the central subject of the entrepreneurship field, the latter the central subject of the strategic management field—are processes that should be considered jointly. This entry…

  13. Performance analysis and acceleration of cross-correlation computation using FPGA implementation for digital signal processing

    Science.gov (United States)

    Selma, R.

    2016-09-01

    This paper describes a comparison of the cross-correlation computation speed of the most commonly used computation platforms (CPU, GPU) with an FPGA-based design. It also describes the structure of the cross-correlation unit implemented for testing purposes. A speedup of computations was achieved with the FPGA-based design, varying between 16 and 5400 times compared to CPU computations and between 3 and 175 times compared to GPU computations.
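
    As a point of reference for what the FPGA unit computes, a CPU implementation of circular cross-correlation via the correlation theorem takes only a few lines of NumPy.

```python
# FFT-based circular cross-correlation (O(n log n)), checked against the
# direct O(n^2) definition at one lag; a CPU reference for the quantity
# the FPGA design accelerates.
import numpy as np

gen = np.random.default_rng(2)
a, b = gen.normal(size=4096), gen.normal(size=4096)

# Correlation theorem: r[m] = sum_n a[n+m] * b[n] (circular).
xcorr_fft = np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b))).real

lag = 5
direct = np.sum(np.roll(a, -lag) * b)        # direct definition at lag 5
print(np.allclose(direct, xcorr_fft[lag]))   # True
```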

  14. Strategizing Communication

    DEFF Research Database (Denmark)

    Gulbrandsen, Ib Tunby; Just, Sine Nørholm

    Strategizing Communication offers a unique perspective on the theory and practice of strategic communication. Written for students and practitioners interested in learning about and acquiring tools for dealing with the technological, environmental and managerial challenges which organizations face when communicating in today’s mediascape, this book presents an array of theories, concepts and models through which we can understand and practice communication strategically. The core of the argument is in the title: strategizing – meaning the act of making something strategic. This entails looking… …not determine the success of strategic communication. Rather, contextual factors such as competition, technological developments, global cultural trends and local traditions as well as employees’ skills and attitudes will determine the organization’s communicative success. This holds true regardless…

  15. Observed differences in upper extremity forces, muscle efforts, postures, velocities and accelerations across computer activities in a field study of office workers

    NARCIS (Netherlands)

    Garza, J.L.B.; Eijckelhof, B.H.W.; Johnson, P.W.; Raina, S.M.; Rynell, P.W.; Huysmans, M.A.; Dieën, J.H. van; Beek, A.J. van der; Blatter, B.M.; Dennerlein, J.T.

    2012-01-01

    This study, a part of the PRedicting Occupational biomechanics in OFfice workers (PROOF) study, investigated whether there are differences in field-measured forces, muscle efforts, postures, velocities and accelerations across computer activities. These parameters were measured continuously for 120

  17. A computational approach for identifying the chemical factors involved in the glycosaminoglycans-mediated acceleration of amyloid fibril formation.

    Directory of Open Access Journals (Sweden)

    Elodie Monsellier

    Full Text Available BACKGROUND: Amyloid fibril formation is the hallmark of many human diseases, including Alzheimer's disease, type II diabetes and amyloidosis. Amyloid fibrils deposit in the extracellular space and generally co-localize with the glycosaminoglycans (GAGs) of the basement membrane. GAGs have been shown to accelerate the formation of amyloid fibrils in vitro for a number of protein systems. The large body of data accumulated so far has created the grounds for the construction of a database on the effects of a number of GAGs on different proteins. METHODOLOGY/PRINCIPAL FINDINGS: In this study, we have constructed such a database and have used a computational approach that combines single-parameter and multivariate analyses to identify the main chemical factors that determine the GAG-induced acceleration of amyloid formation. We show that the GAG accelerating effect is mainly governed by three parameters that account for three-fourths of the observed experimental variability: the GAG sulfation state, the solute molarity, and the ratio of protein and GAG molar concentrations. We then combined these three parameters into a single equation that predicts, with reasonable accuracy, the acceleration provided by a given GAG in a given condition. CONCLUSIONS/SIGNIFICANCE: In addition to shedding light on the chemical determinants of the protein:GAG interaction and to providing a novel mathematical predictive tool, our findings highlight the possibility that GAGs may not have such an accelerating effect on protein aggregation under the conditions existing in the basement membrane, given the values of salt molarity and protein:GAG molar ratio existing under such conditions.

  19. Strategic Supply

    Science.gov (United States)

    2003-01-01

  20. Accelerating statistical image reconstruction algorithms for fan-beam x-ray CT using cloud computing

    Science.gov (United States)

    Srivastava, Somesh; Rao, A. Ravishankar; Sheinin, Vadim

    2011-03-01

    Statistical image reconstruction algorithms potentially offer many advantages to x-ray computed tomography (CT), e.g. lower radiation dose. But, their adoption in practical CT scanners requires extra computation power, which is traditionally provided by incorporating additional computing hardware (e.g. CPU-clusters, GPUs, FPGAs etc.) into a scanner. An alternative solution is to access the required computation power over the internet from a cloud computing service, which is orders-of-magnitude more cost-effective. This is because users only pay a small pay-as-you-go fee for the computation resources used (i.e. CPU time, storage etc.), and completely avoid purchase, maintenance and upgrade costs. In this paper, we investigate the benefits and shortcomings of using cloud computing for statistical image reconstruction. We parallelized the most time-consuming parts of our application, the forward and back projectors, using MapReduce, the standard parallelization library on clouds. From preliminary investigations, we found that a large speedup is possible at a very low cost. But, communication overheads inside MapReduce can limit the maximum speedup, and a better MapReduce implementation might become necessary in the future. All the experiments for this paper, including development and testing, were completed on the Amazon Elastic Compute Cloud (EC2) for less than $20.

  1. The Role of Implicit Motives in Strategic Decision-Making: Computational Models of Motivated Learning and the Evolution of Motivated Agents

    Directory of Open Access Journals (Sweden)

    Kathryn Merrick

    2015-11-01

    Full Text Available Individual behavioral differences in humans have been linked to measurable differences in their mental activities, including differences in their implicit motives. In humans, individual differences in the strength of motives such as power, achievement and affiliation have been shown to have a significant impact on behavior in social dilemma games and during other kinds of strategic interactions. This paper presents agent-based computational models of power-, achievement- and affiliation-motivated individuals engaged in game-play. The first model captures learning by motivated agents during strategic interactions. The second model captures the evolution of a society of motivated agents. It is demonstrated that misperception, when it is a result of motivation, causes agents with different motives to play a given game differently. When motivated agents who misperceive a game are present in a population, higher explicit payoff can result for the population as a whole. The implications of these results are discussed, both for modeling human behavior and for designing artificial agents with certain salient behavioral characteristics.

  2. Study of irradiation induced restructuring of high burnup fuel - Use of computer and accelerator for fuel science and engineering -

    Energy Technology Data Exchange (ETDEWEB)

    Sataka, M.; Ishikawa, N.; Chimn, Y.; Nakamura, J.; Amaya, M. [Japan Atomic Energy Agency, Naka Gun (Japan); Iwasawa, M.; Ohnuma, T.; Sonoda, T. [Central Research Institute of Electric Power Industry, Tokyo (Japan); Kinoshita, M.; Geng, H. Y.; Chen, Y.; Kaneta, Y. [The Univ. of Tokyo, Tokyo (Japan); Yasunaga, K.; Matsumura, S.; Yasuda, K. [Kyushu Univ., Motooka (Japan); Iwase [Osaka Prefecture Univ., Osaka (Japan); Ichinomiya, T.; Nishiuran, Y. [Hokkaido Univ., Kitaku (Japan); Matzke, HJ. [Academy of Ceramics, Karlsruhe (Germany)

    2008-10-15

    In order to develop advanced fuel for future LWR reactors, trials were made to simulate the high-burnup restructuring of ceramic fuel using accelerator irradiation outside the reactor together with computer simulation. The target is to reproduce the principal complex process as a whole. The reproduction of grain subdivision (sub-grain formation) was successful in experiments with sequential combined irradiation; it proceeded by recovery of the accumulated dislocations, forming cells and sub-boundaries at grain boundaries and pore surfaces. Details of the grain subdivision mechanism are thus now accessible outside the reactor. Extensive computational studies, first-principles and molecular dynamics, gave the behavior of fission gas atoms and interstitial oxygen, supporting the study of high-burnup restructuring.

  3. Computation of thermal properties via 3D homogenization of multiphase materials using FFT-based accelerated scheme

    CERN Document Server

    Lemaitre, Sophie; Choi, Daniel; Karamian, Philippe

    2015-01-01

    In this paper we study the effective thermal behaviour of a 3D multiphase composite material consisting of three isotropic phases: the matrix, the inclusions and the coating medium. For this purpose we use an accelerated FFT-based scheme, initially proposed by Eyre and Milton (1999), to evaluate the thermal conductivity tensor. The matrix and spherical-inclusion media are polymers with similar properties, whereas the coating medium is metallic and hence a much better conductor; the contrast between the coating and the other media is therefore very large. For our study, we use RVEs (representative volume elements) generated by the RSA (random sequential adsorption) method developed in our previous works. We then compute effective thermal properties using an FFT-based homogenization technique, validated by comparison with the direct finite element method. We study the thermal behaviour of the 3D multiphase composite material and show which features should be taken into account to make the computational approach efficient.

  4. [Series: Medical Applications of the PHITS Code (2): Acceleration by Parallel Computing].

    Science.gov (United States)

    Furuta, Takuya; Sato, Tatsuhiko

    2015-01-01

    Time-consuming Monte Carlo dose calculations have become feasible owing to the development of computer technology. However, the recent gains are due to the emergence of multi-core high-performance computers, so parallel computing has become the key to achieving good software performance. The Monte Carlo simulation code PHITS contains two parallel computing functions: distributed-memory parallelization using the Message Passing Interface (MPI) protocol and shared-memory parallelization using open multi-processing (OpenMP) directives. Users can choose between the two functions according to their needs. This paper explains the two functions, with their advantages and disadvantages. Some test applications are also provided to show their performance on a typical multi-core high-performance workstation.
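
    The distributed-memory flavour of parallelism described here can be illustrated with a toy Monte Carlo estimate under mpi4py; this is generic MPI usage, not PHITS code (the OpenMP analog would be threads sharing one address space).

```python
# Distributed-memory Monte Carlo in the MPI style: each rank runs an
# independent batch of histories and a reduction combines the tallies
# (toy pi estimate, not PHITS itself).
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_local = 1_000_000
gen = np.random.default_rng(seed=rank)        # independent stream per rank
xy = gen.random((n_local, 2))
hits_local = np.count_nonzero((xy ** 2).sum(axis=1) < 1.0)

hits = comm.reduce(hits_local, op=MPI.SUM, root=0)
if rank == 0:
    print("pi ~", 4.0 * hits / (n_local * size))
```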

  5. GPU-accelerated computing for Lagrangian coherent structures of multi-body gravitational regimes

    Science.gov (United States)

    Lin, Mingpei; Xu, Ming; Fu, Xiaoyu

    2017-04-01

    Based on a well-established theoretical foundation, Lagrangian Coherent Structures (LCSs) have elicited widespread research on the intrinsic structures of dynamical systems in many fields, including the field of astrodynamics. Although the application of LCSs in dynamical problems seems straightforward theoretically, its associated computational cost is prohibitive. We propose a block decomposition algorithm developed on Compute Unified Device Architecture (CUDA) platform for the computation of the LCSs of multi-body gravitational regimes. In order to take advantage of GPU's outstanding computing properties, such as Shared Memory, Constant Memory, and Zero-Copy, the algorithm utilizes a block decomposition strategy to facilitate computation of finite-time Lyapunov exponent (FTLE) fields of arbitrary size and timespan. Simulation results demonstrate that this GPU-based algorithm can satisfy double-precision accuracy requirements and greatly decrease the time needed to calculate final results, increasing speed by approximately 13 times. Additionally, this algorithm can be generalized to various large-scale computing problems, such as particle filters, constellation design, and Monte-Carlo simulation.
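
    Per grid point, the FTLE field at the heart of LCS extraction reduces to the largest eigenvalue of the Cauchy-Green tensor of the flow map. A toy NumPy version with an analytic saddle flow is sketched below; a real code, like the one in the paper, integrates trajectories numerically and evaluates this on the GPU.

```python
# NumPy sketch of the finite-time Lyapunov exponent (FTLE) computation:
# finite-difference the flow map, form the Cauchy-Green tensor, take the
# largest eigenvalue (toy linear saddle flow with an analytic flow map).
import numpy as np

T = 2.0
x = y = np.linspace(-1, 1, 201)
X, Y = np.meshgrid(x, y, indexing="ij")

# Flow map of dx/dt = x, dy/dt = -y over time T.
FX, FY = X * np.exp(T), Y * np.exp(-T)

dFXdx, dFXdy = np.gradient(FX, x, y)
dFYdx, dFYdy = np.gradient(FY, x, y)

# Cauchy-Green tensor C = (dF)^T dF, per grid point (2x2, closed form).
c11 = dFXdx**2 + dFYdx**2
c12 = dFXdx * dFXdy + dFYdx * dFYdy
c22 = dFXdy**2 + dFYdy**2
lmax = 0.5 * (c11 + c22 + np.sqrt((c11 - c22) ** 2 + 4 * c12**2))

ftle = np.log(np.sqrt(lmax)) / abs(T)
print(ftle.mean())   # ~1 everywhere for this uniform saddle flow
```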

  6. Intro - High Performance Computing for 2015 HPC Annual Report

    Energy Technology Data Exchange (ETDEWEB)

    Klitsner, Tom [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-10-01

    The recent Executive Order creating the National Strategic Computing Initiative (NSCI) recognizes the value of high performance computing for economic competitiveness and scientific discovery and commits to accelerate delivery of exascale computing. The HPC programs at Sandia –the NNSA ASC program and Sandia’s Institutional HPC Program– are focused on ensuring that Sandia has the resources necessary to deliver computation in the national interest.

  7. Accelerating Dust Storm Simulation by Balancing Task Allocation in Parallel Computing Environment

    Science.gov (United States)

    Gui, Z.; Yang, C.; XIA, J.; Huang, Q.; YU, M.

    2013-12-01

    Dust storms have serious negative impacts on the environment, human health, and assets. Continuing global climate change has increased the frequency and intensity of dust storms in the past decades. To better understand and predict the distribution, intensity and structure of dust storms, a series of dust storm models have been developed, such as the Dust Regional Atmospheric Model (DREAM), the NMM meteorological module (NMM-dust) and the Chinese Unified Atmospheric Chemistry Environment for Dust (CUACE/Dust). The development and application of these models have contributed significantly to both scientific research and our daily life. However, dust storm simulation is a data- and computing-intensive process. Normally, a simulation for a single dust storm event may take several hours or days to run, which seriously impacts the timeliness of prediction and potential applications. To speed up the process, high performance computing is widely adopted. By partitioning a large study area into small subdomains according to their geographic location and executing them on different computing nodes in a parallel fashion, the computing performance can be significantly improved. Since spatiotemporal correlations exist in the geophysical process of dust storm simulation, each subdomain allocated to a node needs to communicate with geographically adjacent subdomains to exchange data. Inappropriate allocations may introduce imbalanced task loads and unnecessary communications among computing nodes. Therefore, the task allocation method is the key factor that may impact the feasibility of the parallelization. The allocation algorithm needs to carefully balance the computing cost and communication cost for each computing node to minimize total execution time and reduce overall communication cost for the entire system. This presentation introduces two algorithms for such allocation and compares them with an evenly distributed allocation method. Specifically, 1) In order to get optimized solutions, a

  8. Strategic Responsiveness

    DEFF Research Database (Denmark)

    Pedersen, Carsten; Juul Andersen, Torben

    The analysis of major resource-committing decisions is a central focus in the strategy field, but despite decades of rich conceptual and empirical research we still seem distant from a level of understanding that can guide corporate practices under dynamic and unpredictable conditions. Strategic decision making is often conceived as ‘standing on the two feet’ of deliberate or intended strategic decisions by top management and emergent strategic decisions pursued by lower-level managers and employees. In this view, the paper proposes that bottom-up initiatives have a hard time surfacing in hierarchical organizations and that lower-level managers and employees, therefore, pursue various strategies to bypass the official strategy processes to act on emerging strategic issues and adapt to changing environmental conditions.

  9. Strategic Forecasting

    DEFF Research Database (Denmark)

    Duus, Henrik Johannsen

    2016-01-01

    Purpose: The purpose of this article is to present an overview of the area of strategic forecasting and its research directions and to put forward some ideas for improving management decisions. Design/methodology/approach: This article is conceptual but also informed by the author’s long contact...... and collaboration with various business firms. It starts by presenting an overview of the area and argues that the area is as much a way of thinking as a toolbox of theories and methodologies. It then spells out a number of research directions and ideas for management. Findings: Strategic forecasting is seen...... as a rebirth of long range planning, albeit with new methods and theories. Firms should make the building of strategic forecasting capability a priority. Research limitations/implications: The article subdivides strategic forecasting into three research avenues and suggests avenues for further research efforts...

  10. Accelerating the discovery of space-time patterns of infectious diseases using parallel computing.

    Science.gov (United States)

    Hohl, Alexander; Delmelle, Eric; Tang, Wenwu; Casas, Irene

    2016-11-01

    Infectious diseases have complex transmission cycles, and effective public health responses require the ability to monitor outbreaks in a timely manner. Space-time statistics facilitate the discovery of disease dynamics including rate of spread and seasonal cyclic patterns, but are computationally demanding, especially for datasets of increasing size, diversity and availability. High-performance computing reduces the effort required to identify these patterns, however heterogeneity in the data must be accounted for. We develop an adaptive space-time domain decomposition approach for parallel computation of the space-time kernel density. We apply our methodology to individual reported dengue cases from 2010 to 2011 in the city of Cali, Colombia. The parallel implementation reaches significant speedup compared to sequential counterparts. Density values are visualized in an interactive 3D environment, which facilitates the identification and communication of uneven space-time distribution of disease events. Our framework has the potential to enhance the timely monitoring of infectious diseases.
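
    A serial NumPy sketch of a space-time kernel density estimate (a product of spatial and temporal kernels summed over events, with assumed Epanechnikov-type kernels and synthetic data) shows the quantity whose parallel computation is described above.

```python
# Space-time kernel density estimate (STKDE) at a single query point;
# synthetic events and assumed kernels, for illustration only.
import numpy as np

gen = np.random.default_rng(3)
ev_xy = gen.uniform(0, 10, (500, 2))     # event locations (km, synthetic)
ev_t = gen.uniform(0, 365, 500)          # event times (days, synthetic)
hs, ht = 1.0, 14.0                       # spatial / temporal bandwidths

def stkde(x, y, t):
    du2 = ((ev_xy[:, 0] - x) ** 2 + (ev_xy[:, 1] - y) ** 2) / hs**2
    dt2 = ((ev_t - t) / ht) ** 2
    ks = np.where(du2 < 1, 2 / np.pi * (1 - du2), 0.0)   # 2-D Epanechnikov
    kt = np.where(dt2 < 1, 0.75 * (1 - dt2), 0.0)        # 1-D Epanechnikov
    return np.sum(ks * kt) / (len(ev_t) * hs**2 * ht)

# A full density volume evaluates this over an (x, y, t) grid, which is
# exactly the part worth decomposing across processors.
print(stkde(5.0, 5.0, 180.0))
```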

  11. Accelerating selected columns of the density matrix computations via approximate column selection

    CERN Document Server

    Damle, Anil; Ying, Lexing

    2016-01-01

    Localized representation of the Kohn-Sham subspace plays an important role in quantum chemistry and materials science. The recently developed selected columns of the density matrix (SCDM) method [J. Chem. Theory Comput. 11, 1463, 2015] is a simple and robust procedure for finding a localized representation of a set of Kohn-Sham orbitals from an insulating system. The SCDM method allows the direct construction of a well conditioned (or even orthonormal) and localized basis for the Kohn-Sham subspace. The SCDM procedure avoids the use of an optimization procedure and does not depend on any adjustable parameters. The most computationally expensive step of the SCDM method is a column pivoted QR factorization that identifies the important columns for constructing the localized basis set. In this paper, we develop a two stage approximate column selection strategy to find the important columns at much lower computational cost. We demonstrate the effectiveness of this process using a dissociation process of a BH$_{3}...
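
    The column-pivoted QR at the heart of SCDM is available directly in SciPy; a toy sketch on a random orthonormal "subspace" (not a real Kohn-Sham calculation) looks like this.

```python
# Column selection via column-pivoted QR, the expensive SCDM kernel that
# the paper's two-stage strategy approximates (toy data, not real orbitals).
import numpy as np
from scipy.linalg import qr

gen = np.random.default_rng(4)
Psi = np.linalg.qr(gen.normal(size=(500, 8)))[0]   # orthonormal "orbitals"
P = Psi @ Psi.T                                    # density-matrix columns

k = Psi.shape[1]
_, _, piv = qr(P, mode="economic", pivoting=True)
print("selected columns:", piv[:k])                # k most representative columns
```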

  12. Acceleration of FEM-based transfer matrix computation for forward and inverse problems of electrocardiography.

    Science.gov (United States)

    Farina, Dmytro; Jiang, Y; Dössel, O

    2009-12-01

    The distributions of transmembrane voltage (TMV) within the cardiac tissue are linearly related to the patient's body surface potential maps (BSPMs) at every time instant. The matrix describing the relation between the respective distributions is referred to as the transfer matrix. This matrix can be employed to carry out forward calculations, i.e., to find the BSPM for any given distribution of TMV inside the heart. Its inverse can be used to reconstruct cardiac activity non-invasively, which can be an important diagnostic tool in clinical practice. The computation of this matrix using the finite element method can be quite time-consuming. In this work, a method is proposed that speeds up this process by computing an approximate transfer matrix instead of the precise one. The method is tested on three realistic anatomical models of real-world patients. It is shown that the computation time can be reduced by 50% without loss of accuracy.

  13. PIC codes for plasma accelerators on emerging computer architectures (GPUS, Multicore/Manycore CPUS)

    Science.gov (United States)

    Vincenti, Henri

    2016-03-01

    The advent of exascale computers will enable 3D simulations of new laser-plasma interaction regimes that were previously out of reach of current Petascale computers. However, the paradigm used to write current PIC codes will have to change in order to fully exploit the potential of these new computing architectures. Indeed, achieving Exascale computing facilities in the next decade will be a great challenge in terms of energy consumption and will imply hardware developments directly impacting our way of implementing PIC codes. As data movement (from die to network) is by far the most energy-consuming part of an algorithm, future computers will tend to increase memory locality at the hardware level and reduce energy consumption related to data movement by using more and more cores on each compute node ("fat nodes") with a reduced clock speed to allow for efficient cooling. To compensate for the frequency decrease, CPU vendors are making use of long SIMD instruction registers that are able to process multiple data with one arithmetic operator in one clock cycle. SIMD register length is expected to double every four years. GPUs also have a reduced clock speed per core and can process Multiple Instructions on Multiple Data (MIMD). At the software level, Particle-In-Cell (PIC) codes will thus have to achieve both good memory locality and vectorization (for multicore/manycore CPUs) to take full advantage of these upcoming architectures. In this talk, we present the portable solutions we implemented in our high-performance skeleton PIC code PICSAR to achieve both good memory locality and cache reuse as well as good vectorization on SIMD architectures. We also present the portable solutions used to parallelize the pseudo-spectral quasi-cylindrical code FBPIC on GPUs using the Numba python compiler.

  14. Strategizing Communication

    DEFF Research Database (Denmark)

    Gulbrandsen, Ib Tunby; Just, Sine Nørholm

    Against the backdrop of the comprehensive changes to communication in and about organizations brought about by the rise of digital communication technologies and related contextual developments, Strategizing Communication provides better and more up to date tools… …and less on the plan to communicate. …not determine the success of strategic communication. Rather, contextual factors such as competition, technological developments, global cultural trends and local traditions as well as employees’ skills and attitudes will determine the organization’s communicative success. This holds true regardless…

  15. LCODE: A parallel quasistatic code for computationally heavy problems of plasma wakefield acceleration

    Energy Technology Data Exchange (ETDEWEB)

    Sosedkin, A.P.; Lotov, K.V. [Budker Institute of Nuclear Physics SB RAS, 630090 Novosibirsk (Russian Federation); Novosibirsk State University, 630090 Novosibirsk (Russian Federation)

    2016-09-01

    LCODE is a freely distributed quasistatic 2D3V code for simulating plasma wakefield acceleration, mainly specialized at resource-efficient studies of long-term propagation of ultrarelativistic particle beams in plasmas. The beam is modeled with fully relativistic macro-particles in a simulation window copropagating with the light velocity; the plasma can be simulated with either kinetic or fluid model. Several techniques are used to obtain exceptional numerical stability and precision while maintaining high resource efficiency, enabling LCODE to simulate the evolution of long particle beams over long propagation distances even on a laptop. A recent upgrade enabled LCODE to perform the calculations in parallel. A pipeline of several LCODE processes communicating via MPI (Message‐Passing Interface) is capable of executing multiple consecutive time steps of the simulation in a single pass. This approach can speed up the calculations by hundreds of times.

  16. LCODE: A parallel quasistatic code for computationally heavy problems of plasma wakefield acceleration

    Science.gov (United States)

    Sosedkin, A. P.; Lotov, K. V.

    2016-09-01

    LCODE is a freely distributed quasistatic 2D3V code for simulating plasma wakefield acceleration, mainly specialized at resource-efficient studies of long-term propagation of ultrarelativistic particle beams in plasmas. The beam is modeled with fully relativistic macro-particles in a simulation window copropagating with the light velocity; the plasma can be simulated with either kinetic or fluid model. Several techniques are used to obtain exceptional numerical stability and precision while maintaining high resource efficiency, enabling LCODE to simulate the evolution of long particle beams over long propagation distances even on a laptop. A recent upgrade enabled LCODE to perform the calculations in parallel. A pipeline of several LCODE processes communicating via MPI (Message-Passing Interface) is capable of executing multiple consecutive time steps of the simulation in a single pass. This approach can speed up the calculations by hundreds of times.

  17. ActiWiz – optimizing your nuclide inventory at proton accelerators with a computer code

    CERN Document Server

    Vincke, Helmut

    2014-01-01

    When operating an accelerator one always faces unwanted, but inevitable beam losses. These result in activation of adjacent material, which in turn has an obvious impact on safety and handling constraints. One of the key parameters responsible for activation is the chemical composition of the material which often can be optimized in that respect. In order to facilitate this task also for non-expert users the ActiWiz software has been developed at CERN. Based on a large amount of generic FLUKA Monte Carlo simulations the software applies a specifically developed risk assessment model to provide support to decision makers especially during the design phase as well as common operational work in the domain of radiation protection.

  18. LCODE: a parallel quasistatic code for computationally heavy problems of plasma wakefield acceleration

    CERN Document Server

    Sosedkin, Alexander

    2015-01-01

    LCODE is a freely-distributed quasistatic 2D3V code for simulating plasma wakefield acceleration, mainly specialized at resource-efficient studies of long-term propagation of ultrarelativistic particle beams in plasmas. The beam is modeled with fully relativistic macro-particles in a simulation window copropagating with the light velocity; the plasma can be simulated with either kinetic or fluid model. Several techniques are used to obtain exceptional numerical stability and precision while maintaining high resource efficiency, enabling LCODE to simulate the evolution of long particle beams over long propagation distances even on a laptop. A recent upgrade enabled LCODE to perform the calculations in parallel. A pipeline of several LCODE processes communicating via MPI (Message-Passing Interface) is capable of executing multiple consecutive time steps of the simulation in a single pass. This approach can speed up the calculations by hundreds of times.

  19. Design Tools for Accelerating Development and Usage of Multi-Core Computing Platforms

    Science.gov (United States)

    2014-04-01

    …e.g., see [Bhattacharyya 2013]. Through their connections to computation graphs [Karp 1966] and Kahn process networks [Kahn 1974, Lee 1995]…

  20. Accelerating inference for diffusions observed with measurement error and large sample sizes using approximate Bayesian computation

    DEFF Research Database (Denmark)

    Picchini, Umberto; Forman, Julie Lyng

    2016-01-01

    In recent years, dynamical modelling has been provided with a range of breakthrough methods to perform exact Bayesian inference. However, it is often computationally unfeasible to apply exact statistical methodologies in the context of large data sets and complex models. This paper considers a nonlinear stochastic differential equation model observed with correlated measurement errors and an application to protein folding modelling. An approximate Bayesian computation (ABC)-MCMC algorithm is suggested to allow inference for model parameters within reasonable time constraints. The ABC algorithm… applications. A simulation study is conducted to compare our strategy with exact Bayesian inference, the latter being two orders of magnitude slower than ABC-MCMC for the considered set-up. Finally, the ABC algorithm is applied to a large protein dataset. The suggested methodology is fairly general…
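
    The flavour of ABC is easiest to see in its simplest rejection form. The sketch below, a toy Ornstein-Uhlenbeck-like model with measurement error and assumed summary statistics and tolerance, is a simplified relative of the ABC-MCMC scheme in the paper, not the paper's algorithm.

```python
# Minimal ABC rejection sketch: accept prior draws whose simulated
# summary statistics land close to the observed ones (toy model).
import numpy as np

gen = np.random.default_rng(5)

def simulate(theta, n=200, dt=0.05):
    x = np.zeros(n)
    for i in range(1, n):   # Euler-Maruyama for dX = -theta * X dt + dW
        x[i] = x[i - 1] - theta * x[i - 1] * dt + np.sqrt(dt) * gen.normal()
    return x + 0.1 * gen.normal(size=n)      # additive measurement error

obs = simulate(1.5)                          # pretend-observed data
s_obs = np.array([obs.std(), np.corrcoef(obs[:-1], obs[1:])[0, 1]])

accepted = []
for _ in range(20_000):
    theta = gen.uniform(0.1, 5.0)            # prior draw
    x = simulate(theta)
    s = np.array([x.std(), np.corrcoef(x[:-1], x[1:])[0, 1]])
    if np.linalg.norm(s - s_obs) < 0.1:      # tolerance epsilon (assumed)
        accepted.append(theta)

print(len(accepted), "draws; approximate posterior mean ~", np.mean(accepted))
```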

  1. Computer-aided molecular design of solvents for accelerated reaction kinetics.

    Science.gov (United States)

    Struebing, Heiko; Ganase, Zara; Karamertzanis, Panagiotis G; Siougkrou, Eirini; Haycock, Peter; Piccione, Patrick M; Armstrong, Alan; Galindo, Amparo; Adjiman, Claire S

    2013-11-01

    Solvents can significantly alter the rates and selectivity of liquid-phase organic reactions, often hindering the development of new synthetic routes or, if chosen wisely, facilitating routes by improving rates and selectivities. To address this challenge, a systematic methodology is proposed that quickly identifies improved reaction solvents by combining quantum mechanical computations of the reaction rate constant in a few solvents with a computer-aided molecular design (CAMD) procedure. The approach allows the identification of a high-performance solvent within a very large set of possible molecules. The validity of our CAMD approach is demonstrated through application to a classical nucleophilic substitution reaction for the study of solvent effects, the Menschutkin reaction. The results were validated successfully by in situ kinetic experiments. A space of 1,341 solvents was explored in silico, but required quantum-mechanical calculations of the rate constant in only nine solvents, and uncovered a solvent that increases the rate constant by 40%.

  2. Prediction of peak ground acceleration of Iran's tectonic regions using a hybrid soft computing technique

    Directory of Open Access Journals (Sweden)

    Mostafa Gandomi

    2016-01-01

    Full Text Available A new model is derived to predict the peak ground acceleration (PGA) utilizing a hybrid method coupling an artificial neural network (ANN) and simulated annealing (SA), called SA-ANN. The proposed model relates PGA to earthquake source-to-site distance, earthquake magnitude, average shear-wave velocity, faulting mechanism, and focal depth. A database of strong ground-motion recordings of 36 earthquakes that occurred in Iran's tectonic regions is used to establish the model. For further validation, the SA-ANN model is employed to predict the PGA of a part of the database beyond the training data domain. The proposed SA-ANN model is compared with a simple ANN in addition to 10 well-known models proposed in the literature. The performance of the proposed model is superior to that of the single ANN and the other existing attenuation models. The SA-ANN model is highly correlated to the actual records (R = 0.835 and ρ = 0.0908), and it is subsequently converted into a tractable design equation.

  3. Jacobian-free Newton-Krylov methods with GPU acceleration for computing nonlinear ship wave patterns

    CERN Document Server

    Pethiyagoda, Ravindra; Moroney, Timothy J; Back, Julian M

    2014-01-01

    The nonlinear problem of steady free-surface flow past a submerged source is considered as a case study for three-dimensional ship wave problems. Of particular interest is the distinctive wedge-shaped wave pattern that forms on the surface of the fluid. By reformulating the governing equations with a standard boundary-integral method, we derive a system of nonlinear algebraic equations that enforce a singular integro-differential equation at each midpoint on a two-dimensional mesh. Our contribution is to solve the system of equations with a Jacobian-free Newton-Krylov method together with a banded preconditioner that is carefully constructed with entries taken from the Jacobian of the linearised problem. Further, we are able to utilise graphics processing unit acceleration to significantly increase the grid refinement and decrease the run-time of our solutions in comparison to schemes that are presently employed in the literature. Our approach provides opportunities to explore the nonlinear features of three-...
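
    SciPy ships a Jacobian-free Newton-Krylov solver, so the matrix-free strategy can be demonstrated on a toy nonlinear boundary-value problem; the banded preconditioner and GPU acceleration of the paper are extra layers on top of this pattern.

```python
# Jacobian-free Newton-Krylov on a toy nonlinear two-point boundary-value
# problem (u'' = 0.01 * exp(u), Dirichlet ends); illustrates the
# matrix-free strategy, not the authors' free-surface solver.
import numpy as np
from scipy.optimize import newton_krylov

def residual(u):
    r = np.empty_like(u)
    r[0], r[-1] = u[0], u[-1]                      # boundary conditions
    r[1:-1] = u[:-2] - 2 * u[1:-1] + u[2:] - 0.01 * np.exp(u[1:-1])
    return r

u0 = np.zeros(200)
sol = newton_krylov(residual, u0, method="lgmres", f_tol=1e-8)
print(np.abs(residual(sol)).max())   # converged without ever forming a Jacobian
```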

  4. Accelerating root system phenotyping of seedlings through a computer-assisted processing pipeline.

    Science.gov (United States)

    Dupuy, Lionel X; Wright, Gladys; Thompson, Jacqueline A; Taylor, Anna; Dekeyser, Sebastien; White, Christopher P; Thomas, William T B; Nightingale, Mark; Hammond, John P; Graham, Neil S; Thomas, Catherine L; Broadley, Martin R; White, Philip J

    2017-01-01

    There are numerous systems and techniques to measure the growth of plant roots. However, phenotyping large numbers of plant roots for breeding and genetic analyses remains challenging. One major difficulty is to achieve high throughput and resolution at a reasonable cost per plant sample. Here we describe a cost-effective root phenotyping pipeline, on which we perform time and accuracy benchmarking to identify bottlenecks in such pipelines and strategies for their acceleration. Our root phenotyping pipeline was assembled with custom software and low cost material and equipment. Results show that sample preparation and handling of samples during screening are the most time consuming task in root phenotyping. Algorithms can be used to speed up the extraction of root traits from image data, but when applied to large numbers of images, there is a trade-off between time of processing the data and errors contained in the database. Scaling-up root phenotyping to large numbers of genotypes will require not only automation of sample preparation and sample handling, but also efficient algorithms for error detection for more reliable replacement of manual interventions.

  5. Adjacency-Based Data Reordering Algorithm for Acceleration of Finite Element Computations

    Directory of Open Access Journals (Sweden)

    Min Zhou

    2010-01-01

    Full Text Available Effective use of the processor memory hierarchy is an important issue in high performance computing. In this work, a part-level mesh topological traversal algorithm is used to define a reordering of both mesh vertices and regions that increases the spatial locality of data and improves overall cache utilization during on-processor finite element calculations. Examples based on adaptively created unstructured meshes are considered to demonstrate the effectiveness of the procedure in cases where the load per processing core is varied but balanced (e.g., elements are equally distributed across cores for a given partition). In one example, the effect of the current adjacency-based data reordering is studied for different phases of an implicit analysis, including element-data blocking, element-level computations, sparse-matrix filling and equation solution. These results are compared to a case where reordering is applied to mesh vertices only. The computations are performed on various supercomputers, including IBM Blue Gene (BG/L and BG/P), Cray XT (XT3 and XT5) and Sun Constellation Cluster systems. It is observed that reordering improves the per-core performance by up to 24% on Blue Gene/L and up to 40% on Cray XT5. The CrayPat hardware performance tool is used to measure the number of cache misses across each level of the memory hierarchy. It is determined that the measured decrease in L1, L2 and L3 cache misses when data reordering is used closely accounts for the observed decrease in the overall execution time.
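
    A close relative of the adjacency-based reordering studied here is reverse Cuthill-McKee, which SciPy provides directly. The sketch below shows the resulting bandwidth reduction on a random sparse adjacency; it is illustrative only, since the paper's traversal is mesh-topology specific.

```python
# Locality-improving reordering via reverse Cuthill-McKee: renumber
# entities so that neighbours sit close together (illustrative stand-in
# for the paper's mesh-topology traversal).
import numpy as np
import scipy.sparse as sp
from scipy.sparse.csgraph import reverse_cuthill_mckee

A = sp.random(2000, 2000, density=0.002, random_state=6)
A = ((A + A.T) > 0).astype(np.int8).tocsr()       # symmetric adjacency

perm = reverse_cuthill_mckee(A, symmetric_mode=True)
B = A[perm][:, perm]                              # reordered adjacency

def bandwidth(m):
    i, j = m.nonzero()
    return np.abs(i - j).max()

print("bandwidth before/after:", bandwidth(A), bandwidth(B))
```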

  6. Investigation of acceleration effects on missile aerodynamics using computational fluid dynamics

    CSIR Research Space (South Africa)

    Gledhill, Irvy MA

    2009-01-01

    Only fragments of the abstract survive in this record. The numerical methods described include upwind TVD flux-difference splitting; explicit Runge-Kutta local time-stepping for steady-state calculations and implicit time integration with dual time-stepping for time-accurate computations; a second-order central difference scheme with Jameson dissipation [14], [13]; and an implicit five-stage Runge-Kutta scheme with backward Euler time differencing and five W-cycle multigrid levels.

  7. Using the fast fourier transform to accelerate the computational search for RNA conformational switches.

    Directory of Open Access Journals (Sweden)

    Evan Senter

    Using complex roots of unity and the Fast Fourier Transform, we design a new thermodynamics-based algorithm, FFTbor, that computes the Boltzmann probability that secondary structures differ by [Formula: see text] base pairs from an arbitrary initial structure of a given RNA sequence. The algorithm, which runs in quartic time O(n^4) and quadratic space O(n^2), is used to determine the correlation between kinetic folding speed and the ruggedness of the energy landscape, and to predict the location of riboswitch expression platform candidates. A web server is available at http://bioinformatics.bc.edu/clotelab/FFTbor/.
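
    The algorithmic core - evaluating a polynomial at complex roots of unity and recovering all its coefficients with one inverse FFT - fits in a few lines. In the sketch below, a toy coefficient vector stands in for the partition-function values that the real dynamic-programming recursions would compute.

```python
# Sketch of the roots-of-unity idea behind FFTbor: if Z(x) is a polynomial
# whose k-th coefficient is the total Boltzmann weight of structures at
# base-pair distance k, evaluating Z at the n-th roots of unity and applying
# a single inverse FFT recovers every coefficient at once.
import numpy as np

coeffs = np.array([5.0, 3.0, 2.0, 0.5, 0.25])   # pretend Boltzmann weights
n = len(coeffs)

def Z(x):
    # Black-box evaluation of the polynomial at one complex point, the way
    # the dynamic-programming recursions evaluate the partition function.
    return sum(c * x**k for k, c in enumerate(coeffs))

omega = np.exp(-2j * np.pi * np.arange(n) / n)  # n-th roots of unity
values = np.array([Z(w) for w in omega])

recovered = np.fft.ifft(values).real            # all coefficients at once
probs = recovered / recovered.sum()             # Boltzmann probabilities p(k)
print(np.round(recovered, 6), probs)
```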

  8. Nonlinear Site Response Due to Large Ground Acceleration: Observation and Computer Simulation

    Science.gov (United States)

    Noguchi, S.; Furumura, T.; Sasatani, T.

    2009-12-01

    We studied nonlinear site response due to large ground acceleration during the 2003 off-Miyagi Earthquake (Mw7.0) in Japan by means of horizontal-to-vertical spectral ratio analysis of S-wave motion. The results were then confirmed by finite-difference method (FDM) simulation of nonlinear seismic wave propagation. A nonlinear site response is often observed at soft sediment sites, and even at hard bedrock sites which are covered by thin soil layers. Nonlinear site response can be induced by strong ground motion whose peak ground acceleration (PGA) exceeds about 100 cm/s/s, and seriously affects the amplification of high frequency ground motion and PGA. Noguchi and Sasatani (2008) developed an efficient technique for quantitative evaluation of nonlinear site response using the horizontal-to-vertical spectral ratio of S-wave (S-H/V) derived from strong ground motion records, based on Wen et al. (2006). We applied this technique to perform a detailed analysis of the properties of nonlinear site response based on a large amount of data recorded at 132 K-NET and KiK-net strong motion stations in Northern Japan during the off-Miyagi Earthquake. We succeeded in demonstrating a relationship between ground motion level, nonlinear site response and surface soil characteristics. For example, the seismic data recorded at KiK-net IWTH26 showed obvious characteristics of nonlinear site response when the PGA exceeded 100 cm/s/s. As the ground motion level increased, the dominant peak of S-H/V shifted to lower frequency, the high frequency level of S-H/V dropped, and PGA amplification decreased. On the other hand, the records at MYGH03 seemed not to be affected by nonlinear site response even for high ground motion levels in which PGA exceeds 800 cm/s/s. The characteristics of such nonlinear site amplification can be modeled by evaluating Murnaghan constants (e.g. McCall, 1994), which are the third-order elastic constants. In order to explain the observed characteristics of
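
    A minimal sketch of one common convention for the S-wave horizontal-to-vertical spectral ratio is given below; the windowing, smoothing, and the exact horizontal combination are simplifying assumptions for illustration, not the authors' exact procedure.

```python
# Minimal S-H/V sketch: combine the two horizontal components of an S-wave
# window and divide by the vertical component, in the frequency domain.
import numpy as np

def s_hv_ratio(ns, ew, ud, dt):
    """ns, ew, ud: S-wave windows of the three components; dt: sample step."""
    freq = np.fft.rfftfreq(len(ns), dt)
    h = np.sqrt(np.abs(np.fft.rfft(ns))**2 + np.abs(np.fft.rfft(ew))**2)
    v = np.abs(np.fft.rfft(ud))
    return freq, h / np.maximum(v, 1e-20)   # guard against division by zero
```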

  9. Computational and experimental advances in drug repositioning for accelerated therapeutic stratification.

    Science.gov (United States)

    Shameer, Khader; Readhead, Ben; Dudley, Joel T

    2015-01-01

    Drug repositioning is an important component of therapeutic stratification in the precision medicine paradigm. Molecular profiling and more sophisticated analysis of longitudinal clinical data are refining definitions of human diseases, creating needs and opportunities to re-target or reposition approved drugs for alternative indications. Drug repositioning studies have demonstrated success in complex diseases requiring improved therapeutic interventions as well as in orphan diseases without any known treatments. An increasing collection of available computational and experimental methods that leverage molecular and clinical data enables diverse drug repositioning strategies. Integration of translational bioinformatics resources, statistical methods, chemoinformatics tools and experimental techniques (including medicinal chemistry) can enable the rapid application of drug repositioning on an increasingly broad scale. Efficient tools are now available for systematic drug-repositioning methods using large repositories of compounds with biological activities. Medicinal chemists, along with other translational researchers, can play a key role in various aspects of drug repositioning. In this review article, we briefly summarize the history of drug repositioning, explain the concepts behind drug repositioning methods, discuss recent computational and experimental advances, and highlight available open access resources for effective drug repositioning investigations. We also discuss recent approaches to utilizing electronic health records for outcome assessment of drug repositioning, and future avenues of drug repositioning in light of targeting disease comorbidities, underserved patient communities, individualized medicine and socioeconomic impact.

  10. Strategic Imagination: The Lost Dimension of Strategic Studies.

    Science.gov (United States)

    1984-09-01

    AD-A152 151. Naval Postgraduate School, Monterey, CA; E. S. Verger, September 1984. The surviving fragments of the table of contents cover wargaming past battles, manual war games, machine games, and computer-assisted games, and note that through such games the reader may gain an insight into the problems of the commanders in the field and a glimpse of the military thinking of the time.

  11. An accelerated conjugate gradient algorithm to compute low-lying eigenvalues - a study for the Dirac operator in SU(2) lattice QCD

    CERN Document Server

    Kalkreuter, T; Kalkreuter, Thomas; Simma, Hubert

    1995-01-01

    The low-lying eigenvalues of a (sparse) hermitian matrix can be computed with controlled numerical errors by a conjugate gradient (CG) method. This CG algorithm is accelerated by alternating it with exact diagonalisations in the subspace spanned by the numerically computed eigenvectors. We study this combined algorithm in the case of the Dirac operator with (dynamical) Wilson fermions in four-dimensional SU(2) gauge fields. The algorithm is numerically very stable and can be parallelized in an efficient way. On lattices of sizes 4^4-16^4 an acceleration of the pure CG method by a factor of 4-8 is found.
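
    SciPy's LOBPCG eigensolver works in a related spirit - it alternates gradient-based updates with exact Rayleigh-Ritz diagonalisation in the subspace of current approximate eigenvectors - and gives a quick way to experiment with the idea on a generic sparse matrix (a random symmetric matrix here, not the Wilson-Dirac operator).

```python
# Related-in-spirit sketch: low-lying eigenvalues of a large sparse
# symmetric matrix via LOBPCG, which combines gradient steps with exact
# diagonalisation (Rayleigh-Ritz) in a small subspace at every iteration.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lobpcg

rng = np.random.default_rng(0)
n, k = 2000, 4                       # matrix size, number of eigenpairs
A = sp.random(n, n, density=1e-3, random_state=0)
A = A + A.T + sp.diags(np.arange(1.0, n + 1))   # symmetric, well separated

X = rng.standard_normal((n, k))      # random starting block of vectors
vals, vecs = lobpcg(A, X, largest=False, tol=1e-8, maxiter=500)
print("lowest eigenvalues:", np.sort(vals))
```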

  12. Strategic Aspirations

    DEFF Research Database (Denmark)

    Christensen, Lars Thøger; Morsing, Mette; Thyssen, Ole

    2016-01-01

    Strategic aspirations are public announcements designed to inspire, motivate, and create expectations about the future. Vision statements or value declarations are examples of such talk, through which organizations announce their ideal selves and declare what they (intend to) do. While aspirations...... aspirations, in other words, have exploratory and inspirational potential—two features that are highly essential in complex areas such as sustainability and CSR. This entry takes a communicative focus on strategic aspirations, highlighting the value of aspirational talk, understood as ideals and intentions...

  13. Tempest: Accelerated MS/MS Database Search Software for Heterogeneous Computing Platforms.

    Science.gov (United States)

    Adamo, Mark E; Gerber, Scott A

    2016-09-07

    MS/MS database search algorithms derive a set of candidate peptide sequences from in silico digest of a protein sequence database, and compute theoretical fragmentation patterns to match these candidates against observed MS/MS spectra. The original Tempest publication described these operations mapped to a CPU-GPU model, in which the CPU (central processing unit) generates peptide candidates that are asynchronously sent to a discrete GPU (graphics processing unit) to be scored against experimental spectra in parallel. The current version of Tempest expands this model, incorporating OpenCL to offer seamless parallelization across multicore CPUs, GPUs, integrated graphics chips, and general-purpose coprocessors. Three protocols describe how to configure and run a Tempest search, including discussion of how to leverage Tempest's unique feature set to produce optimal results. © 2016 by John Wiley & Sons, Inc.

  14. Uncertainty quantification an accelerated course with advanced applications in computational engineering

    CERN Document Server

    Soize, Christian

    2017-01-01

    This book presents the fundamental notions and advanced mathematical tools in the stochastic modeling of uncertainties and their quantification for large-scale computational models in sciences and engineering. In particular, it focuses on parametric and non-parametric uncertainties, with applications from the structural dynamics and vibroacoustics of complex mechanical systems and from the micromechanics and multiscale mechanics of heterogeneous materials. Resulting from a course developed by the author, the book begins with a description of the fundamental mathematical tools of probability and statistics that are directly useful for uncertainty quantification. It proceeds with a thorough description of some basic and advanced methods for constructing stochastic models of uncertainties, paying particular attention to the problem of calibrating and identifying a stochastic model of uncertainty when experimental data are available. This book is intended to be a graduate-level textbook for stu...

  15. Computational acceleration of orbital neutral sensor ionizer simulation through phenomena separation

    Science.gov (United States)

    Font, Gabriel I.

    2016-07-01

    Simulation of orbital phenomena is often difficult because of the non-continuum nature of the flow, which forces the use of particle methods, and the disparate time scales, which make long run times necessary. In this work, the computational work load has been reduced by taking advantage of the low number of collisions between different species. This allows each population of particles to be brought into convergence separately using a time step size optimized for its particular motion. The converged populations are then brought together to simulate low probability phenomena, such as ionization or excitation, on much longer time scales. This technique has the effect of reducing run times by a factor of 10^3-10^4. The technique was applied to the simulation of a low Earth orbit neutral species sensor with an ionizing element. Comparison with laboratory experiments of ion impacts generated by electron flux shows very good agreement.

  16. From variability tolerance to approximate computing in parallel integrated architectures and accelerators

    CERN Document Server

    Rahimi, Abbas; Gupta, Rajesh K

    2017-01-01

    This book focuses on computing devices and their design at various levels to combat variability. The authors provide a review of key concepts with particular emphasis on timing errors caused by various variability sources. They discuss methods to predict and prevent, detect and correct, and finally conditions under which such errors can be accepted; they also consider their implications on cost, performance and quality. Coverage includes a comparative evaluation of methods for deployment across various layers of the system from circuits, architecture, to application software. These can be combined in various ways to achieve specific goals related to observability and controllability of the variability effects, providing means to achieve cross layer or hybrid resilience. · Covers challenges and opportunities in identifying microelectronic variability and the resulting errors at various layers in the system abstraction; · Enables readers to assess how various levels of circuit and system design can mitigate t...

  17. Isosurface Computation Made Simple: Hardware Acceleration, Adaptive Refinement and Tetrahedral Stripping

    Energy Technology Data Exchange (ETDEWEB)

    Pascucci, V

    2004-02-18

    This paper presents a simple approach for rendering isosurfaces of a scalar field. Using the vertex programming capability of commodity graphics cards, we transfer the cost of computing an isosurface from the Central Processing Unit (CPU), running the main application, to the Graphics Processing Unit (GPU), rendering the images. We consider a tetrahedral decomposition of the domain and draw one quadrangle (quad) primitive per tetrahedron. A vertex program transforms the quad into the piece of isosurface within the tetrahedron (see Figure 2). In this way, the main application is only devoted to streaming the vertices of the tetrahedra from main memory to the graphics card. For adaptively refined rectilinear grids, the optimization of this streaming process leads to the definition of a new 3D space-filling curve, which generalizes the 2D Sierpinski curve used for efficient rendering of triangulated terrains. We maintain the simplicity of the scheme when constructing view-dependent adaptive refinements of the domain mesh. In particular, we guarantee the absence of T-junctions by satisfying local bounds in our nested error basis. The expensive stage of fixing cracks in the mesh is completely avoided. We discuss practical tradeoffs in the distribution of the workload between the application and the graphics hardware. With current GPUs it is convenient to perform certain computations on the main CPU. Beyond performance considerations, which will change with new generations of GPUs, this approach has the major advantage of completely avoiding the storage in memory of the isosurface vertices and triangles.
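
    The per-tetrahedron kernel that the vertex program evaluates reduces to edge-wise linear interpolation; a CPU reference of that computation takes a few lines (the quad-to-patch mapping and the rendering itself are omitted).

```python
# CPU reference for the per-tetrahedron kernel: find the edges whose
# endpoint scalar values straddle the isovalue and linearly interpolate
# the crossing points (3 or 4 of them form the isosurface patch).
import numpy as np
from itertools import combinations

def tet_isosurface(verts, scalars, iso):
    """verts: (4,3) corner positions; scalars: (4,) field values."""
    points = []
    for i, j in combinations(range(4), 2):        # the 6 tetrahedron edges
        si, sj = scalars[i], scalars[j]
        if (si - iso) * (sj - iso) < 0:           # sign change on this edge
            t = (iso - si) / (sj - si)            # linear interpolation weight
            points.append(verts[i] + t * (verts[j] - verts[i]))
    return np.array(points)                        # 0, 3 or 4 points

tet = np.array([[0., 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
print(tet_isosurface(tet, np.array([0., 1, 1, 1]), 0.5))
```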

  18. CudaPre3D: An Alternative Preprocessing Algorithm for Accelerating 3D Convex Hull Computation on the GPU

    Directory of Open Access Journals (Sweden)

    MEI, G.

    2015-05-01

    In computing convex hulls of point sets, a preprocessing step that filters the input points by discarding non-extreme points is commonly used to improve computational efficiency. We previously proposed a quite straightforward preprocessing approach for accelerating 2D convex hull computation on the GPU. In this paper, we extend that algorithm to 3D. The basic ideas behind the two preprocessing algorithms are similar: first, several groups of extreme points are found according to the original set of input points and several rotated versions of the input set; then, a convex polyhedron is created using the found extreme points; and finally those interior points lying inside the formed convex polyhedron are discarded. Experimental results show that the proposed preprocessing algorithm achieves speedups of about 4x on average, and 5x to 6x in the best cases, over runs without preprocessing. In addition, more than 95 percent of the input points can be discarded in most experimental tests.
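
    A CPU sketch of the filtering idea using SciPy is given below; the direction set, point counts and the use of a Delaunay triangulation for the inside test are illustrative choices, not the paper's CUDA implementation.

```python
# Sketch of the preprocessing idea: collect extreme points along several
# directions, form the polyhedron they span, and discard every point
# strictly inside it before running the full convex hull computation.
import numpy as np
from scipy.spatial import ConvexHull, Delaunay

rng = np.random.default_rng(1)
pts = rng.standard_normal((100000, 3))

dirs = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1],
                 [1, -1, 0], [0, 1, -1], [1, 0, -1]], dtype=float)
proj = pts @ dirs.T
idx = np.unique(np.r_[proj.argmax(axis=0), proj.argmin(axis=0)])

inner = Delaunay(pts[idx])                    # polyhedron of extreme points
outside = inner.find_simplex(pts) < 0         # points not inside it
keep = np.unique(np.r_[np.nonzero(outside)[0], idx])  # keep extremes too

print(f"kept {len(keep)} of {len(pts)} candidate points")
hull = ConvexHull(pts[keep])                  # same hull, far fewer points
```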

  19. Proceedings of the Strategic Computing Natural Language Workshop Held in Marina del Rey, California on 1-2 May 1986.

    Science.gov (United States)

    1986-05-01

    Surviving fragments of the proceedings include a sample dialogue with a natural-language query system ("... in the new battle group which Spivak has?" "Yes." "Do you want me to display the read status of the messages?"), an SRI International research contribution on commonsense metaphysics and lexical semantics, and citations to Computational Models of Discourse, Michael Brady and Robert C. Berwick (eds.), MIT Press, Cambridge, MA, 1983.

  20. Computer simulations predict that chromosome movements and rotations accelerate mitotic spindle assembly without compromising accuracy.

    Science.gov (United States)

    Paul, Raja; Wollman, Roy; Silkworth, William T; Nardi, Isaac K; Cimini, Daniela; Mogilner, Alex

    2009-09-15

    The mitotic spindle self-assembles in prometaphase by a combination of a centrosomal pathway, in which dynamically unstable microtubules search in space until chromosomes are captured, and a chromosomal pathway, in which microtubules grow from chromosomes and focus to the spindle poles. Quantitative mechanistic understanding of how spindle assembly can be both fast and accurate is lacking. Specifically, it is unclear how, if at all, chromosome movements and combining the centrosomal and chromosomal pathways affect the assembly speed and accuracy. We used computer simulations and high-resolution microscopy to test plausible pathways of spindle assembly in realistic geometry. Our results suggest that an optimal combination of centrosomal and chromosomal pathways, spatially biased microtubule growth, and chromosome movements and rotations is needed to complete prometaphase in 10-20 min while keeping erroneous merotelic attachments down to a few percent. The simulations also provide kinetic constraints for alternative error correction mechanisms, shed light on the dual role of chromosome arm volume, and compare well with experimental data for bipolar and multipolar HT-29 colorectal cancer cells.

  1. Strategic Bonding.

    Science.gov (United States)

    Davis, Lynn; Tyson, Ben

    2003-01-01

    Many school buildings are in dire need of renovation, expansion, or replacement. Brief case studies from around the country illustrate the importance of finding out why people vote for or against a construction referendum. Lists recommendations for a strategic campaign. (MLF)

  2. Strategic Staffing

    Science.gov (United States)

    Clark, Ann B.

    2012-01-01

    Business and industry leaders do not flinch at the idea of placing top talent in struggling departments and divisions. This is not always the case in public education. The Charlotte-Mecklenburg Schools made a bold statement to its community in its strategic plan by identifying two key reform levers--(1) an effective principal leading each school;…

  3. Computational Approach for Securing Radiology-Diagnostic Data in Connected Health Network using High-Performance GPU-Accelerated AES.

    Science.gov (United States)

    Adeshina, A M; Hashim, R

    2017-03-01

    Diagnostic radiology is a core and integral part of modern medicine, paving the way for primary care physicians in disease diagnosis, treatment and therapy management. All recent standard healthcare procedures have benefitted immensely from contemporary information technology, which has revolutionized the acquisition, storage and sharing of diagnostic data for efficient and timely diagnosis of diseases. The connected health network was introduced as an alternative to the ageing traditional model of healthcare, improving hospital-physician connectivity and clinical collaboration. Undoubtedly, this modern approach has drastically improved healthcare, but at the expense of high computational cost and possible breaches of diagnostic privacy. Consequently, a number of cryptographic techniques have recently been applied to clinical applications, but the challenge of successfully encrypting both image and textual data persists. Furthermore, reducing the encryption-decryption time of medical datasets at a reasonably low computational cost, without jeopardizing the required security strength of the encryption algorithm, remains an outstanding issue. This study proposes a secured radiology-diagnostic data framework for connected health networks using high-performance GPU-accelerated Advanced Encryption Standard. The study was evaluated with radiology image datasets consisting of brain MR and CT datasets obtained from the Department of Surgery, University of North Carolina, USA, and the Swedish National Infrastructure for Computing. Sample patients' notes from the University of North Carolina School of Medicine at Chapel Hill were also used to evaluate the framework's strength in encrypting and decrypting textual data in the form of medical reports. Significantly, the framework is not only able to accurately encrypt and decrypt medical image datasets, but it also
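
    A minimal CPU-side sketch of the byte-stream property the framework relies on - AES handles an image buffer and a textual report identically - is shown below, assuming the pycryptodome package; the study's GPU-accelerated kernels are not reproduced here, and the data are stand-ins.

```python
# CPU sketch of the encrypt/decrypt round trip over image and text bytes
# (pycryptodome assumed; not the study's GPU-accelerated AES kernels).
from Crypto.Cipher import AES
from Crypto.Random import get_random_bytes

key = get_random_bytes(32)                       # AES-256 session key
nonce_img = get_random_bytes(8)                  # fresh CTR nonce per message
nonce_txt = get_random_bytes(8)

image_bytes = bytes(512 * 512)                   # stand-in for a CT slice
report = "Patient presents with ...".encode()    # stand-in clinical note

ct_img = AES.new(key, AES.MODE_CTR, nonce=nonce_img).encrypt(image_bytes)
ct_txt = AES.new(key, AES.MODE_CTR, nonce=nonce_txt).encrypt(report)

assert AES.new(key, AES.MODE_CTR, nonce=nonce_img).decrypt(ct_img) == image_bytes
assert AES.new(key, AES.MODE_CTR, nonce=nonce_txt).decrypt(ct_txt) == report
```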

  4. Strategic Marketing

    OpenAIRE

    Potter, Ned

    2015-01-01

    This chapter from The Library Marketing Toolkit focuses on marketing strategy. Marketing is more successful when it happens as part of a constantly-renewing cycle. The aim of this chapter is to demystify the process of strategic marketing, simplifying it into seven key stages with advice on how to implement each one. Particular emphasis is put on dividing your audience and potential audience into segments, and marketing different messages to each group. It includes case studies from T...

  5. Accelerated Aging of BKC 44306-10 Rigid Polyurethane Foam: FT-IR Spectroscopy, Dimensional Analysis, and Micro Computed Tomography

    Energy Technology Data Exchange (ETDEWEB)

    Gilbertson, Robert D. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Patterson, Brian M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Smith, Zachary [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2014-01-02

    An accelerated aging study of BKC 44306-10 rigid polyurethane foam was carried out. Foam samples were aged in a nitrogen atmosphere at three different temperatures: 50 °C, 65 °C, and 80 °C. Foam samples were periodically removed from the aging canisters at 1, 3, 6, 9, 12, and 15 month intervals, when FT-IR spectroscopy, dimensional analysis, and mechanical testing experiments were performed. Micro computed tomography imaging was also employed to study the morphology of the foams. Over the course of the aging study the foams decreased in size by roughly 0.001 inches per inch of foam. Micro CT showed the heterogeneous nature of the foam structure, likely resulting from flow effects during the molding process. The effect of aging on the compression and tensile strength of the foam was minor and no cause for concern. FT-IR spectroscopy was used to follow the foam chemistry; however, it was difficult to draw definitive conclusions about changes in the chemical nature of the materials due to large variability throughout the samples.

  6. OpenVX-based Python Framework for real-time cross platform acceleration of embedded computer vision applications

    Directory of Open Access Journals (Sweden)

    Ori Heimlich

    2016-11-01

    Embedded real-time vision applications are being rapidly deployed in a large realm of consumer electronics, ranging from automotive safety to surveillance systems. However, the relatively limited computational power of embedded platforms is considered a bottleneck for many vision applications, necessitating optimization. OpenVX is a standardized interface, released in late 2014, that attempts to provide both system-level and kernel-level optimization to vision applications. With OpenVX, vision processing is modeled with coarse-grained data flow graphs, which can be optimized and accelerated by the platform implementer. Current full implementations of OpenVX are given in the programming language C, which does not support advanced programming paradigms such as object-oriented, imperative and functional programming, nor does it offer run-time type checking. Here we present a Python-based full implementation of OpenVX, which eliminates many of the discrepancies between the object-oriented paradigm used by many modern applications and the native C implementations. Our open-source implementation can be used for rapid development of OpenVX applications on embedded platforms. Demonstrations include static and real-time image acquisition and processing using a Raspberry Pi and a GoPro camera. Code is given as supplementary information. The code project and a linked deployable virtual machine are located on GitHub: https://github.com/NBEL-lab/PythonOpenVX.

  7. Implementation of GPU-based acceleration of the m-line reconstruction algorithm for circle-plus-line trajectory computed tomography

    Science.gov (United States)

    Li, Zengguang; Xi, Xiaoqi; Han, Yu; Yan, Bin; Li, Lei

    2016-10-01

    The circle-plus-line trajectory satisfies the exact-reconstruction data sufficiency condition and can be applied in C-arm X-ray computed tomography (CT) systems to increase reconstruction image quality at large cone angles. The m-line reconstruction algorithm is adopted for this trajectory. The selection of the direction of the m-lines is quite flexible, and the m-line algorithm needs less data for accurate reconstruction compared with FDK-type algorithms. However, the computational complexity of the algorithm is too large for efficient serial processing, and reconstruction speed has become an important issue limiting its practical application; accelerating the algorithm is therefore of great value. Compared with other hardware accelerations, the graphics processing unit (GPU) has become the mainstream in CT image reconstruction, and GPU acceleration has achieved good results for FDK-type algorithms. The implementation of the m-line algorithm's acceleration for the circle-plus-line trajectory, however, differs from the FDK case, and the parallelism of the circle-plus-line algorithm needs to be analyzed to design an appropriate acceleration strategy. The implementation can be divided into the following steps: first, selecting m-lines that cover the entire object to be rebuilt; second, calculating the differentiated backprojection of the points on the m-lines; third, performing Hilbert filtering along the m-line direction; finally, resampling the m-line reconstruction results into a three-dimensional Cartesian grid. In this paper, we design reasonable GPU acceleration strategies for each step to improve the reconstruction speed as much as possible. The main contribution is to design an appropriate acceleration strategy for the circle-plus-line trajectory m-line reconstruction algorithm. The Shepp-Logan phantom is used to simulate the experiment on a single K20 GPU. The
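
    The Hilbert-filtering step along each m-line can be prototyped with SciPy's analytic-signal routine, an unwindowed, illustrative stand-in for the finite-interval Hilbert inversion used in real m-line reconstruction.

```python
# Prototype of the per-m-line filtering step: after the differentiated
# backprojection (DBP) is sampled along an m-line, a Hilbert transform
# along that line recovers the object values.
import numpy as np
from scipy.signal import hilbert

def hilbert_filter_along_lines(dbp):
    """dbp: array (num_mlines, num_samples) of DBP values per m-line."""
    # imag(analytic signal) == Hilbert transform, applied line by line
    return np.imag(hilbert(dbp, axis=1))

dbp = np.random.default_rng(2).standard_normal((8, 256))
filtered = hilbert_filter_along_lines(dbp)
print(filtered.shape)
```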

  8. Computation of the head-related transfer function via the fast multipole accelerated boundary element method and its spherical harmonic representation.

    Science.gov (United States)

    Gumerov, Nail A; O'Donovan, Adam E; Duraiswami, Ramani; Zotkin, Dmitry N

    2010-01-01

    The head-related transfer function (HRTF) is computed using the fast multipole accelerated boundary element method. For efficiency, the HRTF is computed using the reciprocity principle by placing a source at the ear and computing its field. Analysis is presented to modify the boundary value problem accordingly. To compute the HRTF corresponding to different ranges via a single computation, a compact and accurate representation of the HRTF, termed the spherical spectrum, is developed. Computations are reduced to a two stage process, the computation of the spherical spectrum and a subsequent evaluation of the HRTF. This representation allows easy interpolation and range extrapolation of HRTFs. HRTF computations are performed for the range of audible frequencies up to 20 kHz for several models including a sphere, human head models [the Neumann KU-100 ("Fritz") and the Knowles KEMAR ("Kemar") manikins], and head-and-torso model (the Kemar manikin). Comparisons between the different cases are provided. Comparisons with the computational data of other authors and available experimental data are conducted and show satisfactory agreement for the frequencies for which reliable experimental data are available. Results show that, given a good mesh, it is feasible to compute the HRTF over the full audible range on a regular personal computer.

  9. Enterprise information technology strategic planning: the integration of SOA and cloud computing technology

    Institute of Scientific and Technical Information of China (English)

    牛昊天

    2014-01-01

    Applying SOA and cloud computing technology in the strategic planning of enterprise informatization is a strong measure for supporting enterprises in achieving their established development strategies, and has positive significance for business process innovation, improved operational efficiency, and reduced information operating costs. This paper analyzes the problems in enterprise informatization strategic planning and the ideas behind such planning, analyzes the advantages and limitations of SOA and cloud computing technologies, and, building on the strengths of both, designs an integrated SOA and cloud computing architecture to better serve enterprise informatization.

  10. Thinking strategically.

    Science.gov (United States)

    Goree, Michael

    2002-01-01

    Over the course of the past 20 years, human resources has tried a variety of strategic initiatives to add value to the working environment, from the alphabets of TQM, CQI, EVA, ROI, ISO, QS, Theory X, Y, Z, Generation X and Y to re-engineering, balanced scorecard, lean, hoshin, six sigma, to Margaret Wheatley's "The Simpler Way" and finally to cheese and fish. The problem is that none of these is a strategy. They are all tactics to accomplish or achieve a strategy.

  11. Strategic Windows

    DEFF Research Database (Denmark)

    Risberg, Annette; King, David R.; Meglio, Olimpia

    We examine the importance of speed and timing in acquisitions with a framework that identifies management considerations for three interrelated acquisition phases (selection, deal closure and integration) from an acquiring firm’s perspective. Using a process perspective, we pinpoint items within...... acquisition phases that relate to speed. In particular, we present the idea of time-bounded strategic windows in acquisitions consistent with the notion of kairòs, where opportunities appear and must be pursued at the right time for success to occur....

  12. Strategic Management

    CERN Document Server

    Jeffs, Chris

    2008-01-01

    The Sage Course Companion on Strategic Management is an accessible introduction to the subject that avoids lengthy debate in order to focus on the core concepts. It will help the reader to develop their understanding of the key theories, whilst enabling them to bring diverse topics together in line with course requirements. The Sage Course Companion also provides advice on getting the most from your course work; help with analysing case studies and tips on how to prepare for examinations. Designed to complement existing strategy textbooks, the Companion provides: -Quick and easy access to the

  13. Strategic Engagement

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    “Pakistan regards China as a strategic partner and the bilateral ties have endured the test of time.”Pakistani Prime Minister Shaukat Aziz made the comment during his four-day official visit to China on April 16 when he met Chinese President Hu Jintao,Premier Wen Jiabao and the NPC Standing Committee Chairman Wu Bangguo.His visit to China also included a trip to Boao,where he delivered a keynote speech at the Boao Forum for Asia held on April 20-22. During his stay in Beijing,the two countries signed 13 agreements on cooperation in the fields of space,telecommunications,education and legal assistance,which enhanced an already close strategic partnership. In an interview with Beijing Review reporter Pan Shuangqin,Prime Minister Aziz addressed a number of issues ranging from Asia’s searching for a win-win economic situation to the influence of Sino-Pakistani relations on regional peace.

  14. Strategic conversation

    Directory of Open Access Journals (Sweden)

    Nicholas Asher

    2013-08-01

    Models of conversation that rely on a strong notion of cooperation don't apply to strategic conversation - that is, to conversation where the agents' motives don't align, such as courtroom cross examination and political debate. We provide a game-theoretic framework that provides an analysis of both cooperative and strategic conversation. Our analysis features a new notion of safety that applies to implicatures: an implicature is safe when it can be reliably treated as a matter of public record. We explore the safety of implicatures within cooperative and non-cooperative settings. We then provide a symbolic model enabling us (i) to prove a correspondence result between a characterisation of conversation in terms of an alignment of players' preferences and one where Gricean principles of cooperative conversation like Sincerity hold, and (ii) to show when an implicature is safe and when it is not. http://dx.doi.org/10.3765/sp.6.2

  15. Using compute unified device architecture-enabled graphic processing unit to accelerate fast Fourier transform-based regression Kriging interpolation on a MODIS land surface temperature image

    Science.gov (United States)

    Hu, Hongda; Shu, Hong; Hu, Zhiyong; Xu, Jianhui

    2016-04-01

    Kriging interpolation provides the best linear unbiased estimation for unobserved locations, but its heavy computation limits the manageable problem size in practice. To address this issue, an efficient interpolation procedure incorporating the fast Fourier transform (FFT) was developed. Extending this efficient approach, we propose an FFT-based parallel algorithm to accelerate regression Kriging interpolation on an NVIDIA® compute unified device architecture (CUDA)-enabled graphic processing unit (GPU). A high-performance cuFFT library in the CUDA toolkit was introduced to execute computation-intensive FFTs on the GPU, and three time-consuming processes were redesigned as kernel functions and executed on the CUDA cores. A MODIS land surface temperature 8-day image tile at a resolution of 1 km was resampled to create experimental datasets at eight different output resolutions. These datasets were used as the interpolation grids with different sizes in a comparative experiment. Experimental results show that speedup of the FFT-based regression Kriging interpolation accelerated by GPU can exceed 1000 when processing datasets with large grid sizes, as compared to the traditional Kriging interpolation running on the CPU. These results demonstrate that the combination of FFT methods and GPU-based parallel computing techniques greatly improves the computational performance without loss of precision.
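
    The offload pattern generalises beyond hand-written cuFFT bindings: assuming the CuPy package as a stand-in, a NumPy FFT step moves to the GPU almost verbatim, as the sketch below shows.

```python
# Sketch of the GPU offload pattern for FFT-heavy interpolation steps
# (CuPy assumed as a stand-in for raw cuFFT calls; any CUDA GPU works).
import numpy as np

grid = np.random.default_rng(3).standard_normal((4096, 4096))

spec_cpu = np.fft.rfft2(grid)                  # CPU reference

try:
    import cupy as cp                          # requires a CUDA-capable GPU
    spec_gpu = cp.fft.rfft2(cp.asarray(grid))  # cuFFT under the hood
    back = cp.asnumpy(spec_gpu)                # copy the result to the host
    print("max |cpu - gpu| =", np.abs(spec_cpu - back).max())
except ImportError:
    print("CuPy not installed; ran CPU path only")
```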

  16. Strategic Planning: What's so Strategic about It?

    Science.gov (United States)

    Strong, Bart

    2005-01-01

    The words "strategic" and "planning" used together can lead to confusion unless one spent the early years of his career in never-ending, team-oriented, corporate training sessions. Doesn't "strategic" have something to do with extremely accurate bombing or a defensive missile system or Star Wars or something? Don't "strategic" and "planning" both…

  17. Case Studies in Strategic Planning

    Science.gov (United States)

    1990-03-06

    Contains developed case studies in strategic planning on the Navy General Board, Joint Service War Planning 1919 to 1941, Navy Strategic Planning, and NASA... in Strategic Planning (NPS-56-88-031-PR of September 1988). Keywords: strategic planning, strategic management.

  18. Strategic serendipity

    DEFF Research Database (Denmark)

    Knudsen, Gry Høngsmark; Lemmergaard, Jeanette

    2014-01-01

    This paper contributes to critical voices on the issue of strategic communication. It does so by exploring how an organisation can seize the moment of serendipity based on careful preparation of its issues management and communication channels. The focus of the study is the media coverage of – and communicative responses to – Kopenhagen Fur's campaign The World's Best – but not perfect in both broadcast media (e.g. print and television) and social media, more specifically Facebook. Through understanding how an organisation can plan for and take advantage of the unpredictable, the study shows how organisations as communicative actors can take advantage of the serendipity afforded by other actors' campaigns when advocating and campaigning.

  19. Experiences of How Developed Countries Accelerated the Transformation of Scientific and Technological Achievements in Strategic Emerging Industries: Taking Electric Vehicles as an Example

    Institute of Scientific and Technical Information of China (English)

    杨荣

    2013-01-01

    Strategic emerging industries are highly challenging, and improving the conversion rate of their scientific and technological achievements is an important goal. Taking electric vehicles as an example, the author analyzes how developed countries have promoted the translation of strategic emerging industry research into practice: special strategic planning, dedicated laws and regulations, technical standards research, attention to industry-academia-research cooperation, demonstration operations, vigorous promotion, the establishment of management and intermediary organizations, and education and training of professional and technical personnel. These experiences have reference value for accelerating the development of China's strategic emerging industries and improving the conversion rate of their scientific and technological achievements.

  20. Effects of dimensionality on computer simulations of laser-ion acceleration: When are three-dimensional simulations needed?

    Science.gov (United States)

    Yin, L.; Stark, D. J.; Albright, B. J.

    2016-10-01

    Laser-ion acceleration via relativistic induced transparency provides an effective means to accelerate ions to tens of MeV/nucleon over distances of tens of μm. These ion sources may enable a host of applications, from fast ignition and x-ray sources to medical treatments. Understanding whether two-dimensional (2D) PIC simulations can capture the relevant 3D physics is important to the development of a predictive capability for short-pulse laser-ion acceleration and for economical design studies for applications of these accelerators. In this work, PIC simulations are performed in 3D and in 2D where the direction of the laser polarization is in the simulation plane (2D-P) and out of plane (2D-S). Our studies indicate modeling sensitivity to dimensionality and laser polarization. Differences arise in energy partition, electron heating, ion peak energy, and ion spectral shape. 2D-P simulations are found to over-predict electron heating and ion peak energy. The origin of these differences and the extent to which 2D simulations may capture the key acceleration dynamics will be discussed. Work performed under the auspices of the U.S. DOE by the LANS, LLC, Los Alamos National Laboratory under Contract No. DE-AC52-06NA25396. Funding provided by the Los Alamos National Laboratory Directed Research and Development Program.

  1. 7 March 2013 -Stanford University Professor N. McKeown FREng, Electrical Engineering and Computer Science and B. Leslie, Creative Labs visiting CERN Control Centre and the LHC tunnel with Director for Accelerators and Technology S. Myers.

    CERN Multimedia

    Anna Pantelia

    2013-01-01

    7 March 2013 -Stanford University Professor N. McKeown FREng, Electrical Engineering and Computer Science and B. Leslie, Creative Labs visiting CERN Control Centre and the LHC tunnel with Director for Accelerators and Technology S. Myers.

  2. Evaluation of Computer-Aided System Design Tools for SDI (Strategic Defense Initiative) Battle Management/C3 (Command, Control and Communications) Architecture Development

    Science.gov (United States)

    1987-10-01


  3. Computationally efficient dynamic modeling of robot manipulators with multiple flexible-links using acceleration-based discrete time transfer matrix method

    DEFF Research Database (Denmark)

    Zhang, Xuping; Sørensen, Rasmus; RahbekIversen, Mathias

    2018-01-01

    This paper presents a novel and computationally efficient modeling method for the dynamics of flexible-link robot manipulators. In this method, a robot manipulator is decomposed into components/elements. The component/element dynamics is established using Newton–Euler equations, and is then linearized based on the acceleration-based state vector. The transfer matrices for each type of component/element are developed and used to establish the system equations of a flexible robot manipulator by concatenating the state vector from the base to the end-effector. With this strategy, the size of the final system dynamic equations does not increase with the number of joints or the number of beam elements into which each link is decomposed. The developed method avoids the traditional computation of the global system dynamic equations, which usually have large size for flexible robot...
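
    The payoff of the transfer-matrix strategy - constant system size regardless of chain length - is easy to see in a sketch with placeholder matrices (random perturbations of the identity below, not the paper's linearised Newton-Euler blocks).

```python
# Illustrative-only sketch of the transfer-matrix idea: each element maps a
# fixed-size state vector from its input end to its output end, so the whole
# chain reduces to a product of small matrices and the system size never
# grows with the number of elements.
import numpy as np

rng = np.random.default_rng(4)
state_dim = 12                       # e.g. displacements/forces at a section
elements = [np.eye(state_dim) + 0.01 * rng.standard_normal((state_dim,) * 2)
            for _ in range(30)]      # one placeholder transfer matrix each

U = np.eye(state_dim)
for T in elements:                   # concatenate base -> end-effector
    U = T @ U

z_base = rng.standard_normal(state_dim)
z_tip = U @ z_base                   # boundary conditions close the system
print(U.shape, z_tip.shape)          # size stays (12, 12) for any chain
```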

  4. Learning without experience: Understanding the strategic implications of deregulation and competition in the electricity industry

    Energy Technology Data Exchange (ETDEWEB)

    Lomi, A. [School of Economics, University of Bologna, Bologna (Italy); Larsen, E.R. [Dept. of Managements Systems and Information, City University Business School, London (United Kingdom)

    1998-11-01

    As deregulation of the electricity industry continues to gain momentum around the world, electricity companies face unprecedented challenges. Competitive complexity and intensity will increase substantially as deregulated companies find themselves competing in new industries, with new rules, against unfamiliar competitors - and without any history to learn from. We describe the different kinds of strategic issues that newly deregulated utility companies are facing, and the risks those issues entail. We identify a number of problems induced by experiential learning under conditions of competence-destroying change, and we illustrate ways in which companies can activate history-independent learning processes. We suggest that Microworlds - a new generation of computer-based learning environments made possible by conceptual and technological progress in the fields of system dynamics and systems thinking - are particularly appropriate tools to accelerate and enhance organizational and managerial learning under conditions of increased competitive complexity. (au)

  5. Moving Beyond Strategic Planning to Strategic Thinking.

    Science.gov (United States)

    Wolverton, Mimi; Gmelch, Walter

    1999-01-01

    Examines a moderately sized Washington school district's efforts to move beyond strategic planning as a segregated activity toward thinking strategically about long-term plans to govern both tactical operations and the district's future. Top management grew to recognize the legitimacy of multiple external and internal constituent claims. (25…

  6. Revisiting Strategic versus Non-strategic Cooperation

    NARCIS (Netherlands)

    Reuben, E.; Suetens, S.

    2009-01-01

    We use a novel experimental design to disentangle strategically- and non-strategically-motivated cooperation. By using contingent responses in a repeated sequential prisoners’ dilemma with a known probabilistic end, we differentiate end-game behavior from continuation behavior within individuals

  8. Strategic information security

    CERN Document Server

    Wylder, John

    2003-01-01

    Contents include: Introduction to Strategic Information Security; What Does It Mean to Be Strategic?; Information Security Defined; The Security Professional's View of Information Security; The Business View of Information Security; Changes Affecting Business and Risk Management; Strategic Security; Strategic Security or Security Strategy?; Monitoring and Measurement; Moving Forward. Organizational issues: The Life Cycles of Security Managers; Introduction; The Information Security Manager's Responsibilities; The Evolution of Data Security to Information Security; The Repository Concept; Changing Job Requirements; Business Life Cycles

  9. Monte Carlo simulations of molecular gas flow: some applications in accelerator vacuum technology using a versatile personal computer program

    Energy Technology Data Exchange (ETDEWEB)

    Pace, A.; Poncet, A. (European Organization for Nuclear Research, Geneva (Switzerland))

    1990-01-01

    The Monte Carlo technique has been used extensively in the past to solve the problem of molecular flow through vacuum pipes or structures with specific boundary conditions for which analytical or even approximate solutions do not exist. Starting from a specific program written in 1975, the idea germinated over the years to produce handy, rather general, problem solving applications capable of running efficiently on modern microcomputers, mainly for ease of transportability and interactivity. Here, the latest version is described. The capabilities and limitations of these tools are presented through a few practical cases of conductance and pumping speed calculations pertinent to accelerator vacuum technology. (author).
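
    The test-particle idea behind such programs fits in a short script. The sketch below estimates the transmission probability (Clausing factor) of a circular tube under free molecular flow with diffuse cosine-law wall re-emission; the geometry and sampling choices are generic, not those of the CERN program.

```python
# Test-particle Monte Carlo sketch for free molecular flow: transmission
# probability of a circular tube of radius R and length L, with diffuse
# cosine-law re-emission at every wall collision.
import numpy as np

rng = np.random.default_rng(5)

def cosine_dir(normal):
    """Unit vector sampled from a cosine distribution about `normal`."""
    u, phi = rng.random(), 2.0 * np.pi * rng.random()
    st, ct = np.sqrt(u), np.sqrt(1.0 - u)
    helper = np.array([1.0, 0, 0]) if abs(normal[0]) < 0.9 else np.array([0, 1.0, 0])
    t1 = np.cross(normal, helper)
    t1 /= np.linalg.norm(t1)
    t2 = np.cross(normal, t1)
    return st * np.cos(phi) * t1 + st * np.sin(phi) * t2 + ct * normal

def transmitted(R=1.0, L=2.0):
    r = R * np.sqrt(rng.random())               # uniform over the entrance disc
    p = np.array([r, 0.0, 0.0])                 # start on the plane z = 0
    d = cosine_dir(np.array([0.0, 0.0, 1.0]))   # cosine-law influx
    while True:
        # distance along d to the cylinder wall x^2 + y^2 = R^2
        a = d[0] ** 2 + d[1] ** 2
        b = 2.0 * (p[0] * d[0] + p[1] * d[1])
        c = p[0] ** 2 + p[1] ** 2 - R ** 2
        t_wall = (-b + np.sqrt(max(b * b - 4 * a * c, 0.0))) / (2 * a) if a > 1e-12 else np.inf
        # distance to whichever end plane the particle is heading toward
        if abs(d[2]) < 1e-12:
            t_end = np.inf
        else:
            t_end = (L - p[2]) / d[2] if d[2] > 0 else -p[2] / d[2]
        if t_end < t_wall:
            return d[2] > 0                     # True if it left the far end
        p = p + t_wall * d
        p[:2] *= R / np.hypot(p[0], p[1])       # snap exactly onto the wall
        d = cosine_dir(np.array([-p[0], -p[1], 0.0]) / R)   # diffuse bounce

n = 20000
print("Clausing factor ~", sum(transmitted() for _ in range(n)) / n)
```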

  10. Accelerating hyper-spectral data processing on the multi-CPU and multi-GPU heterogeneous computing platform

    Science.gov (United States)

    Zhang, Lei; Gao, Jiao Bo; Hu, Yu; Wang, Ying Hui; Sun, Ke Feng; Cheng, Juan; Sun, Dan Dan; Li, Yu

    2017-02-01

    During research on hyper-spectral imaging spectrometers, how to process the huge amount of image data is a difficult problem for all researchers: the data rate is of the order of several hundred megabytes per second. The only way to solve this problem is parallel computing technology. With the development of multi-core CPUs and GPUs, parallel computing on them is increasingly applied to large-scale data processing. In this paper, we propose a new parallel computing solution for hyper-spectral data processing based on a multi-CPU and multi-GPU heterogeneous computing platform. We use OpenMP technology to control the multi-core CPUs, and CUDA to schedule the parallel computing on multiple GPUs. Experimental results show that hyper-spectral data processing on the multi-CPU and multi-GPU heterogeneous computing platform is substantially faster than the traditional serial algorithm running on a single-core CPU. Our research has significance for the engineering application of the windowing Fourier transform imaging spectrometer.

  11. Strategizing NATO's Narratives

    DEFF Research Database (Denmark)

    Nissen, Thomas Elkjer

    2014-01-01

    … implementation structures, and capabilities can be used to inform the construction of strategic narratives in NATO. Using Libya as a case study, he explains that the formulation and implementation of strategic narratives in NATO is currently a fragmented process that rarely takes into account the grand strategic...

  12. Understanding the effect of touchdown distance and ankle joint kinematics on sprint acceleration performance through computer simulation.

    Science.gov (United States)

    Bezodis, Neil Edward; Trewartha, Grant; Salo, Aki Ilkka Tapio

    2015-06-01

    This study determined the effects of simulated technique manipulations on early acceleration performance. A planar seven-segment angle-driven model was developed and quantitatively evaluated based on the agreement of its output to empirical data from an international-level male sprinter (100 m personal best = 10.28 s). The model was then applied to independently assess the effects of manipulating touchdown distance (horizontal distance between the foot and centre of mass) and range of ankle joint dorsiflexion during early stance on horizontal external power production during stance. The model matched the empirical data with a mean difference of 5.2%. When the foot was placed progressively further forward at touchdown, horizontal power production continually reduced. When the foot was placed further back, power production initially increased (a peak increase of 0.7% occurred at 0.02 m further back) but decreased as the foot continued to touchdown further back. When the range of dorsiflexion during early stance was reduced, exponential increases in performance were observed. Increasing negative touchdown distance directs the ground reaction force more horizontally; however, a limit to the associated performance benefit exists. Reducing dorsiflexion, which required achievable increases in the peak ankle plantar flexor moment, appears potentially beneficial for improving early acceleration performance.

  13. CARAT-GxG: CUDA-Accelerated Regression Analysis Toolkit for Large-Scale Gene-Gene Interaction with GPU Computing System.

    Science.gov (United States)

    Lee, Sungyoung; Kwon, Min-Seok; Park, Taesung

    2014-01-01

    In genome-wide association studies (GWAS), regression analysis has been most commonly used to establish an association between a phenotype and genetic variants, such as single nucleotide polymorphisms (SNPs). However, most applications of regression analysis have been restricted to the investigation of a single marker because of the large computational burden; thus, there have been limited applications of regression analysis to multiple SNPs, including gene-gene interaction (GGI), in large-scale GWAS data. In order to overcome this limitation, we propose CARAT-GxG, a GPU-oriented toolkit for performing regression analysis with GGI using CUDA (compute unified device architecture). Compared to other methods, CARAT-GxG achieved an almost 700-fold execution speedup and delivered highly reliable results through our GPU-specific optimization techniques. In addition, almost linear speedup with the scale of the GPU computing system, which is implemented with the TORQUE Resource Manager, could be achieved. We expect that CARAT-GxG will enable large-scale regression analysis with GGI for GWAS data.

  14. Final Report on Institutional Computing Project s15_hilaserion, “Kinetic Modeling of Next-Generation High-Energy, High-Intensity Laser-Ion Accelerators as an Enabling Capability”

    Energy Technology Data Exchange (ETDEWEB)

    Albright, Brian James [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Yin, Lin [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Stark, David James [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-02-06

    This proposal sought on the order of 1M core-hours of Institutional Computing time, intended to enable computing by a new LANL postdoc (David Stark) working under LDRD ER project 20160472ER (PI: Lin Yin) on laser-ion acceleration. The project was "off-cycle," initiating in June of 2016 with a postdoc hire.

  15. Learning to think strategically.

    Science.gov (United States)

    1994-01-01

    Strategic thinking focuses on issues that directly affect the ability of a family planning program to attract and retain clients. This issue of "The Family Planning Manager" outlines the five steps of strategic thinking in family planning administration: 1) define the organization's mission and strategic goals; 2) identify opportunities for improving quality, expanding access, and increasing demand; 3) evaluate each option in terms of its compatibility with the organization's goals; 4) select an option; and 5) transform strategies into action. Also included in this issue is a 20-question test designed to permit readers to assess their "strategic thinking quotient" and a list of sample questions to guide a strategic analysis.

  16. LHCb GPU Acceleration Project

    CERN Document Server

    Campora Perez, Daniel Hugo; Neufeld, Niko; Vilasis Cardona, Xavier

    2016-01-01

    The LHCb detector is due to be upgraded for processing high-luminosity collisions, which will increase the load on its computation infrastructure from 100 GB/s to 4 TB/s, encouraging us to look for new ways of accelerating the Online reconstruction. The Coprocessor Manager is our new framework for integrating LHCb’s existing computation pipelines with massively parallel algorithms running on GPUs and other accelerators. This paper describes the system and analyzes its performance.

  17. Golden-Finger and Back-Door: Two HW/SW Mechanisms for Accelerating Multicore Computer Systems

    Directory of Open Access Journals (Sweden)

    Slo-Li Chu

    2012-01-01

    Continuously growing requirements for high-performance computing lead computer systems to adopt more processors to improve parallelism and throughput. Although multiple processing cores may be implemented in a computer system, the complicated hardware communication mechanism between processors can decrease the performance of the overall system. Moreover, the unsuitable process scheduling mechanisms of conventional operating systems cannot fully utilize the computational power of the additional processors. Accordingly, this paper provides two mechanisms, one in software and one in hardware, to overcome these challenges. On the software side, we propose a tool, called Golden-Finger, that dynamically adjusts the scheduling policy of the Linux process scheduler; this mechanism can improve the performance of a specified process by letting it occupy a processor exclusively. On the hardware side, we design an effective mechanism, called Back-Door, to allow communication between two independent processors that otherwise cannot operate together, such as the dual PowerPC 405 cores in the Xilinx ML310 system. The experimental results reveal that the two mechanisms obtain significant performance enhancements.
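
    On the software side, the closest portable analogue of dedicating a core to a favoured process is CPU affinity; a Linux-only sketch follows (the paper's Golden-Finger tool adjusts the Linux scheduler itself, which this does not reproduce).

```python
# Linux-only sketch: restrict a favoured process to one core via CPU
# affinity. Unlike the paper's Golden-Finger tool, this does not migrate
# other processes away from that core, so the core is not truly exclusive.
import os

favoured_pid = os.getpid()                     # stand-in for a target pid
os.sched_setaffinity(favoured_pid, {3})        # assumes a core numbered 3
print("now allowed on CPUs:", os.sched_getaffinity(favoured_pid))
```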

  18. Examining the Impact of Strategic Learning on Strategic Agility

    OpenAIRE

    Wael Mohamad Subhi Idris; Methaq Taher Kadhim AL-Rubaie

    2013-01-01

    The main aim of this study is to examine the impact of strategic learning on strategic agility in the Elba House Company in Jordan. The study adopts a descriptive analytical approach to achieve its objectives. Of a total of (55) individuals, (47) responded and answered the distributed questionnaire. The study finds that strategic learning (strategic knowledge creation, strategic knowledge distribution, strategic knowledge interpretation and strategic knowledge implementation) ...

  19. Strategic Analysis Overview

    Science.gov (United States)

    Cirillo, William M.; Earle, Kevin D.; Goodliff, Kandyce E.; Reeves, J. D.; Stromgren, Chel; Andraschko, Mark R.; Merrill, R. Gabe

    2008-01-01

    NASA's Constellation Program employs a strategic analysis methodology to provide an integrated analysis capability for lunar exploration scenarios and to support strategic decision-making regarding those scenarios. The strategic analysis methodology integrates the assessment of the major contributors to strategic objective satisfaction - performance, affordability, and risk - and captures the linkages and feedbacks between all three components. Strategic analysis supports strategic decision making by senior management through comparable analysis of alternative strategies, provision of a consistent set of high-level value metrics, and the enabling of cost-benefit analysis. The tools developed to implement the strategic analysis methodology are not element design and sizing tools; rather, these models evaluate strategic performance using predefined elements, imported into a library from expert-driven design/sizing tools or expert analysis. Specific components of the strategic analysis tool set include scenario definition, requirements generation, mission manifesting, scenario lifecycle costing, crew time analysis, objective satisfaction benefit, risk analysis, and probabilistic evaluation. Results from all components of strategic analysis are evaluated against a set of pre-defined figures of merit (FOMs). These FOMs capture the high-level strategic characteristics of all scenarios and facilitate direct comparison of options. The strategic analysis methodology described in this paper has previously been applied to the Space Shuttle and International Space Station Programs and is now being used to support the development of the baseline Constellation Program lunar architecture. This paper presents an overview of the strategic analysis methodology and sample results from its application to the Constellation Program lunar architecture.

  20. Implementation of the networked computer based control system for PEFP 100MeV proton linear accelerator

    Energy Technology Data Exchange (ETDEWEB)

    Song, Young Gi; Kwon, Hyeok Jung; Jang, Ji Ho; Cho, Yong Sub [KAERI, Daejeon (Korea, Republic of)

    2012-10-15

    The 100-MeV radio frequency (RF) linac for a pulsed proton source is under development at KAERI. The main systems of the linac, such as the general timing control, the high-power RF system, the klystron control system, the magnet power supply system, the vacuum subsystem, and the cooling system, should be integrated into the PEFP control system. The various subsystem units of the linac are made by different manufacturers with different standards. The technical integration is based upon the Experimental Physics and Industrial Control System (EPICS) software framework, and networked computers such as workstations, servers, VME systems, and embedded systems are applied as control devices. This paper discusses the integration and implementation of the distributed control system using networked computer systems.
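
    As context for what EPICS-based integration looks like from the client side (not taken from the paper), a minimal sketch using the pyepics Channel Access binding reads and writes process variables; the PV names below are hypothetical:

        from epics import caget, caput   # pyepics Channel Access client

        # Hypothetical process-variable names, for illustration only.
        pressure = caget("PEFP:LINAC:VAC:Pressure")    # read a vacuum gauge
        caput("PEFP:LINAC:RF:PowerSetpoint", 1.5)      # write an RF setpoint
        print(f"vacuum pressure: {pressure}")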

  1. Strategic Shock: Managing the Strategic Gap

    Science.gov (United States)

    2013-03-01

    planning for strategic shocks. In Blindside, Francis Fukuyama covers much of the same intellectual territory with a specific focus on national security... "Anticipating Strategic Surprise," in Blindside, ed. Francis Fukuyama (Washington, D.C.: Brookings Institution Press, 2007)... Their theories and application are focused on the business environment rather than the national security environment.

  2. Vol. 34 - Optimization of quench protection heater performance in high-field accelerator magnets through computational and experimental analysis

    CERN Document Server

    Salmi, Tiina

    2016-01-01

    Superconducting accelerator magnets with increasingly high magnetic fields are being designed to improve the performance of the Large Hadron Collider (LHC) at CERN. One of the technical challenges is the magnet quench protection, i.e., preventing damage in the case of an unexpected loss of superconductivity and the heat generation related to that. Traditionally this is done by disconnecting the magnet current supply and using so-called protection heaters. The heaters suppress the superconducting state across a large fraction of the winding, thus leading to a uniform dissipation of the stored energy. Preliminary studies suggested that the high-field Nb3Sn magnets under development for the LHC luminosity upgrade (HiLumi) could not be reliably protected using the existing heaters. In this thesis work I analyzed in detail the present state-of-the-art protection heater technology, aiming to optimize its performance and evaluate the prospects in high-field magnet protection. The heater efficiency analyses ...

  3. Prediction of peak ground acceleration of Iran’s tectonic regions using a hybrid soft computing technique

    Institute of Scientific and Technical Information of China (English)

    Mostafa Gandomi; Mohsen Soltanpour; Mohammad R. Zolfaghari; Amir H. Gandomi

    2016-01-01

    A new model is derived to predict the peak ground acceleration (PGA) utilizing a hybrid method coupling an artificial neural network (ANN) and simulated annealing (SA), called SA-ANN. The proposed model relates PGA to earthquake source-to-site distance, earthquake magnitude, average shear-wave velocity, faulting mechanism, and focal depth. A database of strong ground-motion recordings of 36 earthquakes, which happened in Iran's tectonic regions, is used to establish the model. For further validity verification, the SA-ANN model is employed to predict the PGA of a part of the database beyond the training data domain. The proposed SA-ANN model is compared with the simple ANN in addition to 10 well-known models proposed in the literature. The proposed model's performance is superior to the single ANN and other existing attenuation models. The SA-ANN model is highly correlated to the actual records (R = 0.835 and r = 0.0908) and it is subsequently converted into a tractable design equation.
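
    The record gives no algorithmic detail, but the hybrid idea can be sketched as simulated annealing searching the weights of a small feed-forward network. The toy below substitutes synthetic data for the strong-motion records; the network size, cooling schedule and step size are invented:

        import numpy as np

        rng = np.random.default_rng(0)
        # Synthetic stand-ins for [distance, magnitude, Vs, mechanism, depth] -> PGA.
        X = rng.normal(size=(200, 5))
        y = rng.normal(size=200)

        def predict(w, X):
            W1 = w[:40].reshape(5, 8)            # input -> 8 tanh hidden units
            W2 = w[40:]                          # hidden -> scalar output
            return np.tanh(X @ W1) @ W2

        def mse(w):
            return np.mean((predict(w, X) - y) ** 2)

        w = rng.normal(scale=0.1, size=48)
        cur, T = mse(w), 1.0
        for _ in range(20000):                   # annealing loop
            cand = w + rng.normal(scale=0.05, size=w.size)
            delta = mse(cand) - cur
            if delta < 0 or rng.random() < np.exp(-delta / T):
                w, cur = cand, cur + delta       # Metropolis acceptance rule
            T *= 0.9997                          # geometric cooling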

  4. Strategic Thoughts in Organizations

    Directory of Open Access Journals (Sweden)

    Juliane Inês Di Francesco Kich

    2014-08-01

    This paper aims to analyze a new way of thinking about organizational strategies through a theoretical discussion of the term "strategic thought" and its development in organizations. To achieve this, bibliographical research was conducted in order to go more deeply into the theme and reach a conceptual background which can support further analysis. Among the results of this research, it is emphasized that the pragmatic characteristics of strategic planning appear to have no more space in the current organizational world; this tool needs to be interconnected with the strategic thought process to bring more effective results. In this regard, the challenge lies in how organizations can develop strategic planning that encourages strategic thought instead of undermining it, as well as in developing tools that promote the ability to think strategically in all employees, regardless of hierarchical level.

  5. Complex Strategic Choices

    DEFF Research Database (Denmark)

    Leleur, Steen

    Complex Strategic Choices provides clear principles and methods which can guide and support strategic decision making to face the many current challenges. By considering ways in which planning practices can be renewed and exploring the possibilities for acquiring awareness and tools to add value to strategic decision making, Complex Strategic Choices presents a methodology which is further illustrated by a number of case studies and example applications. Dr. Techn. Steen Leleur has adapted previously established research based on feedback and input from various conferences, journals and students... It is aimed at researchers and students in the field of planning and decision analysis as well as practitioners dealing with strategic analysis and decision making. More broadly, Complex Strategic Choices acts as a guide for professionals and students involved in complex planning tasks across several fields such as business...

  6. Cultivating strategic thinking skills.

    Science.gov (United States)

    Shirey, Maria R

    2012-06-01

    This department highlights change management strategies that may be successful in strategically planning and executing organizational change initiatives. With the goal of presenting practical approaches helpful to nurse leaders advancing organizational change, content includes evidence-based projects, tools, and resources that mobilize and sustain organizational change initiatives. In this article, the author presents an overview of strategic leadership and offers approaches for cultivating strategic thinking skills.

  7. Strategic Marketing Planning Audit

    OpenAIRE

    Violeta Radulescu

    2012-01-01

    Market-oriented strategic planning is the process of defining and maintaining a viable relationship between objectives, training of personnel and resources of an organization, on the one hand and market conditions, on the other hand. Strategic marketing planning is an integral part of the strategic planning process of the organization. For successful marketing organization to obtain a competitive advantage, but also to measure the effectiveness of marketing actions the company is required to ...

  9. Strategic planning for neuroradiologists.

    Science.gov (United States)

    Berlin, Jonathan W; Lexa, Frank J

    2012-08-01

    Strategic planning is becoming essential to neuroradiology as the health care environment continues to emphasize cost efficiency, teamwork and collaboration. A strategic plan begins with a mission statement and a vision of where the neuroradiology division would like to be in the near future. Formalized strategic planning frameworks, such as the strengths, weaknesses, opportunities, and threats (SWOT) and the Balanced Scorecard frameworks, can help neuroradiology divisions determine their current position in the marketplace. Communication, delegation, and accountability in neuroradiology are essential in executing an effective strategic plan.

  10. Strategic market segmentation

    National Research Council Canada - National Science Library

    Maričić Branko R; Đorđević Aleksandar

    2015-01-01

    ..., requires a segmented approach to the market that appreciates differences in expectations and preferences of customers. One of the significant activities in strategic planning of marketing activities is market segmentation...

  11. DCOI Strategic Plan

    Data.gov (United States)

    General Services Administration — Under the Data Center Optimization Initiative (DCOI), covered agencies are required to post DCOI Strategic Plans and updates to their FITARA milestones publicly on...

  12. Sandia Strategic Plan 1997

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-12-01

    Sandia embarked on its first exercise in corporate strategic planning during the winter of 1989. The results of that effort were disseminated with the publication of Strategic Plan 1990. Four years later Sandia conducted their second major planning effort and published Strategic Plan 1994. Sandia's 1994 planning effort linked very clearly to the Department of Energy's first strategic plan, Fueling a Competitive Economy. It benefited as well from the leadership of Lockheed Martin Corporation, the management and operating contractor. Lockheed Martin's corporate success is founded on visionary strategic planning and annual operational planning driven by customer requirements and technology opportunities. In 1996 Sandia conducted another major planning effort that resulted in the development of eight long-term Strategic Objectives. Strategic Plan 1997 differs from its predecessors in that the robust elements of previous efforts have been integrated into one comprehensive body. The changes implemented so far have helped establish a living strategic plan with a stronger business focus and with clear deployment throughout Sandia. The concept of a personal line of sight for all employees to this strategic plan and its objectives, goals, and annual milestones is becoming a reality.

  14. Strategic Belief Management

    DEFF Research Database (Denmark)

    Foss, Nicolai Juul

    While (managerial) beliefs are central to many aspects of strategic organization, interactive beliefs are almost entirely neglected, save for some game theory treatments. In an increasingly connected and networked economy, firms confront coordination problems that arise because of network effects. The capability to manage beliefs will increasingly be a strategic one, a key source of wealth creation, and a key research area for strategic organization scholars.

  15. Acceleration of Computational Fluid Dynamics Codes on GPU

    Institute of Scientific and Technical Information of China (English)

    董廷星; 李新亮; 李森; 迟学斌

    2011-01-01

    Computational Fluid Dynamics (CFD) codes based on incompressible Navier-Stokes, compressible Euler, and compressible Navier-Stokes solvers are ported to an NVIDIA GPU. As validation tests, we simulated a two-dimensional cavity flow, a Riemann problem, and a transonic flow over a RAE2822 airfoil. A maximum speedup of 33.2x is reported in our tests. To maximize GPU code performance, we also explore a number of GPU-specific optimization strategies. The GPU code gives the expected results compared with the CPU code and with experimental data, demonstrating that GPU computing has good compatibility and a bright future.
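
    To make the parallelisation pattern concrete (illustrative only, not the paper's code): a typical CFD relaxation or time step is a stencil update in which every interior grid point depends only on its neighbours from the previous sweep, which is exactly the structure that maps onto one GPU thread per point. A NumPy sketch of such a step on a cavity-like grid:

        import numpy as np

        def jacobi_step(u):
            # Every interior point is updated independently -> data parallel.
            v = u.copy()
            v[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1]
                                    + u[1:-1, :-2] + u[1:-1, 2:])
            return v

        u = np.zeros((256, 256))
        u[0, :] = 1.0                  # moving-lid style boundary condition
        for _ in range(500):
            u = jacobi_step(u)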

  16. ARCHER, a New Monte Carlo Software Tool for Emerging Heterogeneous Computing Environments

    Science.gov (United States)

    Xu, X. George; Liu, Tianyu; Su, Lin; Du, Xining; Riblett, Matthew; Ji, Wei; Gu, Deyang; Carothers, Christopher D.; Shephard, Mark S.; Brown, Forrest B.; Kalra, Mannudeep K.; Liu, Bob

    2014-06-01

    The Monte Carlo radiation transport community faces a number of challenges associated with peta- and exa-scale computing systems that rely increasingly on heterogeneous architectures involving hardware accelerators such as GPUs. Existing Monte Carlo codes and methods must be strategically upgraded to meet emerging hardware and software needs. In this paper, we describe the development of a software tool, called ARCHER (Accelerated Radiation-transport Computations in Heterogeneous EnviRonments), which is designed as a versatile testbed for future Monte Carlo codes. Preliminary results from five projects in nuclear engineering and medical physics are presented.
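
    The style of kernel such a testbed batches onto accelerators can be illustrated by a minimal, vectorised photon-attenuation estimate (a generic sketch with invented parameters, not ARCHER code): free paths are sampled as s = -ln(xi)/mu and compared with a slab thickness.

        import numpy as np

        rng = np.random.default_rng(1)
        mu, slab, n = 0.2, 5.0, 1_000_000    # 1/cm, cm, photon count (assumed)

        # Sample exponential free paths and count photons crossing the slab.
        paths = -np.log(rng.random(n)) / mu
        mc = np.count_nonzero(paths > slab) / n
        print(f"MC transmission {mc:.4f} vs analytic {np.exp(-mu * slab):.4f}")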

  17. GPU-based implementation of an accelerated SR-NLUT based on N-point one-dimensional sub-principal fringe patterns in computer-generated holograms

    Directory of Open Access Journals (Sweden)

    Hee-Min Choi

    2015-06-01

    An accelerated spatial-redundancy-based novel-look-up-table (A-SR-NLUT) method based on a new concept of the N-point one-dimensional sub-principal fringe pattern (N-point 1-D sub-PFP) is implemented on a graphics processing unit (GPU) for fast calculation of computer-generated holograms (CGHs) of three-dimensional (3-D) objects. Since the proposed method can generate the N-point two-dimensional (2-D) PFPs for CGH calculation from the pre-stored N-point 1-D PFPs, the loading time of the N-point PFPs on the GPU can be dramatically reduced, which results in a great increase in the computational speed of the proposed method. Experimental results confirm that the average calculation time for one object point is reduced by 49.6% and 55.4% compared with those of the conventional 2-D SR-NLUT methods for the 2-point and 3-point SR maps, respectively.
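
    One plausible reading of why 1-D tables suffice (an assumption for illustration, not taken from the record): the Fresnel zone term cos(k(x^2 + y^2)) separates into products of pre-stored 1-D cosine and sine tables through the angle-addition identity, so a 2-D PFP can be rebuilt from two outer products:

        import numpy as np

        wavelength, z = 633e-9, 0.5                 # metres, illustrative values
        k = np.pi / (wavelength * z)
        x = np.linspace(-5e-3, 5e-3, 1024)

        c, s = np.cos(k * x**2), np.sin(k * x**2)   # the pre-stored 1-D tables
        # cos(k x^2 + k y^2) = cos(k x^2)cos(k y^2) - sin(k x^2)sin(k y^2)
        pfp_2d = np.outer(c, c) - np.outer(s, s)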

  18. How Strategic are Strategic Information Systems?

    Directory of Open Access Journals (Sweden)

    Alan Eardley

    1996-11-01

    There are many examples of information systems which are claimed to have created and sustained competitive advantage, allowed beneficial collaboration or simply ensured the continued survival of the organisations which used them. These systems are often referred to as being 'strategic'. This paper argues that many of the examples of strategic information systems as reported in the literature are not sufficiently critical in determining whether the systems meet the generally accepted definition of the term 'strategic' - that of achieving sustainable competitive advantage. Eight of the information systems considered to be strategic are examined here from the standpoint of one widely accepted 'competition' framework - Porter's model of industry competition. The framework is then used to question the linkage between the information systems and the mechanisms which are required for the enactment of strategic business objectives based on competition. Conclusions indicate that the systems are compatible with Porter's framework. Finally, some limitations of the framework are discussed and aspects of the systems which extend beyond the framework are highlighted.

  19. 11. Strategic planning.

    Science.gov (United States)

    2014-05-01

    There are several types of planning processes and plans, including strategic, operational, tactical, and contingency. For this document, operational planning includes tactical planning. This chapter examines the strategic planning process and includes an introduction to disaster response plans. "A strategic plan is an outline of steps designed with the goals of the entire organisation as a whole in mind, rather than with the goals of specific divisions or departments". Strategic planning includes all measures taken to provide a broad picture of what must be achieved and in which order, including how to organise a system capable of achieving the overall goals. Strategic planning is often done pre-event, based on previous experience and expertise. Strategic planning for disasters converts needs into a strategic plan of action. Strategic plans detail the goals that must be achieved. The process of converting needs into plans has been deconstructed into its components and includes consideration of: (1) disaster response plans; (2) interventions underway or planned; (3) available resources; (4) current status vs. pre-event status; (5) history and experience of the planners; and (6) access to the affected population. These factors are tempered by the local: (a) geography; (b) climate; (c) culture; (d) safety; and (e) practicality. The planning process consumes resources (costs). All plans must be adapted to the actual conditions--things never happen exactly as planned.

  20. The Strategic Mediator

    DEFF Research Database (Denmark)

    Rossignoli, Cecilia; Carugati, Andrea; Mola, Lapo

    2009-01-01

    as an exclusive club, belonging to which provides a strategic advantage. The technology brought forth by the marketplace participates in shaping the strategic demands of the participants, which in turn request the marketplace to redesign its own strategy. Profiting from this unintended demand, the e...

  1. Manage "Human Capital" Strategically

    Science.gov (United States)

    Odden, Allan

    2011-01-01

    To strategically manage human capital in education means restructuring the entire human resource system so that schools not only recruit and retain smart and capable individuals, but also manage them in ways that support the strategic directions of the organization. These management practices must be aligned with a district's education improvement…

  2. Strategic environmental assessment

    DEFF Research Database (Denmark)

    Kørnøv, Lone

    1997-01-01

    The integration of environmental considerations into strategic decision making is recognized as a key to achieving sustainability. In the European Union a draft directive on Strategic Environmental Assessment (SEA) is currently being reviewed by the member states. The nature of the proposed SEA...

  3. Manage "Human Capital" Strategically

    Science.gov (United States)

    Odden, Allan

    2011-01-01

    To strategically manage human capital in education means restructuring the entire human resource system so that schools not only recruit and retain smart and capable individuals, but also manage them in ways that support the strategic directions of the organization. These management practices must be aligned with a district's education improvement…

  4. Improved Strategic Planning

    Science.gov (United States)

    1966-04-08

    to analyze the difficulties of providing the improved strategic planning needed for more orderly progress in human affairs. This analysis consists of an... identification of important conceptual difficulties which stand in the way of improving strategic planning. This thesis concludes that it is necessary

  5. Strategic Leadership in Schools

    Science.gov (United States)

    Williams, Henry S.; Johnson, Teryl L.

    2013-01-01

    Strategic leadership is built upon traits and actions that encompass the successful execution of all leadership styles. In a world that is rapidly changing, strategic leadership in schools guides school leaders through assuring a constant improvement process by anticipating future trends and planning for them, noting that plans must be flexible to…

  6. Strategic HRD within companies

    NARCIS (Netherlands)

    Wognum, A.A.M.; Mulder, M.M.

    1999-01-01

    This article reports a preliminary survey that was conducted within the framework of the project on strategic human resource development (HRD), in which for various aspects of organisations the effects of strategic HRD are explored. The aim of the survey was to explore some conditions that are impor

  7. FY17 Strategic Themes.

    Energy Technology Data Exchange (ETDEWEB)

    Leland, Robert W. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-03-01

    I am pleased to present this summary of the FY17 Division 1000 Science and Technology Strategic Plan. As this plan represents a continuation of the work we started last year, the four strategic themes (Mission Engagement, Bold Outcomes, Collaborative Environment, and Safety Imperative) remain the same, along with many of the goals. You will see most of the changes in the actions listed for each goal: We completed some actions, modified others, and added a few new ones. As I’ve stated previously, this is not a strategy to be pursued in tension with the Laboratory strategic plan. The Division 1000 strategic plan is intended to chart our course as we strive to contribute our very best in service of the greater Laboratory strategy. I welcome your feedback and look forward to our dialogue about these strategic themes. Please join me as we move forward to implement the plan in the coming months.

  8. On strategic spatial planning

    Directory of Open Access Journals (Sweden)

    Tošić Branka

    2014-01-01

    The goal of this paper is to explain the origin and development of strategic spatial planning, to show its complex features, and to highlight the differences and/or advantages over traditional, physical spatial planning. Strategic spatial planning is seen as one of the approaches in legally defined planning documents, and is traced through the properties of sectoral national strategies as well as issues of strategic planning at the local level in Serbia. The strategic approach is clearly recognized at the national and sub-national level of spatial planning in European countries and in our country. It has been confirmed by the goals outlined in documents of the European Union and Serbia that promote the grounds of territorial cohesion and strategic integrated planning, emphasizing cooperation and the principles of sustainable spatial development. [Project of the Ministry of Science of the Republic of Serbia, No. 176017]

  9. Future accelerators (?)

    Energy Technology Data Exchange (ETDEWEB)

    John Womersley

    2003-08-21

    I describe the future accelerator facilities that are currently foreseen for electroweak scale physics, neutrino physics, and nuclear structure. I will explore the physics justification for these machines, and suggest how the case for future accelerators can be made.

  10. The strategic issues - structural elements of strategic management

    OpenAIRE

    Balta Corneliu

    2013-01-01

    The paper presents the most important concepts related to strategic management and the connection with strategic results as they are obtained after the main steps in strategic management are followed. The dynamics of the relationship between strategic issues and objectives is included

  11. THE STRATEGIC OPTIONS IN INVESTMENT PROJECTS VALUATION

    Directory of Open Access Journals (Sweden)

    VIOLETA SĂCUI

    2012-11-01

    The topic of real options applies option valuation techniques to capital budgeting exercises in which a project is coupled with a put or call option. In many project valuation settings, the firm has one or more options to make strategic changes to the project during its life. These strategic options, which are known as real options, are typically ignored in standard discounted cash-flow analysis, where a single expected present value is computed. This paper presents the types of real options that are encountered in economic activity.
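
    As a worked illustration of why ignoring such options understates project value (the paper only surveys option types; every figure below is invented), a one-period binomial lattice can price an option to expand a project:

        # Option to expand a project by 30% for an extra outlay of 20,
        # valued on a one-period binomial lattice (all figures assumed).
        V0, u, d = 100.0, 1.25, 0.80        # project value today, up/down factors
        r, cost, scale = 0.05, 20.0, 0.30   # risk-free rate, expansion cost, added share

        q = (1 + r - d) / (u - d)           # risk-neutral up probability
        up, dn = V0 * u, V0 * d
        pay_up = max(up, up * (1 + scale) - cost)   # expand only if worthwhile
        pay_dn = max(dn, dn * (1 + scale) - cost)
        with_option = (q * pay_up + (1 - q) * pay_dn) / (1 + r)
        print(f"value added by the expansion option: {with_option - V0:.2f}")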

  12. Strategic agility for nursing leadership.

    Science.gov (United States)

    Shirey, Maria R

    2015-06-01

    This department highlights change management strategies that may be successful in strategically planning and executing organizational change. In this article, the author discusses strategic agility as an important leadership competency and offers approaches for incorporating strategic agility in healthcare systems. A strategic agility checklist and infrastructure-building approach are presented.

  13. Accelerating Value Creation with Accelerators

    DEFF Research Database (Denmark)

    Jonsson, Eythor Ivar

    2015-01-01

    accelerator programs. Microsoft runs accelerators in seven different countries. Accelerators have grown out of the infancy stage and are now an accepted approach to develop new ventures based on cutting-edge technology like the internet of things, mobile technology, big data and virtual reality... The traditional audit and legal universes and industries are examples of emerging potentials, both from a research and business point of view, to exploit and explore further. The accelerator approach may therefore be an Idea Watch to consider, no matter which industry you are in, because in essence accelerators...

  14. Accelerating Value Creation with Accelerators

    DEFF Research Database (Denmark)

    Jonsson, Eythor Ivar

    2015-01-01

    Accelerators can help to accelerate value creation. Accelerators are short-term programs that have the objective of creating innovative and fast-growing ventures. They have gained attraction as larger corporations like Microsoft, Barclays bank and Nordea bank have initiated and sponsored accelerator programs... The accelerator approach can facilitate the implementation and realization of business ideas and is a lucrative approach to transform research into ventures and to revitalize regions and industries in transition. Investors have noticed that the accelerator approach is a way to increase the possibility of success by funnelling...

  15. Value oriented strategic marketing

    Directory of Open Access Journals (Sweden)

    Milisavljević Momčilo

    2013-01-01

    Changes in today's business environment require companies to orient to strategic marketing. The company accepting strategic marketing has a proactive approach and focus on continuous review and reappraisal of existing and seeking new strategic business areas. Difficulties in achieving target profit and growth require turning marketing from the dominant viewpoint of the tangible product to creating superior value and developing relationships with customers. Value orientation implies gaining competitive advantage through continuous research and understanding of what value represents to the consumers and discovering new ways to meet their required values. Strategic marketing investment requires that the investment in the creation of values should be regularly reviewed in order to ensure a focus on customers with high profit potential and environmental value. This increases customer satisfaction and retention and long-term return on investment of companies.

  16. Engineering Forum Strategic Plan

    Science.gov (United States)

    This Strategic Plan highlights the purpose, mission, goals, and objectives of the U.S. Environmental Protection Agency (EPA) Engineering Forum (EF). It sets forth the principles that guide the EF's decision-making, helps clarify the EF's priorities, and...

  17. Strategic Management: General Concepts

    Directory of Open Access Journals (Sweden)

    Shahram Tofighi

    2010-05-01

    In the era after the substitution of long-term planning by strategic planning, it was hoped that managers could act more successfully in implementing their plans. The outcomes were far from what was expected; there were only minor improvements. In organizations, plenty of nominally strategic plans have been developed during strategic planning processes, but most of these plans have been kept on the shelves; only a few have played their role as guiding documents for the entire organization. What are the factors inducing such outcomes? Different scientists have offered a variety of justifications, according to their experiences. The first issue examined was the misunderstanding of strategic planning by managers and staff; the strategic planning process may be executed erroneously, and what they had expected from this process was not accurate. Substantially, strategic planning looks at the future and coming situations, and is designed to answer the questions which emerge in the future. Unfortunately, this critical and fundamental characteristic of strategic planning is often obscured. Strategic planning conveys the concept of drawing the future and developing a set of different probable scenarios, along with defining a set of solutions to combat undesirable coming conditions, in order to position the system or business. It helps organizations keep themselves safe and maintain their success. In other words, in strategic planning efforts we are seeking solutions fit for problems which will appear in the future, for the conditions that will emerge in the future. Unfortunately, most of the strategic plans which have been developed in organizations lack this important and critical characteristic; in most of them the developers had offered solutions to solve today's problems in the future! The second issue considered by the scientists was the task of ensuring the continuity of effectiveness of the planning; there was a

  18. Transactional vs. Strategic Sourcing

    National Research Council Canada - National Science Library

    Mark Ware

    2010-01-01

    .... It's frequently a situation of "here's the comparator that's needed. . .find it yesterday." Having to source a comparator in a very short amount of time can impose a transactional approach to sourcing, rather than a strategic one...

  19. Strategic planning in transition

    DEFF Research Database (Denmark)

    Olesen, Kristian; Richardson, Tim

    2012-01-01

    In this paper, we analyse how contested transitions in planning rationalities and spatial logics have shaped the processes and outputs of recent episodes of Danish 'strategic spatial planning'. The practice of 'strategic spatial planning' in Denmark has undergone a concerted reorientation in recent years as a consequence of an emerging neoliberal agenda promoting a growth-oriented planning approach emphasising a new spatial logic of growth centres in the major cities and urban regions. The analysis of the three planning episodes, at different subnational scales, highlights how this new style of 'strategic spatial planning' with its associated spatial logics is continuously challenged by a persistent regulatory, top-down rationality of 'strategic spatial planning', rooted in spatial Keynesianism, which has long characterised the Danish approach. The findings reveal the emergence...

  20. Strategic Communication Institutionalized

    DEFF Research Database (Denmark)

    Kjeldsen, Anna Karina

    2013-01-01

    (1) How can early symptoms of institutionalization be detected when strategic communication is not yet visible as organizational practice, and how can such detections provide an explanation for the later outcome of the process? (2) How can studies of strategic communication benefit from an institutional perspective? How can the virus metaphor generate a deeper understanding of the mechanisms that interact from the time an organization is exposed to a new organizational idea such as strategic communication until it surfaces in the form of symptoms such as mission and vision statements, communication manuals and communication positions? The first part of the article focuses on a discussion of the virus metaphor as an alternative to the widespread fashion metaphor for processes of institutionalization. The second part of the article provides empirical examples of the virus metaphor employed, examples that are drawn from a study of the institutionalization of strategic communication.

  1. The IAU Strategic Plan

    Science.gov (United States)

    Miley, George

    2016-10-01

    I shall review the content of the IAU Strategic Plan (SP) to use astronomy as a tool for stimulating development globally during the decade 2010 - 2020. Considerable progress has been made in its implementation since the last General Assembly.

  2. Accelerated Metals Development by Computation

    Science.gov (United States)

    2008-02-01

    MS&T 2006, Cincinnati, Shade etal. MRS 2006 Fall Meeting, Boston, Shade etal. 2007 International Workshop on Small Scale Plasticity, Braunwald ... Braunwald , Switzerland “Characterization of Grain Growth Behavior in a Nickel-Base Alloy,” Materials Science & Technology 2006 conference (MS&T’06

  3. Hardware accelerated computer graphics algorithms

    OpenAIRE

    Rhodes, DT

    2008-01-01

    The advent of shaders in the latest generations of graphics hardware, which has made consumer level graphics hardware partially programmable, makes now an ideal time to investigate new graphical techniques and algorithms as well as attempting to improve upon existing ones. This work looks at areas of current interest within the graphics community such as Texture Filtering, Bump Mapping and Depth of Field simulation. These are all areas which have enjoyed much interest over the history of comp...

  4. Strategizing in multiple ways

    DEFF Research Database (Denmark)

    Larsen, Mette Vinther; Madsen, Charlotte Øland; Rasmussen, Jørgen Gulddahl

    2013-01-01

    Strategy processes are kinds of wayfaring where different actors interpret a formally defined strategy differently. In the everyday practice of organizations strategizing takes place in multiple ways through narratives and sensible actions. This forms a meshwork of polyphonic ways to enact one a... The question raised in this development paper is whether one can understand these divergent strategic wayfaring processes as constructive for organizations.

  5. 2015 Enterprise Strategic Vision

    Energy Technology Data Exchange (ETDEWEB)

    None

    2015-08-01

    This document aligns with the Department of Energy Strategic Plan for 2014-2018 and provides a framework for integrating our missions and direction for pursuing DOE’s strategic goals. The vision is a guide to advancing world-class science and engineering, supporting our people, modernizing our infrastructure, and developing a management culture that operates a safe and secure enterprise in an efficient manner.

  6. Working and strategic memory deficits in schizophrenia

    Science.gov (United States)

    Stone, M.; Gabrieli, J. D.; Stebbins, G. T.; Sullivan, E. V.

    1998-01-01

    Working memory and its contribution to performance on strategic memory tests in schizophrenia were studied. Patients (n = 18) and control participants (n = 15), all men, received tests of immediate memory (forward digit span), working memory (listening, computation, and backward digit span), and long-term strategic (free recall, temporal order, and self-ordered pointing) and nonstrategic (recognition) memory. Schizophrenia patients performed worse on all tests. Education, verbal intelligence, and immediate memory capacity did not account for deficits in working memory in schizophrenia patients. Reduced working memory capacity accounted for group differences in strategic memory but not in recognition memory. Working memory impairment may be central to the profile of impaired cognitive performance in schizophrenia and is consistent with hypothesized frontal lobe dysfunction associated with this disease. Additional medial-temporal dysfunction may account for the recognition memory deficit.

  7. Strategic market segmentation

    Directory of Open Access Journals (Sweden)

    Maričić Branko R.

    2015-01-01

    Strategic planning of marketing activities is the basis of business success in the modern business environment. Customers are not homogeneous in their preferences and expectations. Formulating an adequate marketing strategy, focused on realization of the company's strategic objectives, requires a segmented approach to the market that appreciates differences in expectations and preferences of customers. One of the significant activities in strategic planning of marketing activities is market segmentation. Strategic planning imposes a need to plan marketing activities according to strategically important segments on a long-term basis. At the same time, there is a need to revise and adapt marketing activities on a short-term basis. There are a number of criteria based on which market segmentation is performed. The paper considers the effectiveness and efficiency of different market segmentation criteria based on empirical research of customer expectations and preferences. The analysis includes traditional criteria and criteria based on a behavioral model. The research implications are analyzed from the perspective of selecting the most adequate market segmentation criteria in strategic planning of marketing activities.

  8. COMPUTING

    CERN Multimedia

    M. Kasemann

    Overview In autumn the main focus was to process and handle CRAFT data and to perform the Summer08 MC production. The operational aspects were well covered by regular Computing Shifts, experts on duty and Computing Run Coordination. At the Computing Resource Board (CRB) in October a model to account for service work at Tier 2s was approved. The computing resources for 2009 were reviewed for presentation at the C-RRB. The quarterly resource monitoring is continuing. Facilities/Infrastructure operations Operations during CRAFT data taking ran fine. This proved to be a very valuable experience for T0 workflows and operations. The transfers of custodial data to most T1s went smoothly. A first round of reprocessing started at the Tier-1 centers end of November; it will take about two weeks. The Computing Shifts procedure was tested full scale during this period and proved to be very efficient: 30 Computing Shifts Persons (CSP) and 10 Computing Resources Coordinators (CRC). The shift program for the shut down w...

  9. COMPUTING

    CERN Multimedia

    M. Kasemann

    Overview During the past three months activities were focused on data operations, testing and re-enforcing shift and operational procedures for data production and transfer, MC production and on user support. Planning of the computing resources in view of the new LHC calendar is ongoing. Two new task forces were created for supporting the integration work: Site Commissioning, which develops tools helping distributed sites to monitor job and data workflows, and Analysis Support, collecting the user experience and feedback during analysis activities and developing tools to increase efficiency. The development plan for DMWM for 2009/2011 was developed at the beginning of the year, based on the requirements from the Physics, Computing and Offline groups (see Offline section). The Computing management meeting at FermiLab on February 19th and 20th was an excellent opportunity for discussing the impact of, and for addressing issues and solutions to, the main challenges facing CMS computing. The lack of manpower is particul...

  10. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction CMS distributed computing system performed well during the 2011 start-up. The events in 2011 have more pile-up and are more complex than last year; this results in longer reconstruction times and harder events to simulate. Significant increases in computing capacity were delivered in April for all computing tiers, and the utilisation and load is close to the planning predictions. All computing centre tiers performed their expected functionalities. Heavy-Ion Programme The CMS Heavy-Ion Programme had a very strong showing at the Quark Matter conference. A large number of analyses were shown. The dedicated heavy-ion reconstruction facility at the Vanderbilt Tier-2 is still involved in some commissioning activities, but is available for processing and analysis. Facilities and Infrastructure Operations Facility and Infrastructure operations have been active with operations and several important deployment tasks. Facilities participated in the testing and deployment of WMAgent and WorkQueue+Request...

  11. COMPUTING

    CERN Multimedia

    P. McBride

    The Computing Project is preparing for a busy year where the primary emphasis of the project moves towards steady operations. Following the very successful completion of Computing Software and Analysis challenge, CSA06, last fall, we have reorganized and established four groups in computing area: Commissioning, User Support, Facility/Infrastructure Operations and Data Operations. These groups work closely together with groups from the Offline Project in planning for data processing and operations. Monte Carlo production has continued since CSA06, with about 30M events produced each month to be used for HLT studies and physics validation. Monte Carlo production will continue throughout the year in the preparation of large samples for physics and detector studies ramping to 50 M events/month for CSA07. Commissioning of the full CMS computing system is a major goal for 2007. Site monitoring is an important commissioning component and work is ongoing to devise CMS specific tests to be included in Service Availa...

  12. ‘DEOS CHAMP-01C 70’: a model of the Earth’s gravity field computed from accelerations of the CHAMP satellite

    NARCIS (Netherlands)

    Ditmar, P.G.; Kuznetsov, V.; Van Eck van der Sluis, A.A.; Schrama, E.; Klees, R.

    2005-01-01

    Performance of a recently proposed technique for gravity field modeling has been assessed with data from the CHAMP satellite. The modeling technique is a variant of the acceleration approach. It makes use of the satellite accelerations that are derived from the kinematic orbit with the 3-point
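
    The record is truncated at "3-point", but a 3-point central difference over equally spaced kinematic positions is the standard way to obtain such accelerations: a_i = (r_{i-1} - 2 r_i + r_{i+1}) / dt^2. A toy illustration with an invented sampling step and orbit:

        import numpy as np

        dt = 10.0                                   # sampling interval in s (assumed)
        t = np.arange(0.0, 600.0, dt)
        r = np.stack([np.cos(0.01 * t),             # toy 2-D "orbit" positions
                      np.sin(0.01 * t)], axis=1)

        # 3-point central difference: one acceleration vector per interior epoch.
        acc = (r[:-2] - 2.0 * r[1:-1] + r[2:]) / dt**2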

  13. Restriction of the use of hazardous substances (RoHS in the personal computer segment: analysis of the strategic adoption by the manufacturers settled in Brazil

    Directory of Open Access Journals (Sweden)

    Ademir Brescansin

    2015-09-01

    The enactment of the RoHS Directive (Restriction of Hazardous Substances) in 2003, limiting the use of certain hazardous substances in electronic equipment, has forced companies to adjust their products to comply with this legislation. Even in the absence of similar legislation in Brazil, manufacturers of personal computers located in this country have adopted RoHS for products sold in the domestic market and abroad. The purpose of this study is to analyze whether these manufacturers have really adopted RoHS, focusing on their motivations, concerns, and benefits. This is an exploratory study based on a literature review and interviews with HP, Dell, Sony, Lenovo, Samsung, LG, Itautec, and Positivo, using summative content analysis. The results showed that, initially, global companies adopted RoHS to market products in Europe, and later expanded this practice to all products. Brazilian companies, however, adopted RoHS to participate in the government's sustainable procurement bidding processes. It is expected that this study can assist manufacturers in developing strategies for reducing or eliminating hazardous substances in their products and processes, as well as help the government to formulate public policies on reducing risks of environmental contamination.

  14. Strategic business planning and development for competitive health care systems.

    Science.gov (United States)

    Nauert, Roger C

    2005-01-01

    The health care industry has undergone enormous evolutionary changes in recent years. Competitive transitions have accelerated the compelling need for aggressive strategic business planning and dynamic system development. Success is driven by organizational commitments to farsighted market analyses, timely action, and effective management.

  15. Developing a framework for predicting upper extremity muscle activities, postures, velocities, and accelerations during computer use: the effect of keyboard use, mouse use, and individual factors on physical exposures.

    Science.gov (United States)

    Bruno Garza, Jennifer L; Catalano, Paul J; Katz, Jeffrey N; Huysmans, Maaike A; Dennerlein, Jack T

    2012-01-01

    Prediction models were developed based on keyboard and mouse use in combination with individual factors to predict the median upper extremity muscle activities, postures, velocities, and accelerations experienced during computer use. In the laboratory, 25 participants performed five simulated computer trials with different amounts of keyboard and mouse use, ranging from a highly keyboard-intensive trial to a highly mouse-intensive trial. During each trial, muscle activity and postures of the shoulder and wrist and velocities and accelerations of the wrists, along with percentage keyboard and mouse use, were measured. Four individual factors (hand length, shoulder width, age, and gender) were also measured on the day of data collection. Percentage keyboard and mouse use explained a large amount of the variability in wrist velocities and accelerations. Although hand length, shoulder width, and age were each significant predictors of at least one median muscle activity, posture, velocity, or acceleration exposure, these individual factors explained very little variability beyond percentage keyboard and mouse use in any of the physical exposures investigated. The amounts of variability explained by models predicting median wrist velocities and accelerations ranged from 75 to 84% but were much lower for median muscle activities and postures (0-50%). RMS errors ranged from 8 to 13% of the observed range. While the predictions for wrist velocities and accelerations may be used to improve exposure assessment for future epidemiologic studies, more research is needed to identify other factors that may improve the predictions for muscle activities and postures.
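
    The published models are regression equations of this general form; a hypothetical sketch (synthetic data and coefficients, not the study's) fits and scores one such model for median wrist velocity from percentage mouse use and hand length:

        import numpy as np

        rng = np.random.default_rng(2)
        pct_mouse = rng.uniform(0, 100, 25)          # % of task time on the mouse
        hand_len = rng.normal(18.5, 1.0, 25)         # hand length, cm
        wrist_vel = 5 + 0.12 * pct_mouse + 0.3 * hand_len + rng.normal(0, 1, 25)

        # Ordinary least squares with an intercept column.
        X = np.column_stack([np.ones(25), pct_mouse, hand_len])
        beta, *_ = np.linalg.lstsq(X, wrist_vel, rcond=None)
        pred = X @ beta
        r2 = 1 - np.sum((wrist_vel - pred) ** 2) / np.sum((wrist_vel - wrist_vel.mean()) ** 2)
        print(f"coefficients {beta}, R^2 = {r2:.2f}")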

  16. Guidelines for strategic planning

    Energy Technology Data Exchange (ETDEWEB)

    1991-07-01

    Strategic planning needs to be done as one of the integral steps in fulfilling our overall Departmental mission. The role of strategic planning is to assure that the longer-term destinations, goals, and objectives which the programs and activities of the Department are striving towards are the best we can envision today, so that our courses can then be set to move in those directions. Strategic planning will assist the Secretary, Deputy Secretary, and Under Secretary in setting the long-term directions and policies for the Department and in making final decisions on near-term priorities and resource allocations. It will assist program developers and implementors by providing the necessary guidance for multi-year program plans and budgets. It is one of the essential steps in the Secretary's Strategic Planning Initiative. The operational planning most of us are so familiar with deals with how to get things done and with the resources needed (people, money, facilities, time) to carry out tasks. Operating plans like budgets, capital line-item projects, R&D budgets, project proposals, etc., are vital to the mission of the Department. They deal, however, with how to carry out programs to achieve some objective or budget assumption. Strategic planning deals with the prior question of what it is that should be attempted. It deals with what objectives the many programs and activities of the Department should be striving toward. The purpose of this document is to provide guidance to those organizations and personnel starting the process for the first time as well as those who have prepared strategic plans in the past and now wish to review and update them. This guideline should not be construed as a rigid, restrictive or confining rulebook. Each organization is encouraged to develop such enhancements as they think may be useful in their planning. The steps outlined in this document represent a very simplified approach to strategic planning. 9 refs.

  17. RECIRCULATING ACCELERATION

    Energy Technology Data Exchange (ETDEWEB)

    BERG,J.S.; GARREN,A.A.; JOHNSTONE,C.

    2000-04-07

    This paper compares various types of recirculating accelerators, outlining the advantages and disadvantages of various approaches. The accelerators are characterized according to the types of arcs they use: whether there is a single arc for the entire recirculator or there are multiple arcs, and whether the arc(s) are isochronous or non-isochronous.

  18. LIBO accelerates

    CERN Multimedia

    2002-01-01

    The prototype module of LIBO, a linear accelerator project designed for cancer therapy, has passed its first proton-beam acceleration test. In parallel a new version - LIBO-30 - is being developed, which promises to open up even more interesting avenues.

  19. Accelerating Inspire

    CERN Document Server

    AUTHOR|(CDS)2266999

    2017-01-01

    CERN has been involved in the dissemination of scientific results since its early days and has continuously updated the distribution channels. Currently, Inspire hosts catalogues of articles, authors, institutions, conferences, jobs, experiments, journals and more. Successful orientation among this amount of data requires comprehensive linking between the content. Inspire has lacked a system for linking experiments and articles together based on which accelerator they were conducted at. The purpose of this project has been to create such a system. Records for 156 accelerators were created and all 2913 experiments on Inspire were given corresponding MARC tags. Records of 18404 accelerator physics related bibliographic entries were also tagged with corresponding accelerator tags. Finally, as a part of the endeavour to broaden CERN's presence on Wikipedia, existing Wikipedia articles of accelerators were updated with short descriptions and links to Inspire. In total, 86 Wikipedia articles were updated. This repo...

  20. COMPUTING

    CERN Multimedia

    I. Fisk

    2013-01-01

    Computing activity had ramped down after the completion of the reprocessing of the 2012 data and parked data, but is increasing with new simulation samples for analysis and upgrade studies. Much of the Computing effort is currently involved in activities to improve the computing system in preparation for 2015. Operations Office Since the beginning of 2013, the Computing Operations team successfully re-processed the 2012 data in record time, making use of opportunistic resources like the San Diego Supercomputer Center to re-process the primary datasets HTMHT and MultiJet in Run2012D much earlier than planned. The Heavy-Ion data-taking period was successfully concluded in February, collecting almost 500 TB. Figure 3: Number of events per month (data) In LS1, our emphasis is to increase the efficiency and flexibility of the infrastructure and operation. Computing Operations is working on separating disk and tape at the Tier-1 sites and the full implementation of the xrootd federation ...

  1. Computational investigation of 99Mo, 89Sr, and 131I production rates in a subcritical UO2(NO3)2 aqueous solution reactor driven by a 30-MeV proton accelerator

    Directory of Open Access Journals (Sweden)

    Z. Gholamzadeh

    2015-12-01

    The use of subcritical aqueous homogeneous reactors driven by accelerators presents an attractive alternative for producing 99Mo. In this method, the medical isotope production system itself is used to extract 99Mo or other radioisotopes, so that there is no need to irradiate common targets. In addition, it can operate at much lower power compared to a traditional reactor producing the same amount of 99Mo by irradiating targets. In this study, the neutronic performance and the 99Mo, 89Sr, and 131I production capacity of a subcritical aqueous homogeneous reactor fueled with low-enriched uranyl nitrate were evaluated using the MCNPX code. A proton accelerator with a maximum 30-MeV accelerating power was used to drive the subcritical core. The computational results indicate a good potential for the modeled system to produce the radioisotopes under completely safe conditions because of the high negative reactivity coefficients of the modeled core. The results show that application of an optimized beam window material can increase the fission power of the aqueous nitrate fuel by up to 80%. This accelerator-based procedure using low-enriched uranium nitrate fuel to produce radioisotopes presents a potentially competitive alternative to reactor-based or other accelerator-based methods. The system produces ∼1,500 Ci/wk (∼325 6-day Ci) of 99Mo at the end of a cycle.
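
    For context on the quoted end-of-cycle figures (a generic production-decay relation, not the paper's MCNPX model): under a constant production rate, dN/dt = P - lambda*N, so the 99Mo activity approaches saturation as A(t) = A_sat(1 - e^(-lambda*t)). A quick check with an assumed saturation activity:

        import numpy as np

        half_life = 66.0                      # h, 99Mo
        lam = np.log(2) / half_life
        A_sat = 1800.0                        # Ci, assumed saturation activity
        t = 7 * 24.0                          # one week of irradiation, h

        A_end = A_sat * (1 - np.exp(-lam * t))
        print(f"end-of-week activity: {A_end:.0f} Ci "
              f"({100 * A_end / A_sat:.0f}% of saturation)")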

  2. Computer

    CERN Document Server

    Atkinson, Paul

    2011-01-01

    The pixelated rectangle we spend most of our day staring at in silence is not the television as many long feared, but the computer-the ubiquitous portal of work and personal lives. At this point, the computer is almost so common we don't notice it in our view. It's difficult to envision that not that long ago it was a gigantic, room-sized structure only to be accessed by a few inspiring as much awe and respect as fear and mystery. Now that the machine has decreased in size and increased in popular use, the computer has become a prosaic appliance, little-more noted than a toaster. These dramati

  3. FY16 Strategic Themes.

    Energy Technology Data Exchange (ETDEWEB)

    Leland, Robert W. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-03-01

    I am pleased to present this summary of the Division 1000 Science and Technology Strategic Plan. This plan was created with considerable participation from all levels of management in Division 1000, and is intended to chart our course as we strive to contribute our very best in service of the greater Laboratory strategy. The plan is characterized by four strategic themes: Mission Engagement, Bold Outcomes, Collaborative Environment, and the Safety Imperative. Each theme is accompanied by a brief vision statement, several goals, and planned actions to support those goals throughout FY16. I want to be clear that this is not a strategy to be pursued in tension with the Laboratory strategic plan. Rather, it is intended to describe “how” we intend to show up for the “what” described in Sandia’s Strategic Plan. I welcome your feedback and look forward to our dialogue about these strategic themes. Please join me as we move forward to implement the plan in the coming year.

  4. COMPUTING

    CERN Multimedia

    I. Fisk

    2010-01-01

    Introduction It has been a very active quarter in Computing, with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping up, and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs by users each day, which was already hitting the computing model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion An activity that is still in progress is computing for the heavy-ion program. The heavy-ion events are collected without zero suppression, so the event size is much larger, at roughly 11 MB per event of RAW. The central collisions are more complex and...

  5. COMPUTING

    CERN Multimedia

    M. Kasemann, P. McBride; edited by M-C. Sawley, with contributions from P. Kreuzer, D. Bonacorsi, S. Belforte, F. Wuerthwein, L. Bauerdick, K. Lassila-Perini and M-C. Sawley

    Introduction More than seventy CMS collaborators attended the Computing and Offline Workshop in San Diego, California, April 20-24th to discuss the state of readiness of software and computing for collisions. Focus and priority were given to preparations for data taking and providing room for ample dialog between groups involved in Commissioning, Data Operations, Analysis and MC Production. Throughout the workshop, aspects of software, operating procedures and issues addressing all parts of the computing model were discussed. Plans for the CMS participation in STEP’09, the combined scale testing for all four experiments due in June 2009, were refined. The article in CMS Times by Frank Wuerthwein gave a good recap of the highly collaborative atmosphere of the workshop. Many thanks to UCSD and to the organizers for taking care of this workshop, which resulted in a long list of action items and was definitely a success. A considerable amount of effort and care is invested in the estimate of the comput...

  6. Thinking strategically about capitation.

    Science.gov (United States)

    Boland, P

    1997-05-01

    All managed care stakeholders--health plan members, employers, providers, community organizations, and government entities--share a common interest in reducing healthcare costs while improving the quality of care health plan members receive. Although capitation is usually thought of primarily as a payment mechanism, it can be a powerful tool providers and health plans can use to accomplish these strategic objectives and others, such as restoring and maintaining the health of plan members or improving a community's health status. For capitation to work effectively as a strategic tool, its use must be tied to a corporate agenda of partnering with stakeholders to achieve broader strategic goals. Health plans and providers must develop a partnership strategy in which each stakeholder has well-defined roles and responsibilities. The capitation structure must reinforce interdependence, shift focus from meeting organizational needs to meeting customer needs, and develop risk-driven care strategies.

  7. Strategic CSR in Afghanistan

    DEFF Research Database (Denmark)

    Azizi, Sameer

    CSR is a rising phenomenon in Afghanistan – but why are firms concerned about CSR in a least-developed context such as Afghanistan, and what are the strategic benefits? This paper is one of the first to explore these CSR issues in a least-developed country. It does so by focusing on CSR...... in the Afghan telecommunication sector and in particular on ‘Roshan’ as a case company. The findings of this paper are twofold. First, it provides an overview of the CSR practices in the telecommunication sector in Afghanistan. Second, it focuses on one case and explains whether Roshan can gain strategic...... advantages through CSR in Afghanistan, and if so which, and how these strategic benefits are gained. The paper shows that the developmental challenges of Afghanistan are the key explanation for why companies engage in CSR. Roshan has engaged in proactive CSR to overcome the contextual barriers for growth...

  8. Tourism and Strategic Planning

    DEFF Research Database (Denmark)

    Pasgaard, Jens Christian

    2012-01-01

    the potential of ‘the extraordinary’ tourism-dominated space. As highlighted in the introduction, this report does not present any systematic analysis of strategic planning processes; neither does it provide any unequivocal conclusions. Rather, the report presents a collection of so-called ‘detours......The main purpose of this report is to explore and unfold the complexity of the tourism phenomenon in order to qualify the general discussion of tourism-related planning challenges. Throughout the report I aim to demonstrate the strategic potential of tourism in a wider sense and more specifically......’ – a collection of theoretical discussions and case studies with the aim to inspire future strategic planning. Due to the complexity and heterogeneity of the phenomenon I use a non-linear and non-chronological report format with the ambition to create a new type of overview. In this regard the report is intended...

  9. Strategic Self-Ignorance

    DEFF Research Database (Denmark)

    Thunström, Linda; Nordström, Leif Jonas; Shogren, Jason F.

    We examine strategic self-ignorance—the use of ignorance as an excuse to overindulge in pleasurable activities that may be harmful to one’s future self. Our model shows that guilt aversion provides a behavioral rationale for present-biased agents to avoid information about negative future impacts...... of such activities. We then confront our model with data from an experiment using prepared, restaurant-style meals — a good that is transparent in immediate pleasure (taste) but non-transparent in future harm (calories). Our results support the notion that strategic self-ignorance matters: nearly three of five...... subjects (58 percent) chose to ignore free information on calorie content, leading at-risk subjects to consume significantly more calories. We also find evidence consistent with our model on the determinants of strategic self-ignorance....

  10. Strategic self-ignorance

    DEFF Research Database (Denmark)

    Thunström, Linda; Nordström, Leif Jonas; Shogren, Jason F.

    2016-01-01

    We examine strategic self-ignorance—the use of ignorance as an excuse to over-indulge in pleasurable activities that may be harmful to one’s future self. Our model shows that guilt aversion provides a behavioral rationale for present-biased agents to avoid information about negative future impacts...... of such activities. We then confront our model with data from an experiment using prepared, restaurant-style meals—a good that is transparent in immediate pleasure (taste) but non-transparent in future harm (calories). Our results support the notion that strategic self-ignorance matters: nearly three of five...... subjects (58%) chose to ignore free information on calorie content, leading at-risk subjects to consume significantly more calories. We also find evidence consistent with our model on the determinants of strategic self-ignorance....

  11. Accelerating QDP++ using GPUs

    CERN Document Server

    Winter, Frank

    2011-01-01

    Graphics Processing Units (GPUs) are becoming increasingly important as target architectures in scientific High Performance Computing (HPC). NVIDIA established CUDA as a parallel computing architecture for controlling and exploiting the compute power of GPUs. CUDA provides sufficient support for C++ language elements to enable the Expression Template (ET) technique in the device memory domain. QDP++ is a C++ vector class library suited for quantum field theory which provides vector data types and expressions and forms the basis of the lattice QCD software suite Chroma. In this work, offloading QDP++ expression evaluation to a GPU was successfully implemented, leveraging the ET technique and Just-In-Time (JIT) compilation. The Portable Expression Template Engine (PETE) and the C API for CUDA kernel arguments were used to build the bridge between host and device memory domains. This makes it possible to accelerate on a GPU those Chroma routines that are typically not subject to special optimisation. As an ...
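
    As a rough illustration of the expression-template idea the abstract builds on (a deliberately simplified host-side sketch with hypothetical types, not QDP++'s or PETE's actual API): operator overloading assembles the right-hand side of a vector expression into a compile-time tree, which is then evaluated element by element in a single fused loop at assignment. A JIT backend such as the one described would instead walk this same tree and emit one CUDA kernel per expression shape.

        #include <cstddef>
        #include <iostream>
        #include <vector>

        // Leaf node: a plain vector of doubles (a hypothetical stand-in
        // for a QDP++ lattice field).
        struct Vec {
            std::vector<double> data;
            explicit Vec(std::size_t n, double v = 0.0) : data(n, v) {}
            double operator[](std::size_t i) const { return data[i]; }
            std::size_t size() const { return data.size(); }

            // Assigning any expression node evaluates it element-wise
            // in one fused loop -- no intermediate temporaries.
            template <typename Expr>
            Vec& operator=(const Expr& e) {
                for (std::size_t i = 0; i < data.size(); ++i) data[i] = e[i];
                return *this;
            }
        };

        // Interior node: element-wise sum of two sub-expressions.
        template <typename L, typename R>
        struct Add {
            const L& l;
            const R& r;
            double operator[](std::size_t i) const { return l[i] + r[i]; }
        };

        template <typename L, typename R>
        Add<L, R> operator+(const L& l, const R& r) { return {l, r}; }

        int main() {
            Vec a(4, 1.0), b(4, 2.0), c(4, 3.0), out(4);
            // a + b + c builds the type Add<Add<Vec, Vec>, Vec> at compile
            // time; the whole expression is evaluated in a single loop.
            out = a + b + c;
            for (std::size_t i = 0; i < out.size(); ++i) std::cout << out[i] << ' ';
            std::cout << '\n';  // prints: 6 6 6 6
            return 0;
        }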

  12. Mapping strategic diversity: strategic thinking from a variety of perspectives

    NARCIS (Netherlands)

    Jacobs, D.

    2010-01-01

    In his influential work, Strategy Safari, Henry Mintzberg and his colleagues presented ten schools of strategic thought. In this impressive book, Dany Jacobs demonstrates that the real world of strategic management is much wider and richer. In Mapping Strategic Diversity, Jacobs distinguishes betwee

  13. Mapping strategic diversity: strategic thinking from a variety of perspectives

    NARCIS (Netherlands)

    Jacobs, D.

    2010-01-01

    In his influential work, Strategy Safari, Henry Mintzberg and his colleagues presented ten schools of strategic thought. In this impressive book, Dany Jacobs demonstrates that the real world of strategic management is much wider and richer. In Mapping Strategic Diversity, Jacobs distinguishes

  14. The Strategic Attitude: Integrating Strategic Planning into Daily University Worklife

    Science.gov (United States)

    Dickmeyer, Nathan

    2004-01-01

    Chief financial officers in today's universities are so busy with the challenges of day-to-day management that strategic thinking often takes a back seat. Planning for strategic change can go a long way toward streamlining the very daily tasks that obscure the "big picture." Learning how to integrate strategic thinking into day-to-day management…

  15. Strategic environmental assessment

    DEFF Research Database (Denmark)

    Kørnøv, Lone

    1997-01-01

    The integration of environmental considerations into strategic decision making is recognized as a key to achieving sustainability. In the European Union a draft directive on Strategic Environmental Assessment (SEA) is currently being reviewed by the member states. The nature of the proposed SEA...... that the SEA directive will influence the decision-making process positively and will help to promote improved environmental decisions. However, the guidelines for public participation are not sufficient and the democratic element is strongly limited. On the basis of these findings, recommendations relating...

  16. Vacuum Brazing of Accelerator Components

    Science.gov (United States)

    Singh, Rajvir; Pant, K. K.; Lal, Shankar; Yadav, D. P.; Garg, S. R.; Raghuvanshi, V. K.; Mundra, G.

    2012-11-01

    Commonly used materials for accelerator components are those which are vacuum compatible and thermally conductive. Stainless steel, aluminum and copper are common among them. Stainless steel is a poor heat conductor and is rarely used where good thermal conductivity is required. Aluminum and copper and their alloys meet the above requirements and are frequently used for this purpose. Fabricating accelerator components from aluminum and its alloys by welding has become common practice nowadays. The use of copper and its various grades is mandatory in the RF devices required for accelerators. Beam-line and front-end components of accelerators are fabricated from stainless steel and OFHC copper. Fabricating copper components by welding is very difficult, and in most cases impossible; fabrication and joining in such cases is possible using brazing, especially under vacuum or an inert-gas atmosphere. Several accelerator components have been vacuum brazed for the Indus projects at the Raja Ramanna Centre for Advanced Technology (RRCAT), Indore, using the vacuum brazing facility available there. This paper presents details of the development of these high-value, strategic components and assemblies, including the basics of vacuum brazing, details of the vacuum brazing facility, joint design, fixturing of the jobs, selection of filler alloys, optimization of brazing parameters to obtain high-quality brazed joints, and brief descriptions of vacuum-brazed accelerator components.

  17. Macrofoundation for Strategic Technology Management

    DEFF Research Database (Denmark)

    Pedersen, Jørgen Lindgaard

    1995-01-01

    Neoclassical mainstream economics has no perspective on strategic technology management issues. Market failure economics (externalities etc.)can be of some use to analyze problems of relevance in strategic management problems with technology as a part. Environment, inequality and democratic...

  18. Macrofoundation for Strategic Technology Management

    DEFF Research Database (Denmark)

    Pedersen, Jørgen Lindgaard

    1995-01-01

    Neoclassical mainstream economics has no perspective on strategic technology management issues. Market failure economics (externalities etc.)can be of some use to analyze problems of relevance in strategic management problems with technology as a part. Environment, inequality and democratic...

  19. Horizontal Accelerator

    Data.gov (United States)

    Federal Laboratory Consortium — The Horizontal Accelerator (HA) Facility is a versatile research tool available for use on projects requiring simulation of the crash environment. The HA Facility is...

  20. COMPUTING

    CERN Multimedia

    I. Fisk

    2010-01-01

    Introduction The first data taking period of November produced a first scientific paper, and this is a very satisfactory step for Computing. It also gave the invaluable opportunity to learn and debrief from this first, intense period, and make the necessary adaptations. The alarm procedures between different groups (DAQ, Physics, T0 processing, Alignment/calibration, T1 and T2 communications) have been reinforced. A major effort has also been invested into remodeling and optimizing operator tasks in all activities in Computing, in parallel with the recruitment of new Cat A operators. The teams are being completed and by mid year the new tasks will have been assigned. CRB (Computing Resource Board) The Board met twice since last CMS week. In December it reviewed the experience of the November data-taking period and could measure the positive improvements made for the site readiness. It also reviewed the policy under which Tier-2 are associated with Physics Groups. Such associations are decided twice per ye...

  1. COMPUTING

    CERN Multimedia

    P. McBride

    It has been a very active year for the computing project with strong contributions from members of the global community. The project has focused on site preparation and Monte Carlo production. The operations group has begun processing data from P5 as part of the global data commissioning. Improvements in transfer rates and site availability have been seen as computing sites across the globe prepare for large scale production and analysis as part of CSA07. Preparations for the upcoming Computing Software and Analysis Challenge CSA07 are progressing. Ian Fisk and Neil Geddes have been appointed as coordinators for the challenge. CSA07 will include production tests of the Tier-0 production system, reprocessing at the Tier-1 sites and Monte Carlo production at the Tier-2 sites. At the same time there will be a large analysis exercise at the Tier-2 centres. Pre-production simulation of the Monte Carlo events for the challenge is beginning. Scale tests of the Tier-0 will begin in mid-July and the challenge it...

  2. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction During the past six months, Computing participated in the STEP09 exercise, had a major involvement in the October exercise and has been working with CMS sites on improving open issues relevant for data taking. At the same time operations for MC production, real data reconstruction and re-reconstructions and data transfers at large scales were performed. STEP09 was successfully conducted in June as a joint exercise with ATLAS and the other experiments. It gave good indication about the readiness of the WLCG infrastructure with the two major LHC experiments stressing the reading, writing and processing of physics data. The October Exercise, in contrast, was conducted as an all-CMS exercise, where Physics, Computing and Offline worked on a common plan to exercise all steps to efficiently access and analyze data. As one of the major results, the CMS Tier-2s demonstrated to be fully capable for performing data analysis. In recent weeks, efforts were devoted to CMS Computing readiness. All th...

  3. COMPUTING

    CERN Multimedia

    M. Kasemann

    CCRC’08 challenges and CSA08 During the February campaign of the Common Computing readiness challenges (CCRC’08), the CMS computing team had achieved very good results. The link between the detector site and the Tier0 was tested by gradually increasing the number of parallel transfer streams well beyond the target. Tests covered the global robustness at the Tier0, processing a massive number of very large files and with a high writing speed to tapes.  Other tests covered the links between the different Tiers of the distributed infrastructure and the pre-staging and reprocessing capacity of the Tier1’s: response time, data transfer rate and success rate for Tape to Buffer staging of files kept exclusively on Tape were measured. In all cases, coordination with the sites was efficient and no serious problem was found. These successful preparations prepared the ground for the second phase of the CCRC’08 campaign, in May. The Computing Software and Analysis challen...

  4. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction It has been a very active quarter in Computing, with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping up, and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs by users each day, which was already hitting the computing model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion The Tier 0 infrastructure was able to repack and promptly reconstruct heavy-ion collision data. Two copies were made of the data at CERN using a large CASTOR disk pool, and the core physics sample was replicated ...

  5. COMPUTING

    CERN Multimedia

    I. Fisk

    2012-01-01

    Introduction Computing continued with a high level of activity over the winter in preparation for conferences and the start of the 2012 run. 2012 brings new challenges with a new energy, more complex events, and the need to make the best use of the available time before the Long Shutdown. We expect to be resource constrained on all tiers of the computing system in 2012 and are working to ensure the high-priority goals of CMS are not impacted. Heavy ions After a successful 2011 heavy-ion run, the programme is moving to analysis. During the run, the CAF resources were well used for prompt analysis. Since then in 2012 on average 200 job slots have been used continuously at Vanderbilt for analysis workflows. Operations Office As of 2012, the Computing Project emphasis has moved from commissioning to operation of the various systems. This is reflected in the new organisation structure where the Facilities and Data Operations tasks have been merged into a common Operations Office, which now covers everything ...

  6. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction More than seventy CMS collaborators attended the Computing and Offline Workshop in San Diego, California, April 20-24th to discuss the state of readiness of software and computing for collisions. Focus and priority were given to preparations for data taking and providing room for ample dialog between groups involved in Commissioning, Data Operations, Analysis and MC Production. Throughout the workshop, aspects of software, operating procedures and issues addressing all parts of the computing model were discussed. Plans for the CMS participation in STEP’09, the combined scale testing for all four experiments due in June 2009, were refined. The article in CMS Times by Frank Wuerthwein gave a good recap of the highly collaborative atmosphere of the workshop. Many thanks to UCSD and to the organizers for taking care of this workshop, which resulted in a long list of action items and was definitely a success. A considerable amount of effort and care is invested in the estimate of the co...

  7. Future accelerators

    CERN Document Server

    Hübner, K

    1999-01-01

    An overview of the various schemes for electron-positron linear colliders is given, together with the status of development of key components and of the various test facilities. The present studies of muon-muon colliders and very large hadron colliders are summarized, including the plans for component development and tests. Accelerator research and development aimed at achieving the highest gradients in linear accelerators is outlined. (44 refs).

  8. A Handbook for Strategic Planning

    Science.gov (United States)

    1994-01-01

    This handbook was written for Department of the Navy (DON) commanding officers, TQL coordinators, and strategic planning facilitators in response to questions about the strategic planning process and how it should be conducted within the DON. It is not intended to teach the intricacies of strategic planning, but is provided to answer process questions. While every question cannot be anticipated, the handbook details one way to do strategic...

  9. Strategic Alignment of Business Intelligence

    OpenAIRE

    Cederberg, Niclas

    2010-01-01

    This thesis is about the concept of strategic alignment of business intelligence. It is based on a theoretical foundation that is used to define and explain business intelligence, data warehousing and strategic alignment. By combining a number of different methods for strategic alignment a framework for alignment of business intelligence is suggested. This framework addresses all different aspects of business intelligence identified as relevant for strategic alignment of business intelligence...

  10. Strategic Planning and Financial Management

    Science.gov (United States)

    Conneely, James F.

    2010-01-01

    Strong financial management is a strategy for strategic planning success in student affairs. It is crucial that student affairs professionals understand the necessity of linking their strategic planning with their financial management processes. An effective strategic planner needs strong financial management skills to implement the plan over…

  11. The Possibilities of Strategic Finance

    Science.gov (United States)

    Chaffee, Ellen

    2010-01-01

    Strategic finance is aligning financial decisions--regarding revenues, creating and maintaining institutional assets, and using those assets--with the institution's mission and strategic plan. The concept known as "strategic finance" increasingly is being seen as a useful perspective for helping boards and presidents develop a sustainable…

  12. Being Strategic in HE Management

    Science.gov (United States)

    West, Andrew

    2008-01-01

    The call to be strategic--and with it the concept of strategic management--can bring to mind a wide range of definitions, and there is now a huge array of academic literature supporting the different schools of thought. At a basic level, however, strategic thinking is probably most simply about focusing on the whole, rather than the part. In…

  13. The Possibilities of Strategic Finance

    Science.gov (United States)

    Chaffee, Ellen

    2010-01-01

    Strategic finance is aligning financial decisions--regarding revenues, creating and maintaining institutional assets, and using those assets--with the institution's mission and strategic plan. The concept known as "strategic finance" increasingly is being seen as a useful perspective for helping boards and presidents develop a sustainable…

  14. Being Strategic in HE Management

    Science.gov (United States)

    West, Andrew

    2008-01-01

    The call to be strategic--and with it the concept of strategic management--can bring to mind a wide range of definitions, and there is now a huge array of academic literature supporting the different schools of thought. At a basic level, however, strategic thinking is probably most simply about focusing on the whole, rather than the part. In…

  15. Strategic market planning for hospitals.

    Science.gov (United States)

    Zallocco, R L; Joseph, W B; Doremus, H

    1984-01-01

    The application of strategic market planning to hospital management is discussed, along with features of the strategic marketing management process. A portfolio analysis tool, the McKinsey/G.E. Business Screen, is presented and, using a large urban hospital as an example, discussed in detail relative to hospital administration. Finally, strategic implications of the portfolio analysis are examined.

  16. Strategic Management of Large Projects

    Institute of Scientific and Technical Information of China (English)

    Wang Yingluo; Liu Yi; Li Yuan

    2004-01-01

    The strategic management of large projects is both theoretically and practically important. Some scholars have advanced flexible strategy theory in China. The difference between strategic flexibility and flexible strategy is pointed out. The supporting system and characteristics of flexible strategy are analyzed. The changes of flexible strategy and the integration of strategic management are discussed.

  17. Strategic Planning and Financial Management

    Science.gov (United States)

    Conneely, James F.

    2010-01-01

    Strong financial management is a strategy for strategic planning success in student affairs. It is crucial that student affairs professionals understand the necessity of linking their strategic planning with their financial management processes. An effective strategic planner needs strong financial management skills to implement the plan over…

  18. Strategic Targeted Advertising

    NARCIS (Netherlands)

    A. Galeotti; J.L. Moraga-Gonzalez (José Luis)

    2003-01-01

    We present a strategic game of pricing and targeted advertising. Firms can simultaneously target price advertisements to different groups of customers, or to the entire market. Pure-strategy equilibria do not exist and thus market segmentation cannot occur surely. Equilibria exhibit rand...

  19. Strategic Information Systems Planning.

    Science.gov (United States)

    Rowley, Jennifer

    1995-01-01

    Strategic Information Systems Planning (SISP) is the process of establishing a program for implementation and use of information systems in ways that will optimize effectiveness of information resources and use them to support the objectives of the organization. Basic steps in SISP methodology are outlined. (JKP)

  20. The strategic research positioning:

    DEFF Research Database (Denmark)

    Viala, Eva Silberschmidt

    to provide new insights into ‘immigrant’ parents’ perspective on home/school partnership in Denmark. The majority of the immigrant parents came from non-Western countries, and they had already been ‘labelled’ difficult in terms of home/school partnership. This calls for what I call ‘strategic research...

  1. The Strategic Revolution.

    Science.gov (United States)

    Gardner, Andy

    2016-09-08

    On the 40th anniversary of the publication of Richard Dawkins's The Selfish Gene, we explore the origins of cynical, strategic thinking in evolutionary biology, investigate how this illuminated the sexual and social lives of animals, and assess Dawkins's suggestion that evolution is best understood by taking the gene's-eye view. Copyright © 2016 Elsevier Inc. All rights reserved.

  2. Towards Strategic Language Learning

    NARCIS (Netherlands)

    Oostdam, R.; Rijlaarsdam, Gert

    1995-01-01

    Towards Strategic Language Learning is the result of extensive research in the relationship between mother tongue education and foreign language learning. As language skills that are taught during native language lessons are applied in foreign language performance as well, it is vital that curricula

  3. Strategic Leadership Development Model

    Science.gov (United States)

    2012-03-19

    The system in vogue is relatively streamlined and ensures better grooming of potential strategic leaders at varying stages of their career; however, it...

  4. Strategic Tutor Monitoring.

    Science.gov (United States)

    Chee-kwong, Kenneth Chao

    1996-01-01

    Discusses effective tutor monitoring strategies based on experiences at the Open Learning Institute of Hong Kong. Highlights include key performance and strategic control points; situational factors, including tutor expectations and relevant culture; Theory X versus Theory Y leadership theories; and monitoring relationships with tutors. (LRW)

  5. Strategic Targeted Advertising

    NARCIS (Netherlands)

    A. Galeotti; J.L. Moraga-Gonzalez (José Luis)

    2003-01-01

    We present a strategic game of pricing and targeted advertising. Firms can simultaneously target price advertisements to different groups of customers, or to the entire market. Pure-strategy equilibria do not exist and thus market segmentation cannot occur surely. Equilibria exhibit rand...

  6. What is strategic management?

    Science.gov (United States)

    Jasper, Melanie; Crossan, Frank

    2012-10-01

    To discuss the theoretical concept of strategic management and explore its relevance for healthcare organisations and nursing management. Although strategic management is a relatively new approach, its growth within organisations has been consistently and increasingly promoted. However, comprehensive definitions are scarce and commonalities of interpretation are limited. This paper presents an exploratory discussion of the construct of strategic management, drawing on the literature and questioning its relevance within health-care organisations. Literature relating to strategic management across a number of fields was accessed, drawing primarily on meta-studies within the management literature, to identify key concepts and attempt to present a consistent definition. The concept within health care is explored in relation to nursing management. Inconsistency in definitions and utilisation of key concepts within this management approach results in the term being loosely applied in health-care organisations without recourse to foundational principles and a deep understanding of the approach as a theory, as opposed to an applied term. Nurse managers are increasingly asked to adopt the 'next best thing' in managerial theories, yet caution is needed before nurses agree to use systems that lack an evidence base in terms of both efficacy and relevance of context. © 2012 Blackwell Publishing Ltd.

  7. Strategic Tutor Monitoring.

    Science.gov (United States)

    Chee-kwong, Kenneth Chao

    1996-01-01

    Discusses effective tutor monitoring strategies based on experiences at the Open Learning Institute of Hong Kong. Highlights include key performance and strategic control points; situational factors, including tutor expectations and relevant culture; Theory X versus Theory Y leadership theories; and monitoring relationships with tutors. (LRW)

  8. Adaptive Airport Strategic Planning

    NARCIS (Netherlands)

    Kwakkel, J.H.; Walker, W.E.; Marchau, V.A.W.J.

    2010-01-01

    Airport Strategic Planning (ASP) focuses on the development of plans for the long-term development of an airport. The dominant approach for ASP is Airport Master Planning (AMP). The goal of AMP is to provide a detailed blueprint for how the airport should look in the future, and how it can get there

  9. The Strategic Resources

    Institute of Scientific and Technical Information of China (English)

    Liu Zhiyang

    2011-01-01

    “The reason I pay close attention to and am very concerned about standards is that, from my point of view, standards are very important resources or even strategic resources. The meteorological work is highly professional and requires standards in every aspect. With disjoint standards, businesses, services and scientific researches cannot be properly done.”

  10. Towards Strategic Language Learning

    NARCIS (Netherlands)

    Oostdam, R.; Rijlaarsdam, Gert

    1995-01-01

    Towards Strategic Language Learning is the result of extensive research in the relationship between mother tongue education and foreign language learning. As language skills that are taught during native language lessons are applied in foreign language performance as well, it is vital that curricula

  11. Strategic Sample Selection

    DEFF Research Database (Denmark)

    Di Tillio, Alfredo; Ottaviani, Marco; Sørensen, Peter Norman

    2017-01-01

    is double logconvex, as with normal noise. The results are applied to the analysis of strategic sample selection by a biased researcher and extended to the case of uncertain and unanticipated selection. Our theoretical analysis offers applied research a new angle on the problem of selection in empirical...

  12. Trust in Strategic Alliances

    DEFF Research Database (Denmark)

    Nielsen, Bo

    2011-01-01

    This article examines the dynamic and multi-dimensional nature of trust in strategic alliances. Adopting a co-evolutionary approach, I developed a framework to show how trust, conceptualised in different forms, plays distinct roles at various evolutionary stages of the alliance relationship...

  13. Grappling with Strategic Dissonance.

    Science.gov (United States)

    Dowie, Sandra

    2002-01-01

    Presents a case study of the Virtual Retina project (an instructional CD-ROM for ophthalmology students) at the University of Alberta as an example of strategic dissonance in an educational technology unit. Offers methods to analyze the external competitive environment and internal capabilities of educational technology units. (EV)

  14. A Strategic Planning Workbook.

    Science.gov (United States)

    Austin, William

    This workbook outlines the Salem Community College's (New Jersey) Strategic Planning Initiative (SPI), which will enable the college to enter the 21st Century as an active agent in the educational advancement of the Salem community. SPI will allow college faculty, staff, students, and the local community to reflect on the vitality of the college…

  15. Strategic planning for marketers.

    Science.gov (United States)

    Wilson, I

    1978-12-01

    The merits of strategic planning as a marketing tool are discussed in this article, which takes the view that although marketers claim to be future-oriented, they focus too little attention on long-term planning and forecasting. Strategic planning, as defined by these authors, usually encompasses periods of between five and twenty-five years and places less emphasis on the past as an absolute predictor of the future. It takes a more probabilistic view of the future than conventional marketing strategy and looks at the corporation as but one component interacting with the total environment. Inputs are examined in terms of environmental, social, political, technological and economic importance. Because of its futuristic orientation, an important tenet of strategic planning is the preparation of several alternative scenarios ranging from most to least likely. By planning for a wide range of future market conditions, a corporation is more able to be flexible by anticipating the course of future events, and is less likely to become a captive reactor--as the authors believe is now the case. An example of strategic planning at General Electric is cited.

  16. Strategic Marketing for Agribusiness.

    Science.gov (United States)

    Welch, Mary A., Ed.

    1993-01-01

    The steps for strategic market planning are discussed including: (1) assessing the situation with market conditions, customers, competitors, and your firm; and (2) crafting a strategy to prioritize target markets, develop a core strategy, and create a marketing mix. Examples of agribusiness successes are presented. The booklet concludes with a…

  17. EMSL Strategic Plan 2008

    Energy Technology Data Exchange (ETDEWEB)

    Campbell, Allison A.

    2008-08-15

    This Strategic Plan is EMSL’s template for achieving our vision of simultaneous excellence in all aspects of our mission as a national scientific user facility. It reflects our understanding of the long-term stewardship we must work toward to meet the scientific challenges of the Department of Energy and the nation. During the next decade, we will implement the strategies contained in this Plan, working closely with the scientific community, our advisory committees, DOE’s Office of Biological and Environmental Research, and other key stakeholders. This Strategic Plan is fully aligned with the strategic plans of DOE and its Office of Science. We recognize that shifts in science and technology, national priorities, and resources made available through the Federal budget process create planning uncertainties and, ultimately, a highly dynamic planning environment. Accordingly, this Strategic Plan should be viewed as a living document for which we will continually evaluate changing needs and opportunities posed by our stakeholders (i.e., DOE, users, staff, advisory committees), work closely with them to understand and respond to those changes, and align our strategy accordingly.

  18. TACITUS: Text Understanding for Strategic Computing

    Science.gov (United States)

    1990-11-01

    ... problems of suprasegmental phonology will be left for another paper. (3. Backwards Rules) I shall start by making explicit what it means to apply a... suprasegmental issues like stress. The goal of this paper is to contrast two different ways of doing segmental phonology. Both would presumably benefit...

  19. COMPUTING

    CERN Multimedia

    Matthias Kasemann

    Overview The main focus during the summer was to handle data coming from the detector and to perform Monte Carlo production. The lessons learned during the CCRC and CSA08 challenges in May were addressed by dedicated PADA campaigns led by the Integration team. Big improvements were achieved in the stability and reliability of the CMS Tier1 and Tier2 centres by regular and systematic follow-up of faults and errors with the help of the Savannah bug tracking system. In preparation for data taking, the roles of a Computing Run Coordinator and regular computing shifts monitoring the services and infrastructure, as well as interfacing to the data operations tasks, are being defined. The shift plan until the end of 2008 is being put together. User support worked on documentation and organized several training sessions. The ECoM task force delivered the report on “Use Cases for Start-up of pp Data-Taking” with recommendations and a set of tests to be performed for trigger rates much higher than the ...

  20. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction A large fraction of the effort during the last period was focused on the preparation and monitoring of the February tests of the Common VO Computing Readiness Challenge 08. CCRC08 is being run by the WLCG collaboration in two phases, between the centres and all experiments. The February test is dedicated to functionality tests, while the May challenge will consist of running at all centres and with full workflows. For this first period, a number of functionality checks of the computing power, data repositories and archives as well as network links are planned. This will help assess the reliability of the systems under a variety of loads and identify possible bottlenecks. Many tests are scheduled together with other VOs, allowing a full-scale stress test. The data rates (writing, accessing and transferring) are being checked under a variety of loads and operating conditions, as well as the reliability and transfer rates of the links between Tier-0 and Tier-1s. In addition, the capa...

  1. COMPUTING

    CERN Multimedia

    P. MacBride

    The Computing Software and Analysis Challenge CSA07 has been the main focus of the Computing Project for the past few months. Activities began over the summer with the preparation of the Monte Carlo data sets for the challenge and tests of the new production system at the Tier-0 at CERN. The pre-challenge Monte Carlo production was done in several steps: physics generation, detector simulation, digitization, conversion to RAW format, and a pass of the samples through the High Level Trigger (HLT). The data was then merged into three "Soups": Chowder (ALPGEN), Stew (Filtered Pythia) and Gumbo (Pythia). The challenge officially started when the first Chowder events were reconstructed on the Tier-0 on October 3rd. The data operations teams were very busy during the challenge period. The MC production teams continued with signal production and processing while the Tier-0 and Tier-1 teams worked on splitting the Soups into Primary Data Sets (PDS), reconstruction and skimming. The storage sys...

  2. COMPUTING

    CERN Multimedia

    Contributions from I. Fisk

    2012-01-01

    Introduction The start of the 2012 run has been busy for Computing. We have reconstructed, archived, and served a larger sample of new data than in 2011, and we are in the process of producing an even larger new sample of simulations at 8 TeV. The running conditions and system performance are largely what was anticipated in the plan, thanks to the hard work and preparation of many people. Heavy ions Heavy Ions has been actively analysing data and preparing for conferences. Operations Office (Figure 6: Transfers from all sites in the last 90 days.) For ICHEP and the Upgrade efforts, we needed to produce and process record amounts of MC samples while supporting the very successful data-taking. This was a large burden, especially on the team members. Nevertheless the last three months were very successful and the total output was phenomenal, thanks to our dedicated site admins who keep the sites operational and the computing project members who spend countless hours nursing the...

  3. COMPUTING

    CERN Multimedia

    I. Fisk

    2012-01-01

    Introduction Computing activity has been running at a sustained, high rate as we collect data at high luminosity, process simulation, and begin to process the parked data. The system is functional, though a number of improvements are planned during LS1. Many of the changes will impact users, we hope only in positive ways. We are trying to improve the distributed analysis tools as well as the ability to access more data samples more transparently. Operations Office (Figure 2: Number of events per month, for 2012.) Since the June CMS Week, Computing Operations teams successfully completed data re-reconstruction passes and finished the CMSSW_53X MC campaign with over three billion events available in AOD format. Recorded data was successfully processed in parallel, exceeding 1.2 billion raw physics events per month for the first time in October 2012 due to the increase in data-parking rate. In parallel, large efforts were dedicated to WMAgent development and integrati...

  4. COMPUTING

    CERN Document Server

    2010-01-01

    Introduction Just two months after the “LHC First Physics” event of 30th March, the analysis of the O(200) million 7 TeV collision events in CMS accumulated during the first 60 days is well under way. The consistency of the CMS computing model has been confirmed during these first weeks of data taking. This model is based on a hierarchy of use-cases deployed between the different tiers and, in particular, the distribution of RECO data to T1s, who then serve data on request to T2s, along a topology known as “fat tree”. Indeed, during this period this model was further extended by almost full “mesh” commissioning, meaning that RECO data were shipped to T2s whenever possible, enabling additional physics analyses compared with the “fat tree” model. Computing activities at the CMS Analysis Facility (CAF) have been marked by a good time response for a load almost evenly shared between ALCA (Alignment and Calibration tasks - highest p...

  5. COMPUTING

    CERN Multimedia

    I. Fisk

    2013-01-01

    Computing operation has been lower as the Run 1 samples are completing and smaller samples for upgrades and preparations are ramping up. Much of the computing activity is focusing on preparations for Run 2 and improvements in data access and flexibility of using resources. Operations Office Data processing was slow in the second half of 2013, with only the legacy re-reconstruction pass of 2011 data being processed at the sites. (Figure 1: MC production and processing was more in demand, with a peak of over 750 million GEN-SIM events in a single month.) (Figure 2: The transfer system worked reliably and efficiently and transferred on average close to 520 TB per week, with peaks close to 1.2 PB.) (Figure 3: The volume of data moved between CMS sites in the last six months.) The tape utilisation was a focus for the operation teams, with frequent deletion campaigns from deprecated 7 TeV MC GEN-SIM samples to INVALID datasets, which could be cleaned up...
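
    For a sense of scale, the ~520 TB per week quoted above corresponds to a sustained average of a little under 1 GB/s; a back-of-the-envelope conversion (decimal units assumed):

        #include <cstdio>

        int main() {
            // Figure quoted in the report above: ~520 TB transferred per week.
            const double tb_per_week      = 520.0;
            const double seconds_per_week = 7.0 * 24.0 * 3600.0;  // 604800 s

            // 1 TB = 1000 GB (decimal units assumed)
            const double gb_per_s = tb_per_week * 1000.0 / seconds_per_week;
            std::printf("sustained average: %.2f GB/s\n", gb_per_s);  // ~0.86 GB/s
            return 0;
        }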

  6. Strategic Management and Business Analysis

    CERN Document Server

    Williamson, David; Jenkins, Wyn; Moreton, Keith Michael

    2003-01-01

    Strategic Business Analysis shows students how to carry out a strategic analysis of a business, with clear guidelines on where and how to apply the core strategic techniques and models that are the integral tools of strategic management. The authors identify the key questions in strategic analysis and provide an understandable framework for answering these questions. Several case studies are used to focus understanding and enable a more thorough analysis of the concepts and issues, especially useful for students involved with case study analysis. Accompanying the text is a CD-ROM containing the m...

  7. Strategic Planning in U.S. Municipalities

    Directory of Open Access Journals (Sweden)

    James VAN RAVENSWAY

    2015-12-01

    Full Text Available Strategic planning started in the U.S. as a corporate planning endeavor. By the 1960’s, it had become a major corporate management tool in the Fortune 500. At first, it was seen as a way of interweaving policies, values and purposes with management, resources and market information in a way that held the organization together. By the 1950’s, the concept was simplified somewhat to focus on SWOT as a way of keeping the corporation afloat in a more turbulent world. The public sector has been under pressure for a long time to become more efficient, effective and responsive. Many have felt that the adoption of business practices would help to accomplish that. One tool borrowed from business has been strategic planning. At the local government level, strategic planning became popular starting in the 1980’s, and the community’s planning office was called on to lead the endeavor. The planning office was often the advocate of the process. Urban planning offices had been doing long-range plans for decades, but with accelerating urban change a more rapid, action-oriented response was desired. The paper describes this history and process in East Lansing, Michigan, U.S., where comprehensive community plans are the result of a multi-year visioning process and call for action-oriented strategies for targeted parts of the community.

  8. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction The Computing Team successfully completed the storage, initial processing, and distribution for analysis of proton-proton data in 2011. There are still a variety of activities ongoing to support winter conference activities and preparations for 2012. Heavy ions The heavy-ion run for 2011 started in early November and has already demonstrated good machine performance and the success of some of the more advanced workflows planned for 2011. Data collection will continue until early December. Facilities and Infrastructure Operations Operational and deployment support for the WMAgent and WorkQueue+Request Manager components, routinely used in production by Data Operations, is provided. The GlideInWMS components are now also installed and deployed at CERN, in addition to the GlideInWMS factory in the US. There has been new operational collaboration between the CERN team and the UCSD GlideIn factory operators, covering each other's time zones by monitoring/debugging pilot jobs sent from the facto...

  9. Strategizing Communication. Theory and Practice

    DEFF Research Database (Denmark)

    Gulbrandsen, Ib Tunby; Just, Sine Nørholm

    Strategizing Communication offers a unique perspective on the theory and practice of strategic communication. Written for students and practitioners interested in learning about and acquiring tools for dealing with the technological, environmental and managerial challenges, which organizations face...... when communicating in today’s mediascape, this book presents an array of theories, concepts and models through which we can understand and practice communication strategically. The core of the argument is in the title: strategizing – meaning the act of making something strategic. This entails looking...... beyond, but not past instrumental, rational plans in order to become better able to understand and manage the concrete, incremental practices and contexts in which communication becomes strategic. Thus, we argue that although strategic communicators do (and should) make plans, a plan in itself does...

  10. COMPUTING

    CERN Multimedia

    M. Kasemann

    CMS relies on a well-functioning, distributed computing infrastructure. The Site Availability Monitoring (SAM) and the Job Robot submission have been very instrumental for site commissioning, increasing the availability of sites so that they can participate in CSA07 and are ready to be used for analysis. The commissioning process has been further developed, including "lessons learned" documentation via the CMS twiki. Recently the visualization, presentation and summarizing of SAM tests for sites has been redesigned; it is now developed by the central ARDA project of WLCG. Work to test the new gLite Workload Management System was performed; a four-fold increase in throughput with respect to the LCG Resource Broker is observed. CMS has designed and launched a new-generation traffic load generator called "LoadTest" to commission and keep exercised all data transfer routes in the CMS PhEDEx topology. Since mid-February, a transfer volume of about 12 P...

  11. The Strategic Mediator

    DEFF Research Database (Denmark)

    Rossignoli, Cecilia; Carugati, Andrea; Mola, Lapo

    2009-01-01

    The last 10 years have witnessed the emergence of electronic marketplaces as players that leverage new technologies to facilitate B2B internet-mediated collaborative business. Nowadays these players are augmenting their services from simple intermediation to include new inter-organizational relat......The last 10 years have witnessed the emergence of electronic marketplaces as players that leverage new technologies to facilitate B2B internet-mediated collaborative business. Nowadays these players are augmenting their services from simple intermediation to include new inter......-marketplace assumes the paradoxical role of strategic mediator: an agent who upholds and heightens the fences of the transactions instead of leveling them. The results have implication in shaping how we see the role of technology as strategic or commoditized....

  12. Scope of strategic marketing

    Directory of Open Access Journals (Sweden)

    Bradley Frank

    2004-01-01

    Full Text Available Marketing is a philosophy that leads to the process by which organizations, groups and individuals obtain what they need and want by identifying value, providing it, communicating it and delivering it to others. The core concepts of marketing are customers' needs, wants and values; products, exchange, communications and relationships. Marketing is strategically concerned with the direction and scope of the long-term activities performed by the organization to obtain a competitive advantage. The organization applies its resources within a changing environment to satisfy customer needs while meeting stakeholder expectations. Implied in this view of strategic marketing is the requirement to develop a strategy to cope with competitors, identify market opportunities, develop and commercialize new products and services, allocate resources among marketing activities and design an appropriate organizational structure to ensure the performance desired is achieved.

  13. Strategic innovation portfolio management

    Directory of Open Access Journals (Sweden)

    Stanković Ljiljana

    2015-01-01

    Full Text Available In the knowledge-based economy, strategic innovation portfolio management becomes an increasingly important and critical factor in an enterprise's success. Value creation for all the participants in the value chain is more successful if it is based on efficient resource allocation and the improvement of innovation performance. Numerous studies have shown that the companies best positioned in the market found their competitiveness on the efficient development and exploitation of innovations. In the decision-making process, an enterprise's management is constantly faced with the challenge of allocating resources and capabilities as efficiently as possible, in both the short and the long term. In this paper the authors present preliminary results of empirical research on strategic innovation portfolio management in ten chosen enterprises in Serbia. The structure of the paper includes the following parts: theoretical background, explanation of research purpose and methodology, discussion of the results, and concluding remarks including limitations and directions for further research.

  14. Strategic port development

    DEFF Research Database (Denmark)

    Olesen, Peter Bjerg; Dukovska-Popovska, Iskra; Hvolby, Hans-Henrik

    2014-01-01

    developments. This paper examines a series of models from the port development literature and then proposes an approach for conceptualizing the strategic development of a port’s collaboration with local operators and the local hinterland, based on connected development steps. The paper is based on a literature...... review relevant to international port development and a case study done in a Danish port as part of the main author's PhD project. The proposed model provides a strategic approach to control and improve the development of a port system and the connected hinterland. While the model is generic in its......While large global ports are recognised as playing a central role in many supply chains as logistic gateways, smaller regional ports have been more stagnant and have not reached the same level of development as the larger ports. The research literature in relation to port development is also...

  15. Accelerated Unification

    OpenAIRE

    Arkani-Hamed, Nima; Cohen, Andrew; Georgi, Howard

    2001-01-01

    We construct four dimensional gauge theories in which the successful supersymmetric unification of gauge couplings is preserved but accelerated by N-fold replication of the MSSM gauge and Higgs structure. This results in a low unification scale of $10^{13/N}$ TeV.
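
    To unpack the scale quoted in the abstract (a worked instance, not taken from the paper itself): with N-fold replication the unification scale comes down as the N-th root, so

        M_GUT(N) ~ 10^(13/N) TeV
        N = 1:  10^13 TeV = 10^16 GeV   (roughly the standard supersymmetric unification scale)
        N = 3:  10^(13/3) TeV ≈ 2 × 10^4 TeV

    so even modest replication brings the unification scale down by many orders of magnitude while preserving the standard one as the N = 1 case.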

  16. Strategic Transfer Pricing

    OpenAIRE

    Michael Alles; Srikant Datar

    1998-01-01

    Most research into cost systems has focused on their motivational implications. This paper takes a different approach, by developing a model where two oligopolistic firms strategically select their cost-based transfer prices. Duopoly models frequently assume that firms game on their choice of prices. Product prices, however, are ultimately based on the firms' transfer prices that communicate manufacturing costs to marketing departments. It is for this reason that transfer prices will have a s...

  17. Thinking strategically about assessment

    OpenAIRE

    Mutch, A

    2002-01-01

    Drawing upon the literature on strategy formulation in organisations, this paper argues for a focus on strategy as process. It relates this to the need to think strategically about assessment, a need engendered by resource pressures, developments in learning and the demands of external stakeholders. It is argued that in practice assessment strategies are often formed at the level of practice, but that this produces contradiction and confusion at higher levels. Such tensions cannot be managed ...

  18. Naming as Strategic Communication

    DEFF Research Database (Denmark)

    Schmeltz, Line; Kjeldsen, Anna Karina

    2016-01-01

    This article presents a framework for understanding corporate name change as strategic communication. From a corporate branding perspective, the choice of a new name can be seen as a wish to stand out from a group of similar organizations. Conversely, from an institutional perspective, name change....... Second, it offers practical support to organizations, private as well as public, who find themselves in a situation where changing the name of the organization could be a way to reach either communicative or organizational goals....

  19. Strategic Stability: Contending Interpretations

    Science.gov (United States)

    2013-02-01

    John Hillas, Mathijs Jansen, Jos Potters, and Dries Vermeulen, “On the Relation Among Some Definitions of Strategic Stability,” Mathematics of... of those doors. Not only have basic and applied sciences grown closer together in many fields, but theoretical and experimental sciences have... science” bounds problems and provides insights. From the perspective of even the most theoretical science, the wormhole camera postulated by Sir

  20. Strategic Leadership towards Sustainability

    OpenAIRE

    Robèrt, Karl-Henrik; Broman, Göran; Waldron, David; Ny, Henrik; Byggeth, Sophie; Cook, David; Johansson, Lena; Oldmark, Jonas; Basile, George; Haraldsson, Hördur V.

    2004-01-01

    The Master's programme named "Strategic Leadership Towards Sustainability" is offered at the Blekinge Institute of Technology (Blekinge Tekniska Högskola) in Karlskrona, Sweden. This Master's programme builds on four central themes: (1) four scientific principles for socio-ecological sustainability; (2) a planning methodology of "backcasting" based on those scientific principles for sustainability; (3) a five-level model for planning in complex systems, into which backcasting is incorporated ...

  1. Strategic Human Resources Management

    OpenAIRE

    Marta Muqaj

    2016-01-01

    Strategic Human Resources Management (SHRM) represents an important and sensitive aspect of the functioning and development of a company, business, institution, state, public or private agency of a country. SHRM is based on a point of view of the psychological practices, especially by investing on empowerment, broad training and teamwork. This way it remains the primary resource to maintain stability and competitiveness. SHRM has lately evolved on fast and secure steps, and the transformation...

  2. Employee flourishing strategic framework

    OpenAIRE

    Stelzner, Samuel Georg Eric; Schutte, Corne S. L.

    2016-01-01

    This paper produces a preliminary version of a strategic framework for managing employee flourishing. ‘Flourishing’, a term from positive psychology, describes the experience of ‘the good life’. Providing this experience benefits employees. It also motivates them to sustain the enterprise that provides it. This positions employee flourishing as a strategy for long-term enterprise performance, a key concern of industrial engineering. The framework incorporates a systems approach and literature...

  3. Strategic Appraisal 1996.

    Science.gov (United States)

    1996-01-01

    Haiti 274; Mexico 276; Venezuela 277; Panama 278; Brazil 279; Drug Interdiction 280; Conclusion 281; Chapter Twelve, Ashley J. Tellis, SOUTH ASIA... Mexico or Saudi Arabia—and others might so challenge American values as practically to require U.S. military involvement. The United States... slowly and grudgingly, but he has almost surely made his basic strategic decision to reach a peaceful settlement—even if the posturing and haggling will

  4. Strategic Team AI Path Plans: Probabilistic Pathfinding

    Directory of Open Access Journals (Sweden)

    Tng C. H. John

    2008-01-01

    Full Text Available This paper proposes a novel method to generate strategic team AI pathfinding plans for computer games and simulations using probabilistic pathfinding. The method is inspired by genetic algorithms (Russell and Norvig, 2002), in that a fitness function is used to test the quality of the path plans. The method generates high-quality path plans by eliminating the low-quality ones. The path plans are generated by probabilistic pathfinding, and the elimination is done by a fitness test of the path plans. This path plan generation method has the ability to generate varied, high-quality paths, which is desirable for games because it increases replay value. This work is an extension of our earlier work on team AI: probabilistic pathfinding (John et al., 2006). We explore ways to combine probabilistic pathfinding and genetic algorithms to create a new method for generating strategic team AI pathfinding plans.

  5. THE MODELS OF STRATEGIC MANAGEMENT OF INFOCOMM BUSINESS

    Directory of Open Access Journals (Sweden)

    M. A. Lyashenko

    2015-01-01

    Based on the analysis of the information and communication business made in the article, one general idea of forming a strategy for managing infocommunication business was selected, which consists in full recognition of the inevitability of globalization processes in the modern world amid the accelerated development of information technologies. In these conditions, companies use such strategic means of competition as: increased productivity, mastering of new markets, creation of new business models, and attraction of talent on a global scale.

  6. Particle Accelerators in China

    Science.gov (United States)

    Zhang, Chuang; Fang, Shouxian

    As the special machines that can accelerate charged particle beams to high energy by using electromagnetic fields, particle accelerators have been widely applied in scientific research and various areas of society. The development of particle accelerators in China started in the early 1950s. After a brief review of the history of accelerators, this article describes in the following sections: particle colliders, heavy-ion accelerators, high-intensity proton accelerators, accelerator-based light sources, pulsed power accelerators, small scale accelerators, accelerators for applications, accelerator technology development and advanced accelerator concepts. The prospects of particle accelerators in China are also presented.

  7. MUON ACCELERATION

    Energy Technology Data Exchange (ETDEWEB)

    BERG,S.J.

    2003-11-18

    One of the major motivations driving recent interest in FFAGs is their use for the cost-effective acceleration of muons. This paper summarizes the progress in this area that was achieved leading up to and at the FFAG workshop at KEK from July 7-12, 2003. Much of the relevant background and references are also given here, to give a context to the progress we have made.

  8. 7 CFR 25.202 - Strategic plan.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 1 2010-01-01 false Strategic plan. 25.202 Section 25.202 Agriculture... Procedure § 25.202 Strategic plan. (a) Principles of strategic plan. The strategic plan included in the application must be developed in accordance with the following four key principles: (1) Strategic vision for...

  9. Strategic Planning: Shaping Future Success

    Science.gov (United States)

    2016-09-01

    PMs should already be conducting strategic planning for the long-term sustainment of their system. Maintenance planning, source of repair, and... Strategic planning helped solve this problem. From Sole Source to Competition: Some PMs deal with the challenge of breaking out of a sole-source ... strategic planning. For example, I worked a program that was stuck in a sole-source situation for decades. We developed a long-range plan that involved

  10. University Strategic Planning in Cameroon

    OpenAIRE

    Terfot Augustine Ngwana

    2003-01-01

    This article argues that the global, regional, and local realities can complement rather than contradict each other in the process of strategic planning for universities in Sub-Saharan Africa (SSA). Using the case of the University of Buea in Cameroon, it attempts to use the global trends of polarisation in knowledge production capacity as an input or tool for identifying strategic choice in the process of strategic planning in institutions. The national policy background is used to highlight...

  11. SYSTEM REFLEXIVE STRATEGIC MARKETING MANAGEMENT

    Directory of Open Access Journals (Sweden)

    A. Dligach

    2013-10-01

    Full Text Available This article reviews the System Reflexive paradigm of strategic marketing management, based on aligning the strategic economic interests of stakeholders: enterprise owners, hired managers, and consumers. The essence of the marketing concept of management comes under review, along with strategic management approaches to business and the buildup and alignment of the economic interests of business stakeholders. A roadmap for resolving the problems of modern marketing is proposed through the adoption of System Reflexive marketing theory.

  12. SYSTEM REFLEXIVE STRATEGIC MARKETING MANAGEMENT

    Directory of Open Access Journals (Sweden)

    Andrii A. DLIGACH

    2012-07-01

    Full Text Available This article reviews the System Reflexive paradigm of strategic marketing management, based on aligning the strategic economic interests of stakeholders: enterprise owners, hired managers, and consumers. The essence of the marketing concept of management comes under review, along with strategic management approaches to business and the buildup and alignment of the economic interests of business stakeholders. A roadmap for resolving the problems of modern marketing is proposed through the adoption of System Reflexive marketing theory.

  13. Laser acceleration

    Science.gov (United States)

    Tajima, T.; Nakajima, K.; Mourou, G.

    2017-02-01

    The fundamental idea of Laser Wakefield Acceleration (LWFA) is reviewed. An ultrafast intense laser pulse drives a coherent wakefield with a relativistic amplitude robustly supported by the plasma. While the large amplitude of the wakefield involves collective resonant oscillations of the eigenmode of the entire plasma electrons, the wake phase velocity ~ c and the ultrafastness of the laser pulse give the wake its stability and rigidity. A large number of worldwide experiments show rapid progress toward realizing this concept, both for the high-energy accelerator prospect and for broad applications. The strong interest in this has been spurring and stimulating novel laser technologies, including Chirped Pulse Amplification, Thin Film Compression, the Coherent Amplification Network, and Relativistic Mirror Compression. These in turn have created a conglomerate of novel science and technology with LWFA, forming a new genre of high field science in which many parameters of merit have lately been increasing exponentially. This science has triggered a number of worldwide research centers and initiatives. The associated physics of ion acceleration, X-ray generation, and astrophysical processes of ultrahigh-energy cosmic rays is reviewed. Applications such as X-ray free-electron lasers, cancer therapy, and radioisotope production are considered. A new avenue of LWFA using nanomaterials is also emerging.

  14. STRATEGIC COMMUNICATION IN MULTINATIONAL COMPANIES

    Directory of Open Access Journals (Sweden)

    Alexandrina Cristina VASILE

    2014-11-01

    Full Text Available The article intends to show how multinational companies gain market share and visibility by using appropriate strategic communication. The study evaluates the base framework, analysis, tools, data sources, sets of improvement plans and results that some multinational companies obtain by using strategic communication. The analysed companies are mainly American-based communications corporations, and the importance of communication in the current economic environment is underlined. The results show how important strategic communication is, alongside the information used and strategic management, in targeting a position in the market.

  15. Bucharest heavy ion accelerator facility

    Energy Technology Data Exchange (ETDEWEB)

    Ceausescu, V.; Dobrescu, S.; Duma, M.; Indreas, G.; Ivascu, M.; Papureanu, S.; Pascovici, G.; Semenescu, G.

    1986-02-15

    The heavy ion accelerator facility of the Heavy Ion Physics Department at the Institute of Physics and Nuclear Engineering in Bucharest is described. The Tandem accelerator development and the operation of the first stage of the heavy ion postaccelerating system are discussed. Details are given concerning the resonance cavities, the pulsing system matching the dc beam to the RF cavities and the computer control system.

  16. Strategic Human Resources Management

    Directory of Open Access Journals (Sweden)

    Marta Muqaj

    2016-07-01

    Full Text Available Strategic Human Resources Management (SHRM) represents an important and sensitive aspect of the functioning and development of a company, business, institution, state, or public or private agency of a country. SHRM is based on a point of view of psychological practices, especially investing in empowerment, broad training and teamwork. This way it remains the primary resource for maintaining stability and competitiveness. SHRM has lately evolved with fast and secure steps, and the transformation from Human Resources Management to SHRM is becoming popular, but it still remains impossible to estimate exactly how far SHRM has gone in updating the practices of HRM in organizations and institutions in general. This manuscript aims to reflect on strategic management and the factors influencing its practice in several organizations. The researchers aim to identify the influential factors that play key roles in SHRM, and to determine the challenges and priorities which lie ahead, in order to select the most appropriate model for achieving a desirable performance. SHRM is a key factor in the achievement of the objectives of the organization, based on HR through continuous performance growth. It is a complex, unpredictable process influenced by many external and internal factors, which aims to find the shortest way to achieve strategic competitive advantages by creating structure planning, organizing, thinking values, culture, communication, perspectives and the image of the organization. While traditional HR management focuses on the individual performance of employees, the strategic one is based on organizational performance, with the HRM system acting as a main factor in solving business issues and achieving competitive advantage.

  17. Strategic planning and republicanism

    Directory of Open Access Journals (Sweden)

    Mazza Luigi

    2010-01-01

    Full Text Available The paper develops two main linked themes: (i) strategic planning reveals in practice limits that are hard to overcome; and (ii) a complete planning system is effective only in the framework of a republican political, social and government culture. It is argued that the growing disappointment associated with strategic planning practices may be due to excessive expectations, and the difficulties encountered by strategic planning are traced to three main issues: (a) the relationship between politics and planning; (b) the relationship between government and governance; and (c) the relationship between space and socioeconomic development. Some authors have recently supported an idea of development as consisting in the qualitative evolution of forms of social rationality, and argued that a reflection on the relationships between physical transformations and visions of development could be a way of testing innovations. But such strong demands might be satisfied only if we manage to make a 'new social and territorial pact for development', recreating a social fabric imbued with shared values. The re-creation of a social fabric imbued with shared values requires a rich conception of the political community and the possibility that the moral purposes of the community may be incorporated by the state. All this is missing today. Outside a republican scheme, planning activities are principally instruments for legitimising vested interests and facilitating their investments, and the resolution of the conflicts that arise between the planning decisions of the various levels of government becomes all but impracticable. A complete planning system can be practised only if it can refer to the authority and syntheses expressed in and by statehood, which suggests that in a democratic system planning is republican by necessity rather than by choice.

  18. Strategic port development

    DEFF Research Database (Denmark)

    Olesen, Peter Bjerg; Dukovska-Popovska, Iskra; Steger-Jensen, Kenn;

    2012-01-01

    This paper proposes a framework for strategic development of a port’s collaboration with its hinterland. The framework is based on literature relevant to port development and takes a market perspective by considering import/export data relevant for the region of interest. The series of steps proposed in the framework provide ports with a systematic approach to finding possibilities for new business ventures and increasing integration with the hinterland. The framework is generic in its approach. A case study illustrates possible usage of the framework in terms of hinterland development.

  19. Beyond Strategic Vision

    CERN Document Server

    Cowley, Michael

    2012-01-01

    Hoshin is a system which was developed in Japan in the 1960s, and is a derivative of Management By Objectives (MBO). It is a management system for determining the appropriate course of action for an organization, and effectively accomplishing the relevant actions and results. Having recognized the power of this system, Beyond Strategic Vision tailors the Hoshin system to fit the culture of North American and European organizations. It is a "how-to" guide to the Hoshin method for executives, managers, and any other professionals who must plan as part of their normal job. The management of an o

  20. Strategic Global Climate Command?

    Science.gov (United States)

    Long, J. C. S.

    2016-12-01

    Researchers have been exploring geoengineering because anthropogenic GHG emissions could drive the globe towards uninhabitability for people, wildlife and vegetation. Potential global deployment of these technologies is inherently strategic. For example, solar radiation management to reflect more sunlight might be strategically useful during a period of time in which the population completes an effort to cease emissions, and carbon removal technologies might then be strategically deployed to move atmospheric concentrations back to a safer level. Consequently, deployment of these global technologies requires the ability to think and act strategically on the part of the planet's governments. Such capacity most definitely does not exist today, but it behooves scientists and engineers to be involved in thinking through how global command might develop, because the way they do the research could support the development of a capacity to deploy intervention rationally -- or irrationally. Internationalizing research would get countries used to working together. So would organizing the research in a step-wise manner, where at each step scientists become skilled at explaining what they have learned, the quality of the information they have, what they don't know, and what more they can do to reduce or handle uncertainty. Such a process can increase societal confidence in being able to make wise decisions about deployment. Global capacity will also be enhanced if the scientific establishment reinvents mission-driven research so that programs identify the systemic issues involved in any proposed technology and systematically address them with research, while still encouraging individual creativity. Geoengineering will diverge from climate science in that geoengineering research needs to design interventions for some publicly desirable goal and investigate whether a proposed intervention will achieve the desired outcomes. The effort must be a systems-engineering design problem

  1. Employee flourishing strategic framework

    Directory of Open Access Journals (Sweden)

    Stelzner, Samuel Georg Eric

    2016-11-01

    Full Text Available This paper produces a preliminary version of a strategic framework for managing employee flourishing. ‘Flourishing’, a term from positive psychology, describes the experience of ‘the good life’. Providing this experience benefits employees. It also motivates them to sustain the enterprise that provides it. This positions employee flourishing as a strategy for long-term enterprise performance, a key concern of industrial engineering. The framework incorporates a systems approach and literature from a variety of bodies of knowledge, including organisational behaviour and human resource management. The framework includes a process, tools, and elements that assist enterprises to manage employee flourishing.

  2. Strategic Urban Governance

    DEFF Research Database (Denmark)

    Pagh, Jesper

    2014-01-01

    The days of the long-term predict-and-provide planning that saw its heyday in the post-war decades are long gone. As our late-modern time presents us with an ever more complex and contrasting view of the world, planning has become a much more fragmented and ambivalent affair. That a country or a city should be run like a private corporation has increasingly become common sense, and thus the competition among entities – be they countries, regions or cities – to a greater and greater extent defines success and the means to achieve it. What has been collected under the umbrella term Strategic Urban...

  3. Strategic implementation plan

    Science.gov (United States)

    1989-01-01

    The Life Science Division of the NASA Office of Space Science and Applications (OSSA) describes its plans for assuring the health, safety, and productivity of astronauts in space, and its plans for acquiring further fundamental scientific knowledge concerning space life sciences. This strategic implementation plan details OSSA's goals, objectives, and planned initiatives. The following areas of interest are identified: operational medicine; biomedical research; space biology; exobiology; biospheric research; controlled ecological life support; flight programs and advance technology development; the life sciences educational program; and earth benefits from space life sciences.

  4. Strategizing and leadership

    OpenAIRE

    Marín Tuyá, Belén

    2013-01-01

    The development of strategizing, a concept introduced by Whittington (1996) that approaches strategy-in-practice "as something that people do", arose from growing dissatisfaction with conventional strategy research. Thus, while people were doing strategy, theories centred on multivariate analyses of the effects of strategy on organizational performance, with a curious absence of the human actors. With the aim of advancing the...

  5. The unfocused strategic vision.

    Science.gov (United States)

    Friedman, L H

    1997-01-01

    Integrated delivery systems are often seen as the answer to the question of how to deliver high-quality health services to a defined population at the lowest possible cost. This case examines the birth, growth, and ultimate demise of one such system. At first glance, all of the elements necessary for a successful integration were present, including visionary leadership and a well-defined strategic plan. However, the senior managers did not foresee the problems that would result from a clash of organizational cultures, significant mistrust between and among staff and physicians, and an inability to manage the emotional-cognitive landscape.

  6. Guam Strategic Energy Plan

    Energy Technology Data Exchange (ETDEWEB)

    Conrad, M. D.

    2013-07-01

    Describes various energy strategies available to Guam to meet the territory's goal of diversifying fuel sources and reducing fossil energy consumption 20% by 2020. The information presented in this strategic energy plan will be used by the Guam Energy Task Force to develop an energy action plan. Available energy strategies include policy changes, education and outreach, reducing energy consumption at federal facilities, and expanding the use of a range of energy technologies, including buildings energy efficiency and conservation, renewable electricity production, and alternative transportation. The strategies are categorized based on the time required to implement them.

  7. Strategic performance management evaluation for the Navy's SPLICE local area networks

    OpenAIRE

    Blankenship, David D.

    1985-01-01

    Approved for public release; distribution is unlimited. This thesis investigates those aspects of network performance evaluation thought to pertain specifically to strategic performance management evaluation of the Navy's Stock Point Logistics Integrated Communications Environment (SPLICE) local area networks at stock point and inventory control point sites. Background is provided concerning the SPLICE Project, strategic management, and computer performance evaluation tools...

  8. Accelerated Profile HMM Searches.

    Directory of Open Access Journals (Sweden)

    Sean R Eddy

    2011-10-01

    Full Text Available Profile hidden Markov models (profile HMMs) and probabilistic inference methods have made important contributions to the theory of sequence database homology search. However, practical use of profile HMM methods has been hindered by the computational expense of existing software implementations. Here I describe an acceleration heuristic for profile HMMs, the "multiple segment Viterbi" (MSV) algorithm. The MSV algorithm computes an optimal sum of multiple ungapped local alignment segments using a striped vector-parallel approach previously described for fast Smith/Waterman alignment. MSV scores follow the same statistical distribution as gapped optimal local alignment scores, allowing rapid evaluation of the significance of an MSV score and thus facilitating its use as a heuristic filter. I also describe a 20-fold acceleration of the standard profile HMM Forward/Backward algorithms using a method I call "sparse rescaling". These methods are assembled in a pipeline in which high-scoring MSV hits are passed on for reanalysis with the full HMM Forward/Backward algorithm. This accelerated pipeline is implemented in the freely available HMMER3 software package. Performance benchmarks show that the use of the heuristic MSV filter sacrifices negligible sensitivity compared to unaccelerated profile HMM searches. HMMER3 is substantially more sensitive and 100- to 1000-fold faster than HMMER2. HMMER3 is now about as fast as BLAST for protein searches.

  9. Accelerators and the Accelerator Community

    Energy Technology Data Exchange (ETDEWEB)

    Malamud, Ernest; Sessler, Andrew

    2008-06-01

    In this paper, standing back and looking from afar, and adopting a historical perspective, the field of accelerator science is examined. How it grew, what are the forces that made it what it is, where it is now, and what it is likely to be in the future are the subjects explored. Clearly, a great deal of personal opinion is invoked in this process.

  10. Application of local area networks to accelerator control systems at the Stanford Linear Accelerator

    Energy Technology Data Exchange (ETDEWEB)

    Fox, J.D.; Linstadt, E.; Melen, R.

    1983-03-01

    The history and current status of SLAC's SDLC networks for distributed accelerator control systems are discussed. These local area networks have been used for instrumentation and control of the linear accelerator. Network topologies, protocols, physical links, and logical interconnections are discussed for specific applications in distributed data acquisition and control system, computer networks and accelerator operations.

  11. accelerating cavity

    CERN Multimedia

    On the inside of the cavity there is a layer of niobium. Operating at 4.2 degrees above absolute zero, the niobium is superconducting and carries an accelerating field of 6 million volts per metre with negligible losses. Each cavity has a surface of 6 m2. The niobium layer is only 1.2 microns thick, ten times thinner than a hair. Such a large area had never been coated to such a high accuracy. A speck of dust could ruin the performance of the whole cavity so the work had to be done in an extremely clean environment.

  12. Impact accelerations

    Science.gov (United States)

    Vongierke, H. E.; Brinkley, J. W.

    1975-01-01

    The degree to which impact acceleration is an important factor in space flight environments depends primarily upon the technology of capsule landing deceleration and the weight permissible for the associated hardware: parachutes or deceleration rockets, inflatable air bags, or other impact attenuation systems. The problem most specific to space medicine is the potential change of impact tolerance due to reduced bone mass and muscle strength caused by prolonged weightlessness and physical inactivity. Impact hazards, tolerance limits, and human impact tolerance related to space missions are described.

  13. Operationalizing strategic marketing.

    Science.gov (United States)

    Chambers, S B

    1989-05-01

    The strategic marketing process, like any administrative practice, is far simpler to conceptualize than operationalize within an organization. It is for this reason that this chapter focused on providing practical techniques and strategies for implementing the strategic marketing process. First and foremost, the marketing effort needs to be marketed to the various publics of the organization. This chapter advocated the need to organize the marketing analysis into organizational, competitive, and market phases, and it provided examples of possible designs of the phases. The importance and techniques for exhausting secondary data sources and conducting efficient primary data collection methods were explained and illustrated. Strategies for determining marketing opportunities and threats, as well as segmenting markets, were detailed. The chapter provided techniques for developing marketing strategies, including considering the five patterns of coverage available; determining competitor's position and the marketing mix; examining the stage of the product life cycle; and employing a consumer decision model. The importance of developing explicit objectives, goals, and detailed action plans was emphasized. Finally, helpful hints for operationalizing the communication variable and evaluating marketing programs were provided.

  14. The Science of Strategic Communication

    Science.gov (United States)

    The field of Strategic Communication involves a focused effort to identify, develop, and present multiple types of communication media on a given subject. A Strategic Communication program recognizes the limitations of the most common communication models (primarily “one s...

  15. Strategic Marketing for Educational Systems.

    Science.gov (United States)

    Hanson, E. Mark; Henry, Walter

    1992-01-01

    Private-sector strategic marketing processes can significantly benefit schools desiring to develop public confidence and support and establish guidelines for future development. This article defines a strategic marketing model for school systems and articulates the sequence of related research and operational steps comprising it. Although schools…

  16. Always Strategic: Jointly Essential Landpower

    Science.gov (United States)

    2015-02-01

    ... case to be made for emphasizing their extraordinary relative importance. It is necessary to differentiate between general and contextually ... contextual reality almost always seems compatible with at least a shortlist of possibly appropriate alternative strategic approaches. That said, perhaps

  17. Strategic Planning Is an Oxymoron

    Science.gov (United States)

    Bassett, Patrick F.

    2012-01-01

    The thinking on "strategic thinking" has evolved significantly over the years. In the previous century, the independent school strategy was to focus on long-range planning, blithely projecting 10 years into the future. For decades this worked well enough, but in the late 20th century, independent schools shifted to "strategic planning," with its…

  18. Tax rates as strategic substitutes

    NARCIS (Netherlands)

    H. Vrijburg (Hendrik); R.A. de Mooij (Ruud)

    2016-01-01

    textabstractThis paper analytically derives conditions under which the slope of the tax-reaction function is negative in a classical tax competition model. If countries maximize welfare, a negative slope (reflecting strategic substitutability) occurs under relatively mild conditions. The strategic t

  19. Strategic Interactions in Franchise Relationships

    NARCIS (Netherlands)

    Croonen, Evelien Petronella Maria

    2006-01-01

    This dissertation deals with understanding strategic interactions between franchisors and franchisees. The empirical part of this study consists of in-depth case studies in four franchise systems in the Dutch drugstore industry. The case studies focus on a total of eight strategic change processes i

  20. Strategic Management: A Comprehensive Bibliography.

    Science.gov (United States)

    Chaffee, Ellen Earle; de Alba, Renee

    A bibliography on strategic management is presented to assist both practitioners and researchers. Criteria for inclusion were as follows: (1) general in scope, providing introductory information on a variety of subtopics within strategic management; (2) indications that the work is becoming a classic (i.e., frequent citations by other authors);…

  1. Strategic Sealift Supporting Army Deployments

    Science.gov (United States)

    2016-06-10

    STRATEGIC SEALIFT SUPPORTING ARMY DEPLOYMENTS. A thesis presented to the Faculty of the U.S. Army Command and General Staff... THOMPSON, MAJ, US ARMY; BFA, Louisiana Tech University, Ruston, Louisiana, 1994. Fort Leavenworth, Kansas, 2016. Approved for...

  2. Strategic directions in tissue engineering.

    NARCIS (Netherlands)

    Johnson, P.C.; Mikos, A.G.; Fisher, J.P.; Jansen, J.A.

    2007-01-01

    The field of tissue engineering is developing rapidly. Given its ultimate importance to clinical care, the time is appropriate to assess the field's strategic directions to optimize research and development activities. To characterize strategic directions in tissue engineering, a distant but reachab

  3. Strategic Aspects of Cost Management

    Directory of Open Access Journals (Sweden)

    Angelika I. Petrova

    2013-01-01

    Full Text Available This report is a summary of research done in the area of Strategic Cost Management (SCM). The report includes a detailed discussion and application of Life Cycle Costing (LCC), which a company can use to achieve its strategic objectives in today's dynamic business environment. Hence, the main focus of this report is on LCC, as mentioned

  4. NASA Space Sciences Strategic Planning

    Science.gov (United States)

    Crane, Philippe

    2004-01-01

    The purpose of the strategic planning roadmap is to: fulfill the strategic planning requirements; provide a guide to the science community in presenting research requests to NASA; inform and inspire; focus investments in technology and research for future missions; and provide the scientific and technical justification for augmentation requests.

  5. Strategic Planning Is an Oxymoron

    Science.gov (United States)

    Bassett, Patrick F.

    2012-01-01

    The thinking on "strategic thinking" has evolved significantly over the years. In the previous century, the independent school strategy was to focus on long-range planning, blithely projecting 10 years into the future. For decades this worked well enough, but in the late 20th century, independent schools shifted to "strategic planning," with its…

  6. Strategic Human Resource Development. Symposium.

    Science.gov (United States)

    2002

    This document contains three papers on strategic human resource (HR) development. "Strategic HR Orientation and Firm Performance in India" (Kuldeep Singh) reports findings from a study of Indian business executives that suggests there is a positive link between HR policies and practices and workforce motivation and loyalty and…

  7. IS and Business Leaders' Strategizing

    DEFF Research Database (Denmark)

    Hansen, Anne Mette

    , and productivity. However, strategizing in such dynamic environments is not a straightforward process. While IS and business leaders must develop new IS strategic objectives and move quickly towards new opportunities, they must also be good at exploiting the value of current assets and reducing the costs of existing...

  8. Transfers, Contracts and Strategic Games

    NARCIS (Netherlands)

    Kleppe, J.; Hendrickx, R.L.P.; Borm, P.E.M.; Garcia-Jurado, I.; Fiestras-Janeiro, G.

    2007-01-01

    This paper analyses the role of transfer payments and strategic contracting within two-person strategic form games with monetary payoffs. First, it introduces the notion of transfer equilibrium as a strategy combination for which individual stability can be supported by allowing the possibilit

  9. Tax rates as strategic substitutes

    NARCIS (Netherlands)

    H. Vrijburg (Hendrik); R.A. de Mooij (Ruud)

    2016-01-01

    textabstractThis paper analytically derives conditions under which the slope of the tax-reaction function is negative in a classical tax competition model. If countries maximize welfare, a negative slope (reflecting strategic substitutability) occurs under relatively mild conditions. The strategic t

  10. Strategic Planning for Higher Education.

    Science.gov (United States)

    Kotler, Philip; Murphy, Patrick E.

    1981-01-01

    The framework necessary for achieving a strategic planning posture in higher education is outlined. The most important benefit of strategic planning for higher education decision makers is that it forces them to undertake a more market-oriented and systematic approach to long-range planning. (Author/MLW)

  11. Energy Innovation Acceleration Program

    Energy Technology Data Exchange (ETDEWEB)

    Wolfson, Johanna [Fraunhofer USA Inc., Center for Sustainable Energy Systems, Boston, MA (United States)

    2015-06-15

    The Energy Innovation Acceleration Program (IAP) – also called U-Launch – has had a significant impact on early stage clean energy companies in the Northeast and on the region's clean energy economy, not only during program execution (2010-2014), but continuing into the future. Key results include: a leverage ratio of 105:1; $105M in follow-on funding (upon a $1M investment by EERE); at least 19 commercial products launched; at least 17 new industry partnerships formed; at least $6.5M in revenue generated; more than 140 jobs created; and 60% of assisted companies receiving follow-on funding within 1 year of program completion. In addition to the direct measurable program results summarized above, two primary lessons emerged from our work executing Energy IAP. First, validation and demonstration awards have an outsized, 'tipping-point' effect for startups looking to secure investments and strategic partnerships. Second, an ecosystem approach is valuable, but an approach that evaluates the needs of individual companies and then draws from diverse ecosystem resources to fill them is most valuable of all.

  12. ABSTRACTS Preliminary Study of Strategic Inner Cores

    Institute of Scientific and Technical Information of China (English)

    2012-01-01

    When a strategic entity attempts to make a decision, the project must first be in accordance with its strategic framework, as well as make the strategic inner cores prominent. The existing theories of development strategy indicate that the formation of the framework can be divided into the following parts: inside and outside environments, purpose, goal, key points, and countermeasures. The strategic inner cores put forward by this paper are an intensification and advancement of the theory of the strategic framework; strategic orientation, strategic vision and main line are included. The appearance of these ideas has improved the theory and enhanced strategic practice.

  13. STRATEGIC PLANNING AT SPORTS ORGANIZATIONS

    Directory of Open Access Journals (Sweden)

    Radovan Ilić

    2013-10-01

    Full Text Available The article defines the terminology of strategic planning at sports organizations and puts an accent on its specifics. The first part explains what planning is and what its functions are in strategic management, in order to further shed light on the theoretical terminology of strategic planning and strategic management, as well as to explain the relation between them. In the second part the phases of planning in sports are revised as follows: (1) the preplanning phase, (2) the strategy formulating phase, (3) the strategy implementing phase, and (4) evaluation and control of the planned assignments. The last part of the article is dedicated to concluding revisions. The conclusions from the research on this complex problem area are enumerated with a long-term view of strategic planning in sports organizations.

  14. Massively parallel computational fluid dynamics calculations for aerodynamics and aerothermodynamics applications

    Energy Technology Data Exchange (ETDEWEB)

    Payne, J.L.; Hassan, B.

    1998-09-01

    Massively parallel computers have enabled the analyst to solve complicated flow fields (turbulent, chemically reacting) that were previously intractable. Calculations are presented using a massively parallel CFD code called SACCARA (Sandia Advanced Code for Compressible Aerothermodynamics Research and Analysis) currently under development at Sandia National Laboratories as part of the Department of Energy (DOE) Accelerated Strategic Computing Initiative (ASCI). Computations were made on a generic reentry vehicle in a hypersonic flowfield utilizing three different distributed parallel computers to assess the parallel efficiency of the code with increasing numbers of processors. The parallel efficiencies for the SACCARA code will be presented for cases using 1, 150, 100 and 500 processors. Computations were also made on a subsonic/transonic vehicle using both 236 and 521 processors on a grid containing approximately 14.7 million grid points. Ongoing and future plans to implement a parallel overset grid capability and couple SACCARA with other mechanics codes in a massively parallel environment are discussed.

  15. Multinational Corporation and International Strategic Alliance

    Institute of Scientific and Technical Information of China (English)

    陆兮

    2015-01-01

    The world is now deep into the second great wave of globalization, in which products, capital, and markets are becoming more and more integrated across countries. Multinational corporations are growing rapidly around the globe and playing a significant role in the world economy. Meanwhile, the accelerated rate of globalization has also imposed pressures on MNCs, leaving them desperately seeking overseas alliances in order to remain competitive. International strategic alliances, which bring together large and commonly competitive firms for specific purposes, have gradually shown their importance in the world market. And the form of the international joint venture is now widely adopted. After the formation of an alliance, selecting the right partner, formulating the right strategies, and establishing a harmonious and effective partnership are generally the keys to success.

  16. Capabilities for Strategic Adaptation

    DEFF Research Database (Denmark)

    Distel, Andreas Philipp

    This dissertation explores capabilities that enable firms to strategically adapt to environmental changes and preserve competitiveness over time – often referred to as dynamic capabilities. While dynamic capabilities are a popular research domain, too little is known about what these capabilities ... empirical studies through the dynamic capabilities lens and develops propositions for future research. The second paper is an empirical study on the origins of firm-level absorptive capacity; it explores how organization-level antecedents, through their impact on individual-level antecedents, influence firms’ ability to absorb and leverage new knowledge. The third paper is an empirical study which conceptualizes top managers’ resource cognition as a managerial capability underlying firms’ resource adaptation; it empirically examines the performance implications of this capability and organizational...

  17. Strategic port development

    DEFF Research Database (Denmark)

    Olesen, Peter Bjerg; Dukovska-Popovska, Iskra; Hvolby, Hans-Henrik

    2014-01-01

    While large global ports are recognised as playing a central role in many supply chains as logistic gateways, smaller regional ports have been more stagnant and have not reached the same level of development as the larger ports. The research literature in relation to port development is also ... developments. This paper examines a series of models from the port development literature and then proposes an approach for conceptualizing the strategic development of a port’s collaboration with local operators and the local hinterland based on connected development steps. The paper is based on a literature review relevant to international port development and a case study done in a Danish port as part of the main author’s PhD project. While the model is generic in its approach, the authors apply it to a Danish case study that illustrates its potential usage in determining port development.

  18. Strategizing on innovation systems

    DEFF Research Database (Denmark)

    Jofre, Sergio

    This paper explores the strategic context of the implementation of the European Institute of Technology (EIT) from the perspective of National Innovation Systems (NIS) and the Triple Helix of University-Government-Industry relationship. The analytical framework is given by a comparative study ... mobilization. These disparities are less evident when comparing NIS in Japan and the US alone, suggesting a merging trend encouraged by mutual learning. Conversely, the average innovation performance in the EU is greatly affected by the heterogeneous performance among Member States, notably among those newly ... , the common path observed in the US and Japan becomes more evident. In the EU this relationship varies greatly among States; however, an overall trend is identified. We observe that the predominant EU triple helix model is characterized by a strong link between government and university and a weaker link ...

  19. Accelerating abelian gauge dynamics

    CERN Document Server

    Adler, Stephen Louis

    1991-01-01

    In this paper, we suggest a new acceleration method for Abelian gauge theories based on linear transformations to variables which weight all length scales equally. We measure the autocorrelation time for the Polyakov loop and the plaquette at β=1.0 in the U(1) gauge theory in four dimensions, for the new method and for standard Metropolis updates. We find a dramatic improvement for the new method over the Metropolis method. Computing the critical exponent z for the new method remains an important open issue.

  20. Thread Group Multithreading: Accelerating the Computation of an Agent-Based Power System Modeling and Simulation Tool -- GridLAB-D

    Energy Technology Data Exchange (ETDEWEB)

    Jin, Shuangshuang; Chassin, David P.

    2014-01-06

    GridLAB-D™ is an open source next generation agent-based smart-grid simulator that provides unprecedented capability to model the performance of smart grid technologies. Over the past few years, GridLAB-D has been used to conduct important analyses of smart grid concepts, but it is still quite limited by its computational performance. In order to break through the performance bottleneck to meet the need for large-scale power grid simulations, we developed a thread group mechanism to implement highly granular multithreaded computation in GridLAB-D. We achieve close to linear speedups with the multithreaded version compared against the single-threaded version of the same code, running a benchmark simple house model on general-purpose multi-core commodity hardware. The performance of the multithreaded code shows favorable scalability properties and resource utilization, and much shorter execution times for large-scale power grid simulations.

  1. FAMOUS, faster: using parallel computing techniques to accelerate the FAMOUS/HadCM3 climate model with a focus on the radiative transfer algorithm

    Directory of Open Access Journals (Sweden)

    P. Hanappe

    2011-09-01

    Full Text Available We have optimised the atmospheric radiation algorithm of the FAMOUS climate model on several hardware platforms. The optimisation involved translating the Fortran code to C and restructuring the algorithm around the computation of a single air column. Instead of the existing MPI-based domain decomposition, we used a task queue and a thread pool to schedule the computation of individual columns on the available processors. Finally, four air columns are packed together in a single data structure and computed simultaneously using Single Instruction Multiple Data operations.

    The modified algorithm runs more than 50 times faster on the CELL's Synergistic Processing Element than on its main PowerPC processing element. On Intel-compatible processors, the new radiation code runs 4 times faster. On the tested graphics processor, using OpenCL, we find a speed-up of more than 2.5 times as compared to the original code on the main CPU. Because the radiation code takes more than 60 % of the total CPU time, FAMOUS executes more than twice as fast. Our version of the algorithm returns bit-wise identical results, which demonstrates the robustness of our approach. We estimate that this project required around two and a half man-years of work.

  2. FAMOUS, faster: using parallel computing techniques to accelerate the FAMOUS/HadCM3 climate model with a focus on the radiative transfer algorithm

    Directory of Open Access Journals (Sweden)

    P. Hanappe

    2011-06-01

    Full Text Available We have optimised the atmospheric radiation algorithm of the FAMOUS climate model on several hardware platforms. The optimisation involved translating the Fortran code to C and restructuring the algorithm around the computation of a single air column. A task queue and a thread pool are used to distribute the computation to several processors. Finally, four air columns are packed together in a single data structure and computed simultaneously using Single Instruction Multiple Data operations.

    The modified algorithm runs more than 50 times faster on the CELL's Synergistic Processing Elements than on its main PowerPC processing element. On Intel-compatible processors, the new radiation code runs 4 times faster and on graphics processors, using OpenCL, more than 2.5 times faster, as compared to the original code. Because the radiation code takes more than 60 % of the total CPU time, FAMOUS executes more than twice as fast. Our version of the algorithm returns bit-wise identical results, which demonstrates the robustness of our approach.

  3. Complex Strategic Choices: Applying Systemic Planning for Strategic Decision Making

    CERN Document Server

    Leleur, Steen

    2012-01-01

    Effective decision making requires a clear methodology, particularly in a complex world of globalisation. Institutions and companies in all disciplines and sectors are faced with increasingly multi-faceted areas of uncertainty which cannot always be effectively handled by traditional strategies. Complex Strategic Choices provides clear principles and methods which can guide and support strategic decision making to face the many current challenges. By considering ways in which planning practices can be renewed and exploring the possibilities for acquiring awareness and tools to add value to strategic decision making, Complex Strategic Choices presents a methodology which is further illustrated by a number of case studies and example applications. Dr. Techn. Steen Leleur has adapted previously established research based on feedback and input from various conferences, journals and students resulting in new material stemming from and focusing on practical application of a systemic approach. The outcome is a coher...

  4. New strategic roles of manufacturing

    DEFF Research Database (Denmark)

    Yang, Cheng; Johansen, John; Boer, Harry

    2008-01-01

    This paper aims to view manufacturing from a new angle, and tries to look beyond fit, focus and trade-offs, approaches which may no longer be sufficient for long-term competitive success. Four cases from different industries are described and used to illustrate and discuss the possibility of manufacturing playing new strategic roles. Backward, forward and lateral interactive support are suggested to explicate how manufacturing can realize its new strategic roles. Finally, four new strategic roles of manufacturing are suggested: innovation manufacturing, ramp-up manufacturing, primary manufacturing, and service manufacturing.

  5. The Emerging Strategic Entrepreneurship Field

    DEFF Research Database (Denmark)

    Foss, Nicolai Juul; Lyngsie, Jacob

    The field of strategic entrepreneurship is a fairly recent one. Its central idea is that opportunity-seeking and advantage-seeking — the former the central subject of the entrepreneurship field, the latter the central subject of the strategic management field — are processes that need to be considered jointly. The purpose of this brief chapter is to explain the emergence of the SE field as a response to research gaps in the neighboring fields of entrepreneurship and strategic management; describe the main tenets of SE theory; discuss its relations to neighboring fields; and finally...

  6. Strategic Planning in Hungarian Municipalities

    Directory of Open Access Journals (Sweden)

    Izabella BARATI-STEC

    2015-12-01

    Full Text Available The paper gives a summary of the most recent literature on strategic planning in local governments, while placing the findings in a historical context. It focuses on planning in post-socialist countries, the impact of the heritage of decades of central planning, and the relationship between decentralization and planning. The study concludes that while strategic planning improves the performance of local governments, special aspects, such as the financial dependency of municipalities, a focus on daily operations and short-term results, and the enhanced need for institutional and personal capacity management, must be taken care of while implementing strategic planning in local governments.

  7. Application of Plasma Waveguides to High Energy Accelerators

    Energy Technology Data Exchange (ETDEWEB)

    Milchberg, Howard [Univ. of Maryland, College Park, MD (United States)

    2016-07-01

    This grant supported basic experimental, theoretical and computer simulation research into developing a compact, high pulse repetition rate laser accelerator using the direct laser acceleration mechanism in plasma-based slow wave structures.

  8. Particle Accelerator Focus Automation

    Directory of Open Access Journals (Sweden)

    Lopes José

    2017-08-01

    Full Text Available The Laboratório de Aceleradores e Tecnologias de Radiação (LATR) at the Campus Tecnológico e Nuclear of Instituto Superior Técnico (IST) has a horizontal electrostatic particle accelerator based on the Van de Graaff machine, which is used for research in the area of material characterization. This machine produces alpha (He+) and proton (H+) beams of some μA currents at up to 2 MeV/q energies. Beam focusing is obtained using a cylindrical lens of the Einzel type, assembled near the high voltage terminal. This paper describes the developed system that automatically focuses the ion beam, using a personal computer running the LabVIEW software, a multifunction input/output board and signal conditioning circuits. The focusing procedure consists of a scanning method to find the lens bias voltage which maximizes the beam current measured on a beam stopper target, which is used as feedback for the scanning cycle. This system, as part of a wider start-up and shut-down automation system built for this particle accelerator, brings great advantages to the operation of the accelerator by making it faster and easier to operate, requiring less human presence, and adding the possibility of total remote control in safe conditions.

  9. Particle Accelerator Focus Automation

    Science.gov (United States)

    Lopes, José; Rocha, Jorge; Redondo, Luís; Cruz, João

    2017-08-01

    The Laboratório de Aceleradores e Tecnologias de Radiação (LATR) at the Campus Tecnológico e Nuclear of Instituto Superior Técnico (IST) has a horizontal electrostatic particle accelerator based on the Van de Graaff machine, which is used for research in the area of material characterization. This machine produces alpha (He+) and proton (H+) beams with currents of a few μA at energies up to 2 MeV/q. Beam focusing is obtained using a cylindrical lens of the Einzel type, assembled near the high voltage terminal. This paper describes the developed system that automatically focuses the ion beam, using a personal computer running the LabVIEW software, a multifunction input/output board and signal conditioning circuits. The focusing procedure consists of a scanning method to find the lens bias voltage which maximizes the beam current measured on a beam stopper target, which serves as feedback for the scanning cycle. This system, as part of a wider start-up and shut-down automation system built for this particle accelerator, brings great advantages to the operation of the accelerator by making it faster and easier to operate, requiring less human presence, and adding the possibility of total remote control in safe conditions.

  10. 75 FR 18824 - Federal Advisory Committee; U.S. Strategic Command Strategic Advisory Group; Closed Meeting

    Science.gov (United States)

    2010-04-13

    Office of the Secretary, Federal Advisory Committee: the notice announces that the U.S. Strategic Command Strategic Advisory Group will meet in closed session on May 6 and 7, 2010. The Group provides advice on intelligence and policy-related issues to the Commander, U.S. Strategic Command, during the development of the...

  11. The neoliberalisation of strategic spatial planning

    DEFF Research Database (Denmark)

    Olesen, Kristian

    2014-01-01

    Strategic spatial planning practices have recently taken a neoliberal turn in many northwestern European countries. This neoliberalisation of strategic spatial planning has materialised partly in governance reforms aiming to reduce or abolish strategic spatial planning at national and regional...... scales, and partly through the normalisation of neoliberal discourses in strategic spatial planning processes. This paper analyses the complex relationship, partly of unease and partly of coevolution, between neoliberalism and strategic spatial planning. Furthermore, the paper discusses the key...... challenges for strategic spatial planning in the face of neoliberalism and argues for a need to strengthen strategic spatial planning’s critical dimension....

  12. Abstract Acceleration of General Linear Loops

    OpenAIRE

    2014-01-01

    We present abstract acceleration techniques for computing loop invariants for numerical programs with linear assignments and conditionals. Whereas abstract interpretation techniques typically over-approximate the set of reachable states iteratively, abstract acceleration captures the effect of the loop with a single, non-iterative transfer function applied to the initial states at the loop head. In contrast to previous acceleration techniques, our approach applies to a...
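
    To make the "single transfer function" idea concrete, here is a minimal numerical illustration (an assumed example operating on single states, not the paper's abstract-domain machinery, which works over sets of states): for a loop body x <- Ax + b, the state after k iterations has the closed form x_k = A^k x_0 + (A^{k-1} + ... + I) b, so the loop's effect can be applied in one shot.

    ```python
    import numpy as np

    def accelerate(A, b, x0, k):
        """Closed-form effect of k iterations of the loop body x <- A x + b."""
        Ak = np.linalg.matrix_power(A, k)
        S = sum(np.linalg.matrix_power(A, j) for j in range(k))  # A^{k-1}+...+I
        return Ak @ x0 + S @ b

    A = np.array([[1.0, 1.0], [0.0, 1.0]])   # models i += j
    b = np.array([0.0, 1.0])                 # models j += 1
    x0 = np.array([0.0, 0.0])
    print(accelerate(A, b, x0, 10))          # [45. 10.]: state after 10 iterations
    ```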

  13. Control of robot dynamics using acceleration control

    Science.gov (United States)

    Workman, G. L.; Prateru, S.; Li, W.; Hinman, Elaine

    1992-01-01

    Acceleration control of robotic devices can provide improvements to many space-based operations using flexible manipulators and to ground-based operations requiring better precision and efficiency than current industrial robots can provide. This paper reports on a preliminary study of acceleration measurement on robotic motion during parabolic flights on the NASA KC-135 and a parallel study of accelerations with and without gravity arising from computer simulated motions using TREETOPS software.

  14. Standard test method for accelerated leach test for diffusive releases from solidified waste and a computer program to model diffusive, fractional leaching from cylindrical waste forms

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2008-01-01

    1.1 This test method provides procedures for measuring the leach rates of elements from a solidified matrix material, determining if the releases are controlled by mass diffusion, computing values of diffusion constants based on models, and verifying projected long-term diffusive releases. This test method is applicable to any material that does not degrade or deform during the test. 1.1.1 If mass diffusion is the dominant step in the leaching mechanism, then the results of this test can be used to calculate diffusion coefficients using mathematical diffusion models. A computer program developed for that purpose is available as a companion to this test method (Note 1). 1.1.2 It should be verified that leaching is controlled by diffusion by a means other than analysis of the leach test solution data. Analysis of concentration profiles of species of interest near the surface of the solid waste form after the test is recommended for this purpose. 1.1.3 Potential effects of partitioning on the test results can...
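
    For orientation, the semi-infinite-medium estimate commonly used for such data (e.g. in ANS 16.1-style analyses) computes an effective diffusivity from each leach interval. This is an assumed illustration, not the companion program referenced by the standard, which models diffusion from finite cylinders.

    ```python
    import math

    def effective_diffusivity(a_frac, t0, t1, v_over_s):
        """Effective diffusion coefficient (cm^2/s) from one leach interval,
        under the semi-infinite-medium approximation.

        a_frac   -- incremental fraction of inventory leached in (t0, t1)
        t0, t1   -- interval bounds in seconds
        v_over_s -- specimen volume-to-surface ratio in cm
        """
        T = ((math.sqrt(t1) + math.sqrt(t0)) / 2.0) ** 2   # mean-time factor
        return math.pi * (a_frac / (t1 - t0)) ** 2 * v_over_s ** 2 * T

    # Hypothetical interval: 0.5 % of inventory leached between day 1 and day 2.
    print(effective_diffusivity(0.005, 86400.0, 172800.0, v_over_s=0.97))
    ```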

  15. Strategic Arrivals Recommendation Tool Project

    Data.gov (United States)

    National Aeronautics and Space Administration — During the conduct of a NASA Research Announcement (NRA) in 2012 and 2013, the Mosaic ATM team first developed the Strategic Arrivals Recommendation Tool concept, or...

  16. Dominant investors and strategic transparency

    NARCIS (Netherlands)

    E.C. Perotti; E.-L. von Thadden

    1999-01-01

    This paper studies product market competition under a strategic transparency decision. Dominant investors can influence information collection in the financial market, and thereby corporate transparency, by affecting market liquidity or the cost of information collection. More transparency on a firm

  17. Dominant investors and strategic transparency

    NARCIS (Netherlands)

    E.C. Perotti; E.-L. von Thadden

    1998-01-01

    This paper studies product market competition under a strategic transparency decision. Dominant investors can influence information collection in the financial market, and thereby corporate transparency, by affecting market liquidity or the cost of information collection. More transparency on a firm

  18. Managing transdisciplinarity in strategic foresight

    DEFF Research Database (Denmark)

    Rasmussen, Birgitte; Andersen, Per Dannemand; Borch, Kristian

    2010-01-01

    Strategic foresight deals with the long term future and is a transdisciplinary exercise which, among other aims, addresses the prioritization of science and other decision making in science and innovation advisory and funding bodies. This article discusses challenges in strategic foresight...... in relation to transdisciplinarity based on empirical as well as theoretical work in technological domains. By strategic foresight is meant future oriented, participatory consultation of actors and stakeholders, both within and outside a scientific community. It therefore allows multiple stakeholders...... Although strategic foresight has now been widely accepted for strategy-making and priority-setting in science and innovation policy, the methodologies underpinning it still need further development. Key findings are the identification of challenges, aspects and issues related to management and facilitation...

  19. The Strategic Process in Organisations

    DEFF Research Database (Denmark)

    Sørensen, Lene; Vidal, Rene Victor Valqui

    1999-01-01

    Organisational strategy development is often conceptualised through methodological frameworks. In this paper strategy development is seen as a strategic process characterised by inherent contradictions between actors, OR methods and the problem situation. The paper presents the dimensions...

  20. Strategic Leadership of Corporate Sustainability

    DEFF Research Database (Denmark)

    Strand, Robert

    2014-01-01

    Strategic leadership and corporate sustainability have recently come together in conspicuously explicit fashion through the emergence of top management team (TMT) positions with dedicated corporate sustainability responsibilities. These TMT positions, commonly referred to as 'Chief Sustainability......? What effects do corporate sustainability TMT positions have at their organizations? We consider these questions through strategic leadership and neoinstitutional theoretical frameworks. Through the latter, we also engage with Weberian considerations of bureaucracy. We find that the reasons why......

  1. Executive presence for strategic influence.

    Science.gov (United States)

    Shirey, Maria R

    2013-01-01

    This department highlights change management strategies that may be successful in strategically planning and executing organizational change initiatives. With the goal of presenting practical approaches helpful to nurse leaders advancing organizational change, content includes evidence-based projects, tools, and resources that mobilize and sustain organizational change initiatives. In this article, the author discusses cultivating executive presence, a crucial component of great leadership, needed for strategic influence and to drive change.

  2. STRATEGIC ALLIANCES – THEIR DEFINITION AND FORMATION

    OpenAIRE

    Kinderis, Remigijus; Jucevičius, Giedrius

    2013-01-01

    The article presents an analysis of the definition of strategic alliances and research on the strategic alliance concept; furthermore, it focuses on the contingent hierarchy of alliances. The motives for strategic alliance formation, their categories and groups, and their benefits for business are revealed in this article. Special attention is paid to the process of strategic alliance formation and to the analysis of factors that influence the formation of strategic alliances...

  3. 2011 Computation Directorate Annual Report

    Energy Technology Data Exchange (ETDEWEB)

    Crawford, D L

    2012-04-11

    From its founding in 1952 until today, Lawrence Livermore National Laboratory (LLNL) has made significant strategic investments to develop high performance computing (HPC) and its application to national security and basic science. Now, 60 years later, the Computation Directorate and its myriad resources and capabilities have become a key enabler for LLNL programs and an integral part of the effort to support our nation's nuclear deterrent and, more broadly, national security. In addition, the technological innovation HPC makes possible is seen as vital to the nation's economic vitality. LLNL, along with other national laboratories, is working to make supercomputing capabilities and expertise available to industry to boost the nation's global competitiveness. LLNL is on the brink of an exciting milestone with the 2012 deployment of Sequoia, the National Nuclear Security Administration's (NNSA's) 20-petaFLOP/s resource that will apply uncertainty quantification to weapons science. Sequoia will bring LLNL's total computing power to more than 23 petaFLOP/s, all brought to bear on basic science and national security needs. The computing systems at LLNL provide game-changing capabilities. Sequoia and other next-generation platforms will enable predictive simulation in the coming decade and leverage industry trends, such as massively parallel and multicore processors, to run petascale applications. Efficient petascale computing necessitates refining accuracy in materials property data, improving models for known physical processes, identifying and then modeling missing physics, quantifying uncertainty, and enhancing the performance of complex models and algorithms in macroscale simulation codes. Nearly 15 years ago, NNSA's Accelerated Strategic Computing Initiative (ASCI), now called the Advanced Simulation and Computing (ASC) Program, was the critical element needed to shift from test-based confidence to science-based confidence.

  4. Aerodynamics in arbitrarily accelerating frames: application to high-g turns

    CSIR Research Space (South Africa)

    Gledhill, Irvy MA

    2010-09-01

    Fifth-generation missiles accelerate up to 100 g in turns, and higher accelerations are expected as agility increases. The authors have developed the theory of aerodynamics for arbitrary accelerations, and have validated the modelling in a Computational...

  5. TOPSIS Method for Determining The Priority of Strategic Training Program

    Directory of Open Access Journals (Sweden)

    Rohmatulloh Rohmatulloh

    2014-01-01

    The voice of stakeholders is an important issue for government and public organizations, and it becomes an input in designing strategic programs. Decision makers should evaluate priorities to determine importance levels. The decision making process is a complex problem because it is influenced by many criteria. The purpose of this study is to solve this multi-criteria decision making problem using the TOPSIS method, which is proposed due to its easy and simple computation process. The case example is determining the priority of strategic training programs in the energy and mineral resources field. TOPSIS analysis may be able to assist decision makers in allocating resources for the preparation of strategic training programs in accordance with the priorities.
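
    As a sketch of the computation the abstract praises for its simplicity, a minimal TOPSIS ranking fits in a few lines; the matrix, weights and criteria below are invented for illustration and are not the study's data.

    ```python
    import numpy as np

    def topsis(X, w, benefit):
        """Rank m alternatives on n criteria with TOPSIS.

        X       -- (m x n) decision matrix
        w       -- criteria weights summing to 1
        benefit -- boolean mask, True where larger values are better
        """
        R = X / np.linalg.norm(X, axis=0)          # vector-normalize columns
        V = R * w                                  # weighted normalized matrix
        ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
        anti  = np.where(benefit, V.min(axis=0), V.max(axis=0))
        d_pos = np.linalg.norm(V - ideal, axis=1)  # distance to ideal point
        d_neg = np.linalg.norm(V - anti, axis=1)   # distance to anti-ideal
        return d_neg / (d_pos + d_neg)             # closeness: higher is better

    # Three hypothetical training programs scored on cost (lower is better),
    # stakeholder demand and strategic fit (higher is better).
    X = np.array([[250.0, 8.0, 7.0],
                  [180.0, 6.0, 9.0],
                  [300.0, 9.0, 6.0]])
    scores = topsis(X, w=np.array([0.4, 0.3, 0.3]),
                    benefit=np.array([False, True, True]))
    print(np.argsort(scores)[::-1])  # indices in priority order
    ```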

  6. Maintenance Process Strategic Analysis

    Science.gov (United States)

    Jasiulewicz-Kaczmarek, M.; Stachowiak, A.

    2016-08-01

    The performance and competitiveness of manufacturing companies depend on the availability, reliability and productivity of their production facilities. Low productivity, downtime, and poor machine performance are often linked to inadequate plant maintenance, which in turn can lead to reduced production levels, increased costs, lost market opportunities, and lower profits. These pressures have given firms worldwide the motivation to explore and embrace proactive maintenance strategies over the traditional reactive firefighting methods. The traditional view of maintenance has shifted into an overall view that encompasses Overall Equipment Efficiency, Stakeholder Management and Life Cycle Assessment. From a practical point of view, this requires changes in the approach to maintenance taken by managers and changes in the actions performed within the maintenance area. Managers have to understand that maintenance is not only about repairs and conservation of machines and devices, but also about actions striving for more efficient resource management and care for the safety and health of employees. The purpose of this work is to present a strategic analysis based on SWOT analysis to identify the opportunities and strengths of the maintenance process, to benefit from them as much as possible, as well as to identify weaknesses and threats, so that they can be eliminated or minimized.

  7. CINT 2020 Strategic Plan

    Energy Technology Data Exchange (ETDEWEB)

    Shinn, Neal D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2014-12-01

    CINT’s role is to enable world-leading science towards realizing these benefits, and our strategic objectives describe what is needed to deliver on this promise. As a vibrant partnership between Los Alamos National Laboratory (LANL) and Sandia National Laboratories (SNL), CINT leverages the unmatched scientific and engineering expertise of our host DOE Laboratories in an Office of Science open-access user facility to benefit hundreds of researchers annually. We have world-leading scientific expertise in four thrust areas, as described in section 1, and specialized capabilities to create, characterize and understand nanomaterials in increasingly complex integrated environments. Building upon these current strengths, we identify some of the capabilities and expertise that the nanoscience community will need in the future and that CINT is well positioned to develop and offer as a user facility. These include an expanding portfolio of our signature Discovery Platforms that can be used alone or as sophisticated “experiments within an experiment”; novel synthetic approaches for exquisitely heterostructured nanowires, nanoparticles and quasi-two-dimensional materials; ultra-high resolution spectroscopic techniques of nanomaterial dynamics; in situ microscopies that provide real-time, spatially-resolved structure/property information for increasingly complex materials systems; advanced simulation techniques for integrated nanomaterials; and multi-scale theory for interfaces and dynamics.

  8. Making strategic choices.

    Science.gov (United States)

    1993-01-01

    Many decision factors enter into making the right strategic choices in today's healthcare environment. Physicians have their perspective. Hospital managers may have a different perspective. Having worked on both sides of the equation, this author suggests that the successful design of an integrated healthcare system will depend on the ability of each to understand the other's agenda. Physician needs aren't necessarily incompatible with those of an organization and vice versa. Cultural differences aren't necessarily cast in stone. If hospital managers can develop an understanding of the physicians' decision factors and tailor a program around the key issues, there will be a greater likelihood of success. At the same time, physicians who are considering an alignment with a health system will need to understand what it will take to make the organization successful. The personal futures of these physicians may be at stake once they are in an integrated relationship. Finally, integrated systems will have new decision criteria for formulating strategy. Both sides should look forward to addressing mutual interests in creative ways.

  9. Anderson Acceleration for Fixed-Point Iterations

    Energy Technology Data Exchange (ETDEWEB)

    Walker, Homer F. [Worcester Polytechnic Institute, MA (United States)

    2015-08-31

    The purpose of this grant was to support research on acceleration methods for fixed-point iterations, with applications to computational frameworks and simulation problems that are of interest to DOE.
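
    For readers unfamiliar with the technique, here is a minimal textbook-style sketch of Anderson acceleration (an assumed generic formulation, not code from the grant): the last few residuals are combined through a small least-squares solve to extrapolate a better iterate.

    ```python
    import numpy as np

    def anderson(g, x0, m=3, iters=50, tol=1e-10):
        """Anderson acceleration for the fixed-point iteration x = g(x),
        keeping a window of the last m+1 iterates."""
        x = np.atleast_1d(np.asarray(x0, dtype=float))
        X, G = [], []                            # iterate and g-value history
        for _ in range(iters):
            gx = np.atleast_1d(g(x))
            if np.linalg.norm(gx - x) < tol:
                break
            X.append(x); G.append(gx)
            X, G = X[-(m + 1):], G[-(m + 1):]
            if len(X) == 1:
                x = gx                           # plain fixed-point step
                continue
            F = np.column_stack([Gk - Xk for Xk, Gk in zip(X, G)])  # residuals
            dF = F[:, 1:] - F[:, :-1]
            # Solve min ||f_k - dF @ gamma||, then map gamma back to the
            # mixing coefficients alpha with sum(alpha) = 1.
            gamma, *_ = np.linalg.lstsq(dF, F[:, -1], rcond=None)
            alpha = np.zeros(F.shape[1]); alpha[-1] = 1.0
            alpha[:-1] += gamma; alpha[1:] -= gamma
            x = np.column_stack(G) @ alpha       # extrapolated iterate
        return x

    print(anderson(np.cos, x0=[1.0]))            # converges to ~0.7390851
    ```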

  10. Accelerated shallow water modeling

    Science.gov (United States)

    Gandham, Rajesh; Medina, David; Warburton, Timothy

    2015-04-01

    In this talk we will describe our ongoing developments in accelerated numerical methods for modeling tsunamis and oceanic fluid flows using a two dimensional shallow water model and/or a three dimensional incompressible Navier-Stokes model discretized with high order discontinuous Galerkin methods. High order discontinuous Galerkin methods can be computationally demanding, requiring extensive computational time to simulate real time events on traditional CPU architectures. However, recent advances in computing architectures and hardware-aware algorithms make it possible to reduce simulation time and provide accurate predictions in a timely manner. Hence we tailor these algorithms to take advantage of the single instruction multiple data (SIMD) architectures seen in modern many-core compute devices such as GPUs. We will discuss our unified and extensible many-core programming library OCCA, which alleviates the need to completely re-design the solvers to keep up with constantly evolving parallel programming models and hardware architectures. We will present performance results for the flow simulations, demonstrating performance leveraging multiple different multi-threading APIs on GPU and CPU targets.

  11. CLOUD COMPUTING AND SECURITY

    Directory of Open Access Journals (Sweden)

    Asharani Shinde

    2015-10-01

    This document gives an insight into cloud computing, giving an overview of key features as well as a detailed study of how cloud computing works. Cloud computing lets you access all your applications and documents from anywhere in the world, freeing you from the confines of the desktop and making it easier for group members in different locations to collaborate. Cloud computing can certainly bring about strategic, transformational and even revolutionary benefits fundamental to future enterprise computing, but it also offers immediate and pragmatic opportunities to improve efficiencies today while cost-effectively and systematically setting the stage for strategic change. As this technology makes computing, sharing and networking easy and interesting, we should also think about the security and privacy of information. The key points discussed are: what the cloud is, its key features, current applications, future status, and security issues and their possible solutions.

  12. A Study on the Effect of the Strategic Intelligence on Decision Making and Strategic Planning

    OpenAIRE

    Mahmoud Reza Esmaili

    2014-01-01

    The present research aims to identify the factors affecting strategic intelligence, strategic decision making and strategic planning, and to study the effect of strategic intelligence on strategic decision making and strategic planning in organizations and companies using intelligence systems in the city of Khorram-abad. According to the results, this study is an analytical survey. The statistical population for the research is the companies and organiza...

  13. Whole scale change for real-time strategic application in complex health systems.

    Science.gov (United States)

    Shirey, Maria R; Calarco, Margaret M

    2014-11-01

    This department highlights change management strategies that may be successful in strategically planning and executing organizational change initiatives. In this article, the authors introduce Whole Scale Change™, an action learning approach that accelerates organizational transformation to meet the challenges of dynamic environments.

  14. Children's strategic theory of mind.

    Science.gov (United States)

    Sher, Itai; Koenig, Melissa; Rustichini, Aldo

    2014-09-16

    Human strategic interaction requires reasoning about other people's behavior and mental states, combined with an understanding of their incentives. However, the ontogenic development of strategic reasoning is not well understood: At what age do we show a capacity for sophisticated play in social interactions? Several lines of inquiry suggest an important role for recursive thinking (RT) and theory of mind (ToM), but these capacities leave out the strategic element. We posit a strategic theory of mind (SToM) integrating ToM and RT with reasoning about incentives of all players. We investigated SToM in 3- to 9-y-old children and adults in two games that represent prevalent aspects of social interaction. Children anticipate deceptive and competitive moves from the other player and play both games in a strategically sophisticated manner by 7 y of age. One game has a pure strategy Nash equilibrium: In this game, children achieve equilibrium play by the age of 7 y on the first move. In the other game, with a single mixed-strategy equilibrium, children's behavior moved toward the equilibrium with experience. These two results also correspond to two ways in which children's behavior resembles adult behavior in the same games. In both games, children's behavior becomes more strategically sophisticated with age on the first move. Beyond the age of 7 y, children begin to think about strategic interaction not myopically, but in a farsighted way, possibly with a view to cooperating and capitalizing on mutual gains in long-run relationships.

  15. OpenMP for Accelerators

    Energy Technology Data Exchange (ETDEWEB)

    Beyer, J C; Stotzer, E J; Hart, A; de Supinski, B R

    2011-03-15

    OpenMP [13] is the dominant programming model for shared-memory parallelism in C, C++ and Fortran due to its easy-to-use directive-based style, portability and broad support by compiler vendors. Similar characteristics are needed for a programming model for devices such as GPUs and DSPs that are gaining popularity to accelerate compute-intensive application regions. This paper presents extensions to OpenMP that provide that programming model. Our results demonstrate that a high-level programming model can provide accelerated performance comparable to hand-coded implementations in CUDA.

  16. Development of high quality electron beam accelerator

    Energy Technology Data Exchange (ETDEWEB)

    Kando, Masaki; Dewa, Hideki; Kotaki, Hideyuki; Kondo, Shuji; Hosokai, Tomonao; Kanazawa, Shuhei; Yokoyama, Takashi; Nakajima, Kazuhisa [Advanced Photon Research Center, Kansai Research Establishment, Japan Atomic Energy Research Institute, Kizu, Kyoto (Japan)

    2000-03-01

    A design study of a high quality electron beam accelerator is described. This accelerator will be used for second generation experiments on laser wakefield acceleration, short x-ray generation, and other experiments on the interaction of high intensity lasers with an electron beam at the Advanced Photon Research Center, Kansai Research Establishment, Japan Atomic Energy Research Institute. The system consists of a photocathode rf gun and a race-track microtron (RTM). To combine these two components, injection and extraction beamlines are designed employing transfer matrices and computer codes. The present status of the accelerator system is also presented. (author)
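
    The transfer-matrix approach mentioned above composes a 2x2 matrix per element and per transverse plane. The sketch below is a toy illustration with invented lengths and focal length, not the actual RTM injection-line design.

    ```python
    import numpy as np

    def drift(L):
        """Transfer matrix of a field-free drift of length L (metres)."""
        return np.array([[1.0, L], [0.0, 1.0]])

    def thin_lens(f):
        """Thin-lens focusing element of focal length f (metres)."""
        return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

    # Toy beamline: 0.5 m drift, lens, 0.5 m drift; matrices compose
    # right-to-left along the beam direction.
    M = drift(0.5) @ thin_lens(0.25) @ drift(0.5)
    x0 = np.array([1e-3, 0.0])    # 1 mm transverse offset, zero slope at entry
    print(M @ x0)                 # position and slope at the line exit
    ```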

  17. The effects of strategic decision making structure and computerization on organizational performance

    Energy Technology Data Exchange (ETDEWEB)

    Hoffman, J.J. [Florida State Univ., Tallahassee (United States); Carter, N.M. [Marquette Univ., Milwaukee, WI (United States); Cullen, J.B. [Washington State Univ., Pullman, WA (United States)

    1993-12-31

    This study investigates whether the fit between computerization and strategic decision making predicts organizational performance. Results suggest that increased computerization along with the decentralization of strategic decisions leads to greater performance than increased computer usage and the centralization of strategic decisions. Findings indicate that the effects of a manager's decision to computerize operations and change the organization's strategic decision making structure are not immediately felt. These findings support the idea that performance needs to be measured on a long-term basis rather than on a short-term basis. The results also suggest that it may be erroneous to evaluate managers strictly on a short-term basis, since the effects of strategic decisions may not be fully felt for one or more years. 64 refs.

  18. Systems 2020: Strategic Initiative

    Science.gov (United States)

    2010-08-29

  19. Accelerated large-scale multiple sequence alignment

    Directory of Open Access Journals (Sweden)

    Lloyd Scott

    2011-12-01

    Background: Multiple sequence alignment (MSA) is a fundamental analysis method used in bioinformatics and many comparative genomic applications. Prior MSA acceleration attempts with reconfigurable computing have only addressed the first stage of progressive alignment and consequently exhibit performance limitations according to Amdahl's Law. This work is the first known to accelerate the third stage of progressive alignment on reconfigurable hardware. Results: We reduce subgroups of aligned sequences into discrete profiles before they are pairwise aligned on the accelerator. Using an FPGA accelerator, an overall speedup of up to 150 has been demonstrated on a large data set when compared to a 2.4 GHz Core2 processor. Conclusions: Our parallel algorithm and architecture accelerates large-scale MSA with reconfigurable computing and allows researchers to solve the larger problems that confront biologists today. Program source is available from http://dna.cs.byu.edu/msa/.
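
    The profile reduction named in the Results is easy to picture: each pre-aligned subgroup collapses to a column-wise frequency matrix, and only the profiles are then pairwise aligned. A small illustrative sketch follows (not the authors' FPGA implementation):

    ```python
    import numpy as np

    ALPHABET = "ACGT-"

    def profile(aligned):
        """Reduce pre-aligned sequences to column-wise symbol frequencies."""
        counts = np.zeros((len(aligned[0]), len(ALPHABET)))
        for seq in aligned:
            for i, ch in enumerate(seq):
                counts[i, ALPHABET.index(ch)] += 1
        return counts / len(aligned)

    # Each subgroup becomes one profile; the accelerator then aligns the
    # two profiles instead of every individual sequence pair.
    print(profile(["ACG-T", "AC-GT"]))
    ```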

  20. An Innovative Method for Evaluating Strategic Goals in a Public Agency: Conservation Leadership in the U.S. Forest Service

    Science.gov (United States)

    David N. Bengston; David P. Fan

    1999-01-01

    This article presents an innovative methodology for evaluating strategic planning goals in a public agency. Computer-coded content analysis was used to evaluate attitudes expressed in about 28,000 on-line news media stories about the U.S. Department of Agriculture Forest Service and its strategic goal of conservation leadership. Three dimensions of conservation...

  1. Navigation Satellite Velocity and Acceleration Computation: Methods and Accuracy Analysis

    Institute of Scientific and Technical Information of China (English)

    李显; 吴美平; 张开东; 曹聚亮; 黄杨明

    2012-01-01

    A systematic analysis of the different methods for calculating the velocities and accelerations of navigation satellites is made, including (1) the closed analytical method based on broadcast ephemeris, (2) the numerical differencing method based on position series of the satellite, and (3) the analytical differencing method based on position series of the satellite. First, analytical expressions are deduced from the broadcast ephemeris; three types of broadcast ephemeris, including Kepler elements, GE~, and position-velocity type, are discussed. The precision comparison supports the following conclusions: (1) the accuracy of velocity and acceleration derived from broadcast ephemeris is relatively low and cannot satisfy high-precision applications such as airborne gravimetric measurement; (2) the acceleration accuracy is higher when derived from position-velocity broadcast ephemeris, while the Kepler type gives higher velocity accuracy; (3) the orbit height is one of the factors determining computation precision. Then, analytical differencing and numerical differencing based on precise ephemeris are analyzed and compared; the results show that although the analytical method is more efficient, its velocity precision is lower because the orbit model fitted from a short-term position series is inaccurate, while its acceleration precision is comparable to that of the numerical differencing method. Finally, a static experiment is conducted with data from two CORS (continuously operating reference station) stations to evaluate and compare the computation accuracy of the methods above.
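
    The numerical differencing method in (2) is straightforward to sketch. Assuming a uniformly sampled position series, central differences give velocity and acceleration at the interior epochs (an illustration only; the paper also treats analytical differencing and broadcast-ephemeris formulas):

    ```python
    import numpy as np

    def velocity_acceleration(t, r):
        """Velocity and acceleration from a position series by central
        differencing; t is (N,) seconds, r is (N, 3) positions in metres."""
        dt = t[1] - t[0]
        v = (r[2:] - r[:-2]) / (2.0 * dt)              # first difference
        a = (r[2:] - 2.0 * r[1:-1] + r[:-2]) / dt**2   # second difference
        return v, a                                     # valid at t[1:-1]

    # Toy circular orbit sampled every 30 s (invented numbers).
    t = np.arange(0.0, 600.0, 30.0)
    w = 2.0 * np.pi / 43200.0                           # ~12 h period
    r = 2.66e7 * np.column_stack([np.cos(w * t), np.sin(w * t),
                                  np.zeros_like(t)])
    v, a = velocity_acceleration(t, r)
    ```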

  2. Computational thinking and thinking about computing.

    Science.gov (United States)

    Wing, Jeannette M

    2008-10-28

    Computational thinking will influence everyone in every field of endeavour. This vision poses a new educational challenge for our society, especially for our children. In thinking about computing, we need to be attuned to the three drivers of our field: science, technology and society. Accelerating technological advances and monumental societal demands force us to revisit the most basic scientific questions of computing.

  3. VLHC accelerator physics

    Energy Technology Data Exchange (ETDEWEB)

    Michael Blaskiewicz et al.

    2001-11-01

    A six-month design study for a future high energy hadron collider was initiated by the Fermilab director in October 2000. The request was to study a staged approach where a large circumference tunnel is built that initially would house a low field (~2 T) collider with center-of-mass energy greater than 30 TeV and a peak (initial) luminosity of 10^34 cm^-2 s^-1. The tunnel was to be scoped, however, to support a future upgrade to a center-of-mass energy greater than 150 TeV with a peak luminosity of 2 x 10^34 cm^-2 s^-1 using high field (~10 T) superconducting magnet technology. In a collaboration with Brookhaven National Laboratory and Lawrence Berkeley National Laboratory, a report of the Design Study was produced by Fermilab in June 2001. The Design Study focused on a Stage 1, 20 x 20 TeV collider using a 2-in-1 transmission line magnet and leads to a Stage 2, 87.5 x 87.5 TeV collider using 10 T Nb3Sn magnet technology. The article that follows is a compilation of accelerator physics designs and computational results which contributed to the Design Study. Many of the parameters found in this report evolved during the study, and thus slight differences between this text and the Design Study report can be found. The present text, however, presents the major accelerator physics issues of the Very Large Hadron Collider as examined by the Design Study collaboration and provides a basis for discussion and further studies of VLHC accelerator parameters and design philosophies.

  4. Managing transdisciplinarity in strategic foresight

    DEFF Research Database (Denmark)

    Rasmussen, Birgitte; Andersen, Per Dannemand; Borch, Kristian

    2010-01-01

    Strategic foresight deals with the long term future and is a transdisciplinary exercise which, among other aims, addresses the prioritization of science and other decision making in science and innovation advisory and funding bodies. This article discusses challenges in strategic foresight...... in relation to transdisciplinarity based on empirical as well as theoretical work in technological domains. By strategic foresight is meant future oriented, participatory consultation of actors and stakeholders, both within and outside a scientific community. It therefore allows multiple stakeholders...... to negotiate over how to attain a desirable future. This requires creative thinking from the participants, who need to extend their knowledge into the uncertainty of the future. Equally important is skilled facilitating in order to create a space for dialogue and exploration in a contested territory. Although...

  5. Strategic planning in healthcare organizations.

    Science.gov (United States)

    Rodríguez Perera, Francisco de Paula; Peiró, Manel

    2012-08-01

    Strategic planning is a completely valid and useful tool for guiding all types of organizations, including healthcare organizations. The organizational level at which the strategic planning process is relevant depends on the unit's size, its complexity, and the differentiation of the service provided. A cardiology department, a hemodynamic unit, or an electrophysiology unit can be an appropriate level, as long as their plans align with other plans at higher levels. The leader of each unit is the person responsible for promoting the planning process, a core and essential part of his or her role. The process of strategic planning is programmable, systematic, rational, and holistic and integrates the short, medium, and long term, allowing the healthcare organization to focus on relevant and lasting transformations for the future.

  6. NATO's Strategic Partnership with Ukraine

    DEFF Research Database (Denmark)

    Breitenbauch, Henrik Ø.

    2014-01-01

    Russian actions in Ukraine have altered the security landscape in Europe, highlighting a renewed emphasis on the differences between members and non-members. In this context, NATO must a) create a strategic understanding of partnerships as something that can be transformative, even if it will not lead to membership in the short or even long term, and b) build such a strategic relationship with Ukraine. In sum, the Russian-induced Ukraine crisis should spur the reform of NATO partnerships – with Ukraine as a case in point.

  7. Limited rationality and strategic interaction

    DEFF Research Database (Denmark)

    Fehr, Ernst; Tyran, Jean-Robert

    2008-01-01

    Much evidence suggests that people are heterogeneous with regard to their abilities to make rational, forward-looking decisions. This raises the question as to when the rational types are decisive for aggregate outcomes and when the boundedly rational types shape aggregate results. We examine...... this question in the context of a long-standing and important economic problem: the adjustment of nominal prices after an anticipated monetary shock. Our experiments suggest that two types of bounded rationality – money illusion and anchoring – are important behavioral forces behind nominal inertia. However......, depending on the strategic environment, bounded rationality has vastly different effects on aggregate price adjustment. If agents' actions are strategic substitutes, adjustment to the new equilibrium is extremely quick, whereas under strategic complementarity, adjustment is both very slow and associated...

  8. The Test of Strategic Culture

    DEFF Research Database (Denmark)

    Dalgaard-Nielsen, Anja

    2005-01-01

    Germany was the first country to issue a categorical refusal to support the US-led war in Iraq. Some have interpreted this as the result of a clash between the strategic cultures of Germany and the USA, others as a sign that a more nationalistic and assertive Germany is emerging. This article...... explains the apparently contradictory aspects of Germany’s stance on Iraq by identifying two competing strands within Germany’s strategic culture. It concludes that the German refusal signals neither a reversion to a pacifist stance nor that Germany is in a process of shedding the bonds and alliances...... that have so far framed the reunified Germany’s military policy. Iraq simply showed that Germany, like most other countries, has conditions that have to be met – in Germany’s case, conditions flowing from the coexistence of two competing schools of thought within Germany’s strategic culture....

  9. Likelihood Analysis of the Local Group Acceleration

    CERN Document Server

    Schmoldt, I M; Teodoro, L; Efstathiou, G P; Frenk, C S; Keeble, O; Maddox, S J; Oliver, S; Rowan-Robinson, M; Saunders, W J; Sutherland, W; Tadros, H; White, S D M

    1999-01-01

    We compute the acceleration on the Local Group using 11206 IRAS galaxies from the recently completed all-sky PSCz redshift survey. Measuring the acceleration vector in redshift space generates systematic uncertainties due to the redshift space distortions in the density field. We therefore assign galaxies to their real space positions by adopting a non-parametric model for the velocity field that solely relies on the linear gravitational instability and linear biasing hypotheses. Remaining systematic contributions to the measured acceleration vector are corrected for by using PSCz mock catalogues from N-body experiments. The resulting acceleration vector points approx. 15 degrees away from the CMB dipole apex, with a remarkable alignment between small and large scale contributions. A considerable fraction of the measured acceleration is generated within 40 h-1 Mpc with a non-negligible contribution from scales between 90 and 140 h-1 Mpc after which the acceleration amplitude seems to have converged. The local...
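
    The shape of the estimator is worth spelling out: the acceleration on the origin is a weighted dipole sum over the catalogue, g ∝ Σ w_i r_i / |r_i|^3, with weights correcting for the survey selection function. The sketch below shows only this skeleton with mock data; the paper's actual analysis additionally reconstructs real-space positions and calibrates systematics with mock catalogues.

    ```python
    import numpy as np

    def acceleration_dipole(positions, weights):
        """Gravity-dipole estimator g ~ sum_i w_i r_i / |r_i|^3."""
        r = np.linalg.norm(positions, axis=1, keepdims=True)
        return np.sum(weights[:, None] * positions / r**3, axis=0)

    rng = np.random.default_rng(0)
    pos = rng.normal(size=(11206, 3)) * 100.0   # mock positions (h^-1 Mpc)
    g = acceleration_dipole(pos, weights=np.ones(len(pos)))
    print(g / np.linalg.norm(g))                # direction of the dipole
    ```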

  10. Hadron accelerators for radiotherapy

    Science.gov (United States)

    Owen, Hywel; MacKay, Ranald; Peach, Ken; Smith, Susan

    2014-04-01

    Over the last twenty years the treatment of cancer with protons and light nuclei such as carbon ions has moved from being the preserve of research laboratories into widespread clinical use. A number of choices now exist for the creation and delivery of these particles, key amongst these being the adoption of pencil beam scanning using a rotating gantry; attention is now being given to what technologies will enable cheaper and more effective treatment in the future. In this article the physics and engineering used in these hadron therapy facilities is presented, and the research areas likely to lead to substantive improvements. The wider use of superconducting magnets is an emerging trend, whilst further ahead novel high-gradient acceleration techniques may enable much smaller treatment systems. Imaging techniques to improve the accuracy of treatment plans must also be developed hand-in-hand with future sources of particles, a notable example of which is proton computed tomography.

  11. Contrasting strategic and Milan therapies.

    Science.gov (United States)

    MacKinnon, L

    1983-12-01

    Three related models of therapy are often grouped together as the strategic therapies. These are the brief therapy model associated with the Mental Research Institute, the approaches developed by Jay Haley and Cloë Madanes, and the model developed by the Milan associates. Controversy exists, however, as to whether the Milan model should be included as a strategic therapy. It appears that the similarities among the three models can mask deeper differences, thus compounding the confusion. This paper contrasts the models in their development, theory, and practice.

  12. Final Draft Strategic Marketing Plan.

    Energy Technology Data Exchange (ETDEWEB)

    United States. Bonneville Power Administration.

    1994-02-01

    The Bonneville Power Administration (BPA) has developed a marketing plan to define how BPA can be viable and competitive in the future, a result important to BPA's customers and constituents. The Marketing Plan represents the preferred customer outcomes, marketplace achievements, and competitive advantage required to accomplish the Vision and the Strategic Business Objectives of the agency. The Marketing Plan contributes to successful implementation of BPA's Strategic Business Objectives (SBOs) by providing common guidance to organizations and activities throughout the agency responsible for (1) planning, constructing, operating, and maintaining the Federal Columbia River Power System; (2) conducting business with BPA's customers; and (3) providing required internal support services.

  13. Issues in Strategic Decision Modelling

    CERN Document Server

    Jennings, Paula

    2008-01-01

    [Spreadsheet] Models are invaluable tools for strategic planning. Models help key decision makers develop a shared conceptual understanding of complex decisions, identify sensitivity factors and test management scenarios. Different modelling approaches are specialist areas in themselves. Model development can be onerous, expensive, time consuming, and often bewildering. It is also an iterative process where the true magnitude of the effort, time and data required is often not fully understood until well into the process. This paper explores the traditional approaches to strategic planning modelling commonly used in organisations and considers the application of a real-options approach to match and benefit from the increasing uncertainty in today's rapidly changing world.

  14. Characteristics of Useful and Practical Organizational Strategic Plans

    Science.gov (United States)

    Kaufman, Roger

    2014-01-01

    Most organizational strategic plans are not strategic but rather tactical or operational plans masquerading as "strategic." This article identifies the basic elements required in a useful and practical strategic plan and explains why they are important.

  15. Piezoelectric particle accelerator

    Energy Technology Data Exchange (ETDEWEB)

    Kemp, Mark A.; Jongewaard, Erik N.; Haase, Andrew A.; Franzi, Matthew

    2017-08-29

    A particle accelerator is provided that includes a piezoelectric accelerator element, where the piezoelectric accelerator element includes a hollow cylindrical shape, and an input transducer, where the input transducer is disposed to provide an input signal to the piezoelectric accelerator element, where the input signal induces a mechanical excitation of the piezoelectric accelerator element, where the mechanical excitation is capable of generating a piezoelectric electric field proximal to an axis of the cylindrical shape, where the piezoelectric accelerator is configured to accelerate a charged particle longitudinally along the axis of the cylindrical shape according to the piezoelectric electric field.

  16. Strategic Planning and Strategic Thinking Clothed in STRATEGO

    Science.gov (United States)

    Baaki, John; Moseley, James L.

    2011-01-01

    This article shares experiences that participants had playing the game of STRATEGO and how the activity may be linked to strategic planning and thinking. Among the human performance technology implications of playing this game are that gamers agreed on a framework for rules, took stock on where they wanted to go in the future, and generated a risk…

  17. The Strategic Data Project's Strategic Performance Indicators

    Science.gov (United States)

    Page, Lindsay C.; Fullerton, Jon; Bacher-Hicks, Andrew; Owens, Antoniya; Cohodes, Sarah R.; West, Martin R.; Glover, Sarah

    2013-01-01

    Strategic Performance Indicators (SPIs) are summary measures derived from parallel, descriptive analyses conducted across educational agencies. The SPIs are designed to inform agency management and efforts to improve student outcomes. We developed the SPIs to reveal patterns common across partner agencies, to highlight exceptions to those…

  18. The strategic labor allocation proces : a model of strategic HRM

    NARCIS (Netherlands)

    Bax, Erik H.

    2002-01-01

    In this article the Strategic Labor Allocation Process model (SLAP) is described. The model relates HR-strategies to structure, culture and task technology, to HR-policies like recruitment, appraisal and rewarding, to business strategy, and to socio-cultural, economic, institutional and technological